Big Chemical Encyclopedia


Outer-approximation

Outer Approximation Decomposition Methods Again, we consider the MINLP (3-112) with convex f(x) and g(x). Note that the NLP with binary variables fixed at y... [Pg.68]

Compared to nonlinear branch and bound, the outer approximation algorithm usually requires very few solutions of the MILP and NLP subproblems. This is especially advantageous on problems where the NLPs are large and expensive to solve. Moreover, there are two variations of outer approximation that may be suitable for particular problem types ... [Pg.69]

The extended cutting plane (ECP) algorithm [Westerlund and Pettersson, Computers and Chem. Engng. 19 S131 (1995)] is complementary to GBD. While the lower bounding problem in Fig. 3-62 remains essentially the same, the continuous variables xk are chosen from the MILP solution and the NLP (3-113) is replaced by a simple evaluation of the objective and constraint functions. As a result, only MILP problems [(3-116) plus integer cuts] need be solved. Consequently, the ECP approach has weaker upper bounds than outer approximation and requires more MILP solutions. It has advantages over outer approximation when the NLP (3-113) is expensive to solve. [Pg.69]
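The master-plus-evaluation pattern that ECP uses can be illustrated with a Kelley-style cutting-plane loop in a single continuous variable. This is a minimal sketch: the objective, the interval, and the tiny 1-D routine standing in for the MILP master are all illustrative assumptions, not taken from the text.

```python
import itertools

# ECP idea in one continuous variable: solve the linear master, then merely
# *evaluate* the objective at its solution (no NLP subproblem is solved).
# Toy problem (illustrative):  min f(x) = (x - 1)**2  over  x in [-2, 4]

f = lambda x: (x - 1.0) ** 2
df = lambda x: 2.0 * (x - 1.0)

def lp_min(cuts, lo, hi):
    """Minimize eta = max of the linear cuts a*x + b over [lo, hi].
    A 1-D stand-in for the MILP master: the optimum lies at an interval
    endpoint or at an intersection of two cuts."""
    candidates = [lo, hi]
    for (a1, b1), (a2, b2) in itertools.combinations(cuts, 2):
        if a1 != a2:
            xi = (b2 - b1) / (a1 - a2)
            if lo <= xi <= hi:
                candidates.append(xi)
    return min((max(a * x + b for a, b in cuts), x) for x in candidates)

cuts, xk, ub, x_best = [], 4.0, float("inf"), 4.0
for _ in range(50):
    if f(xk) < ub:                      # a plain function evaluation
        ub, x_best = f(xk), xk
    cuts.append((df(xk), f(xk) - df(xk) * xk))  # eta >= f(xk) + f'(xk)*(x - xk)
    lb, xk = lp_min(cuts, -2.0, 4.0)    # linear master gives the next trial point
    if ub - lb <= 1e-6:
        break

print(x_best, ub)  # converges to x = 1.0 with objective 0.0
```

Because the master accumulates cuts while the "NLP" work is only a function evaluation, the lower bounds tighten more slowly than in OA, matching the remark that ECP needs more MILP solutions.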

The MINLP-problems were implemented in GAMS [7, 8] and solved by the outer approximation/equality relaxation/augmented penalty-method [9] as implemented in DICOPT. The algorithm generates a series of NLP and MILP subproblems, which were solved by the generalized reduced gradient method [10] as implemented in CONOPT and the integrality relaxation based branch and cut method as... [Pg.155]

Outer Approximation/Equality Relaxation/Augmented Penalty... [Pg.156]

Viswanathan, J. and Grossmann, I.E. (1990) A combined penalty function and outer approximation method for MINLP optimization. Comput. Chem. Eng., 14, 769-782. [Pg.161]

Table 9.1 shows how outer approximation, as implemented in the DICOPT software, performs when applied to the process selection model in Example 9.3. Note that this model does not satisfy the convexity assumptions because its equality constraints are nonlinear. Still, DICOPT does find the optimal solution at iteration 3. Note, however, that the optimal MILP objective value at iteration 3 is 1.446, which is not an upper bound on the optimal MINLP value of 1.923 because the convexity conditions are violated. Hence the normal termination condition, that the difference between upper and lower bounds be less than some tolerance, cannot be used, and DICOPT may fail to find an optimal solution. Computational experience on nonconvex problems has shown that retaining the best feasible solution found thus far, and stopping when the objective value of the NLP subproblem fails to improve, often leads to an optimal solution. DICOPT stopped in this example because the NLP solution at iteration 4 is worse (lower) than that at iteration 3. [Pg.370]
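The nonconvex stopping heuristic described here can be sketched as a small helper. The objective sequence below is hypothetical, chosen only to mimic the improve-then-worsen pattern of Example 9.3; it is not DICOPT's actual iteration log.

```python
# Keep the best feasible solution found so far and stop at the first NLP
# subproblem whose objective fails to improve (maximization assumed).

def best_before_worsening(nlp_objectives):
    """Return (best value, iteration index), stopping at the first NLP
    solution that is worse than the best found so far."""
    best, best_it = nlp_objectives[0], 1
    for it, val in enumerate(nlp_objectives[1:], start=2):
        if val < best:            # worsened: heuristic termination
            break
        best, best_it = val, it
    return best, best_it

# Hypothetical sequence: improves through iteration 3, drops at iteration 4.
print(best_before_worsening([1.2, 1.7, 1.923, 1.446]))  # (1.923, 3)
```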

Generalized Benders decomposition (GBD), derived in Geoffrion (1972), is an algorithm that operates in a similar way to outer approximation and can be applied to MINLP problems. Like OA, when GBD is applied to models of the form (9.2)-(9.5), each major iteration is composed of the solution of two subproblems. At major iteration K one of these subproblems is NLP(y^K), given in Equations (9.6)-(9.7). This is an NLP in the continuous variables x, with y fixed at y^K. The other GBD subproblem is an integer linear program, as in OA, but it only involves the... [Pg.370]

The big-M formulation is often difficult to solve, and its difficulty increases as M increases. This is because the NLP relaxation of this problem (the problem in which the condition yt = 0 or 1 is replaced by yt between 0 and 1) is often weak; that is, its optimal objective value is often much less than the optimal value of the MINLP. An alternative to the big-M formulation is described in Lee and Grossmann (2000) using an NLP relaxation, which often has a much tighter bound on the optimal MINLP value. A branch-and-bound algorithm based on this formulation performed much better than a similar method applied to the big-M formulation. An outer approximation approach is also described by Lee and Grossmann (2000). [Pg.372]
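A toy calculation shows how a loose M weakens the relaxation bound. The model and numbers below are illustrative assumptions, not taken from Lee and Grossmann:

```python
# Toy model:  min 5*y - 2*x   s.t.  x <= M*y,  0 <= x <= 4,  y in {0, 1}
# MINLP optimum: y = 1, x = 4, objective -3.  The tightest valid M is 4.

def relaxation_bound(M):
    """LP relaxation with y in [0, 1].  For M >= 4 the relaxed optimum
    sets x = 4 and y = x/M (the closed form assumes M >= 4)."""
    x = 4.0
    y = min(1.0, x / M)
    return 5.0 * y - 2.0 * x

print(relaxation_bound(4))     # -3.0: tight, matches the MINLP optimum
print(relaxation_bound(1000))  # -7.98: far below the MINLP value of -3
```

The larger M is, the smaller the fractional y needed to "buy" the same x, so the relaxation bound drifts further from the true MINLP value; this is exactly the weakness that tighter disjunctive formulations avoid.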

Duran, M. A. and I. E. Grossmann. An Outer Approximation Algorithm for a Class of Mixed-Integer Nonlinear Programs. Math. Prog. 36: 307-339 (1986). [Pg.373]

Outer approximations (OA), discrete optimization via, 26 1024 [Pg.659]

Outer Approximation with Equality Relaxation, OA/ER (Kocis and Grossmann,... [Pg.112]

The Outer Approximation (OA) algorithm addresses problems with nonlinear inequalities and creates sequences of upper and lower bounds, as GBD does, but it has the distinct feature of using primal information, that is, the solutions of the upper bound problems, to linearize the objective and constraints around those points. The lower bounds in OA are based on the accumulation of the linearized objective function and constraints around the generated primal solution points. [Pg.113]

The Generalized Outer Approximation (GOA) algorithm extends OA to the MINLP problems of types (6.1), (6.2) and introduces exact penalty functions. [Pg.113]

In the subsequent sections, we will concentrate on the algorithms that are based on decomposition and outer approximation, that is, on 1., 3., 5., 6., 7., and 8. This focus of our study results from the existing evidence of excellent performance of the aforementioned decomposition-based and outer approximation algorithms compared to the branch and bound methods and the feasibility approach. [Pg.113]

In the sequel, we will discuss the v1-GBD for class 1 problems, since this by itself defines an interesting mathematical structure for which other algorithms (e.g., Outer Approximation) have been developed. [Pg.126]

Remark 3 Note that if, in addition to the separability of x and y, we assume that y participates linearly (i.e., the conditions for the Outer Approximation algorithm), then we have [Pg.128]

Duran and Grossmann (1986a; 1986b) proposed an Outer Approximation (OA) algorithm for the following class of MINLP problems ... [Pg.144]

The basic idea in OA is similar to the one in GBD; that is, at each iteration we generate an upper bound and a lower bound on the MINLP solution. The upper bound results from the solution of the primal problem, which is problem (6.13) with fixed y variables (e.g., y = y^k). The lower bound results from the solution of the master problem. The master problem is derived using primal information, which consists of the solution point x^k of the primal, and is based upon an outer approximation (linearization) of the nonlinear objective and constraints around the primal solution x^k. The solution of the master problem, in addition to the lower bound, provides information on the next set of fixed y variables (i.e., y = y^(k+1)) to be used in the next primal problem. As the iterations proceed, two sequences of updated upper bounds and lower bounds are generated, which are shown to be nonincreasing and nondecreasing, respectively. Then, it is shown that these two sequences converge within ε in a finite number of iterations. [Pg.145]
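The bound bookkeeping described above can be sketched on a toy convex MINLP. The model, its closed-form subproblem solutions, and the enumeration of the single binary variable in place of a genuine MILP solve are all illustrative assumptions, not the algorithm as stated in the text.

```python
# Toy convex MINLP:  min x**2 + y   s.t.  x + y >= 1.5,  0 <= x <= 3,  y in {0, 1}

def primal(y):
    """Primal NLP with y fixed: min x**2 + y over x >= 1.5 - y, 0 <= x <= 3."""
    x = max(0.0, 1.5 - y)
    return x, x * x + y

def master(cuts):
    """Master problem: for each y, minimize eta subject to the OA cuts
    eta >= xk**2 + 2*xk*(x - xk) + y.  All cut slopes are nonnegative here,
    so the inner LP attains its minimum at the smallest feasible x."""
    best = (float("inf"), None)
    for y in (0, 1):              # single binary enumerated instead of a MILP solve
        x = max(0.0, 1.5 - y)
        eta = max(xk * xk + 2 * xk * (x - xk) + y for xk in cuts)
        if eta < best[0]:
            best = (eta, y)
    return best

y, cuts, ub, lb = 0, [], float("inf"), -float("inf")
while ub - lb > 1e-9:
    xk, fk = primal(y)      # upper bounds: nonincreasing best-so-far
    ub = min(ub, fk)
    cuts.append(xk)         # linearize the objective around the primal point xk
    lb, y = master(cuts)    # lower bounds: nondecreasing; also picks the next y
print(ub, lb)               # both reach 1.25 at y = 1, x = 0.5
```

Running the loop, the upper bound drops from 2.25 (y = 0) to 1.25 (y = 1) while the lower bound rises from 0.25 to 1.25, at which point the two sequences meet and the iteration stops.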

The outer approximation of v(y) will be in terms of the intersection of an infinite set of supporting functions. These supporting functions correspond to linearizations of f(x) and g(x) at all x^k ∈ X. Then, the following conditions are satisfied ... [Pg.148]

Note that the linear supports are the tangents of f(x) at x^1, x^2, and x^3 and that they underestimate the objective function. Also, note that the accumulation of these linear underestimating supports results in a better approximation of the objective function f(x) from the outside (i.e., outer approximation). Notice also that the linear underestimators are valid because f(x) is convex in x. [Pg.150]
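A quick numeric check of this picture, with the convex function and the three linearization points chosen for illustration (they are not the book's):

```python
# Tangent supports of a convex f underestimate it everywhere; their
# pointwise maximum is the outer approximation.

f = lambda x: x ** 2
df = lambda x: 2.0 * x

points = [-1.0, 0.5, 2.0]          # stand-ins for x^1, x^2, x^3

def outer_approx(x):
    """Pointwise maximum of the linear supports f(xk) + f'(xk)*(x - xk)."""
    return max(f(xk) + df(xk) * (x - xk) for xk in points)

# Convexity makes every support a valid underestimator of f.
grid = [i / 10.0 for i in range(-30, 31)]
assert all(outer_approx(x) <= f(x) + 1e-12 for x in grid)
print(outer_approx(0.0))  # -0.25, strictly below f(0) = 0
```

Adding more linearization points can only raise the pointwise maximum, which is why accumulating supports tightens the approximation from the outside.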

As we have discussed in the outer approximation of v(y), Duran and Grossmann (1986a) made the assumption that y ∈ Y instead of y ∈ Y ∩ V and introduced the integer cut constraints which they claimed make their assumption valid. Fletcher and Leyffer (1994), however, presented a counterexample, which can be stated as follows ... [Pg.152]

Remark 2 The right-hand sides of the first three sets of constraints are the support functions that are represented as outer approximations (or linearizations) at the current solution point x^k of the primal problem. If condition C1 is satisfied, then these supports are valid underestimators and, as a result, the relaxed master problem provides a valid lower bound on the global solution of the MINLP problem. [Pg.160]

