Big Chemical Encyclopedia


Optimization convexity

This criterion summarizes all of the a priori knowledge that we are able to convey concerning the physical aspect of the flawed region. Unfortunately, neither the weak membrane model energy U2(f) nor the Beta law energy U1(f) is a convex function. Consequently, we need to implement a global optimization technique to reach the solution. Simulated annealing (SA) cannot be used here because its computational cost is prohibitive [9]. We have adopted a continuation method like the graduated non-convexity (GNC) algorithm [2]. [Pg.332]

In the sequel we shall study an optimal control problem. Let G ⊂ C(Ω) be a convex, bounded and closed set. Assume that f < 0 on Γ for each f ∈ G; in particular, this condition provides nonemptiness for Kf. Denote the solution of (2.131) by χf and introduce the cost functional... [Pg.110]

The resulting numerical prediction for the size-optimal cooling curve is shown in Figure 7.3. It predicts that, in order to maximize the final size of the S-crystals, the temperature should be held constant for a period at both the start and the end of the operation, with a convex curve in between. This reduces both the early and terminal supersaturation levels and so maximizes solute deposition on the S-crystals and their growth rather than that of the N-crystals. Thus, programmed cooling is strictly 'sub-optimal', but is nevertheless remarkably close to the optimum result in this case and is therefore a practical alternative (Jones, 1974). [Pg.198]

All of the interpretations of Theorem 4-11 given in Section 4.7 carry over immediately to the continuous output channel. The set of Eqs. (4-104) and (4-105) for finding the optimum input probabilities for a given p becomes virtually useless, but the objective is still a convex function of p, so that p can be optimized numerically. [Pg.240]

Notice that those distribution functions that satisfy Eq. (4-179) still constitute a convex set, so that optimization of the E(R) curve is still straightforward by numerical methods. It should be observed that the choice of an F(x) satisfying a constraint such as Eq. (4-179) defines an ensemble of codes; the individual codes in the ensemble will not necessarily satisfy the constraint. This is unimportant in practice: since each digit of each code word is chosen independently over the ensemble, it is most unlikely that the average power of a code will differ drastically from the average power of the ensemble. It is possible to combine the central limit theorem and the techniques used in the last two paragraphs of Section 4.7 to show that a code exists for which each code word satisfies... [Pg.242]

Let ||·|| denote the Euclidean norm and define yk = gk+1 − gk. Table I provides a chronological list of some choices for the CG update parameter. If the objective function is a strongly convex quadratic, then in theory, with an exact line search, all seven choices for the update parameter in Table I are equivalent. For a nonquadratic objective functional J (the ordinary situation in optimal control calculations), each choice of update parameter leads to different performance. A detailed discussion of the various CG methods is beyond the scope of this chapter; the reader is referred to Ref. [194] for a survey of CG methods. Here we only mention briefly that despite the strong convergence theory that has been developed for the Fletcher-Reeves method [195],... [Pg.83]
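As a concrete illustration (ours, not the chapter's; the function and parameter names are invented), the sketch below implements nonlinear CG with two classical update parameters, Fletcher-Reeves and Polak-Ribiere, the latter built from yk = gk+1 − gk:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, rule="FR", tol=1e-8, max_iter=500):
    """Nonlinear conjugate gradient with an Armijo backtracking line search.
    rule: "FR" (Fletcher-Reeves) or "PR" (Polak-Ribiere+) update parameter."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * g.dot(d) and alpha > 1e-12:
            alpha *= 0.5                     # backtrack until Armijo condition holds
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                        # y_k = g_{k+1} - g_k
        if rule == "FR":
            beta = g_new.dot(g_new) / g.dot(g)
        else:                                # PR+ rule, built from y_k
            beta = max(g_new.dot(y) / g.dot(g), 0.0)
        d = -g_new + beta * d
        if d.dot(g_new) >= 0:                # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Strongly convex quadratic test: f(x) = 0.5 x^T A x - b^T x, minimum where A x = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = nonlinear_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                      lambda x: A @ x - b, np.zeros(2))
print(x_star, np.linalg.solve(A, b))         # the two should agree
```

On the strongly convex quadratic test both rules behave essentially identically, as the theory quoted above predicts; on nonquadratic objectives their performance diverges.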

Figure 15 Comparison of theory and experiment for the fractionation of oligoadenylates on ion exchange materials. (a) Simulated chromatogram. (b) Observed chromatogram. An example of how theory is being used to attempt to optimize performance of ion exchange materials. The curve in (a) shows the nonlinear gradient development with a convex curvature. (Reproduced with permission of Elsevier Science from Baba, Y., Fukuda, M., and Yoza, N., J. Chromatogr., 458, 385, 1988.)
Now consider the influence of the inequality constraints on the optimization problem. The effect of inequality constraints is to reduce the size of the solution space that must be searched. However, the way in which the constraints bound the feasible region is important. Figure 3.10 illustrates the concept of convex and nonconvex regions. [Pg.42]

Whilst Example 3.1 is extremely simple, it illustrates a number of important points. If the optimization problem is completely linear, the solution space is convex and a global optimum solution can be generated. The optimum always occurs at an extreme point, as illustrated in Figure 3.12. The optimum cannot occur inside the feasible region; it must always be at the boundary, since for linear functions moving up the gradient always increases the objective function until a boundary wall is hit. [Pg.44]

The addition of inequality constraints complicates the optimization. These inequality constraints can form convex or nonconvex regions. If the region is nonconvex, the search can be attracted to a local optimum, even if the objective function is convex (for a minimization problem) or concave (for a maximization problem). If the set of inequality constraints is linear, the resulting region is always convex. [Pg.54]

As shown in Fig. 3-53, optimization problems that arise in chemical engineering can be classified in terms of continuous and discrete variables. For the former, nonlinear programming (NLP) problems form the most general case, and widely applied specializations include linear programming (LP) and quadratic programming (QP). An important distinction for NLP is whether the optimization problem is convex or nonconvex. A nonconvex NLP may have multiple local optima, and an important question is whether a global solution is required for the NLP. Another important distinction is whether the problem is assumed to be differentiable or not. [Pg.60]

Convex Cases of NLP Problems. Linear programs and quadratic programs are special cases of (3-85) that allow for more efficient solution, based on application of the KKT conditions (3-88) through (3-91). Because these are convex problems, any locally optimal solution is a global solution. In particular, if the objective and constraint functions in (3-85) are linear, then the following linear program (LP) results... [Pg.62]
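As a concrete instance (our own illustration; the coefficients are arbitrary and the problem is not the text's (3-85)), the following sketch solves a small LP with SciPy. Being convex, its locally optimal solution is global, and it lies at a vertex of the feasible region:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize 3x1 + 5x2 subject to x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0,
# posed as the equivalent minimization of -(3x1 + 5x2).
c = np.array([-3.0, -5.0])
A_ub = np.array([[1.0, 0.0],
                 [0.0, 2.0],
                 [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal vertex (2, 6) with objective value 36
```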

If the matrix Q is positive semidefinite (positive definite) when projected into the null space of the active constraints, then (3-98) is (strictly) convex and any solution of the QP is a global (and unique) minimum. Otherwise, local solutions exist for (3-98), and more extensive global optimization methods are needed to obtain the global solution. Like LPs, convex QPs can be solved in a finite number of steps. However, as seen in Fig. 3-57, these optimal solutions can lie on a vertex, on a constraint boundary, or in the interior. A number of active set strategies have been created that solve the KKT conditions of the QP and incorporate efficient updates of the active constraints. Popular methods include null space algorithms, range space methods, and Schur complement methods. As with LPs, QP problems can also be solved with interior point methods [see Wright (1996)]. [Pg.62]
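To make the connection to the KKT conditions concrete, the following sketch (ours; the problem data are arbitrary) solves an equality-constrained convex QP by assembling and solving its KKT system, which for this problem class reduces to a single linear solve:

```python
import numpy as np

# Equality-constrained convex QP:  min 0.5 x^T Q x + c^T x   s.t.  A x = b
# KKT conditions:  [Q  A^T] [x  ]   [-c]
#                  [A   0 ] [lam] = [ b]
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # positive definite, so strictly convex
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])          # single constraint x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
KKT = np.block([[Q, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]
print("x* =", x, " multiplier =", lam)
```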

This basic concept leads to a wide variety of global algorithms, with features that can exploit different problem classes. Bounding strategies relate to the calculation of upper and lower bounds. For the former, any feasible point or, preferably, a locally optimal point in the subregion can be used. For the lower bound, convex relaxations of the objective and constraint functions are derived, as illustrated below. [Pg.66]
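For instance (our illustration, not taken from the text), the bilinear term w = xy with x ∈ [xL, xU] and y ∈ [yL, yU] admits the standard McCormick envelope as its convex relaxation:

```latex
\begin{aligned}
w &\ge x^{L}y + x\,y^{L} - x^{L}y^{L}, \qquad
w \ge x^{U}y + x\,y^{U} - x^{U}y^{U},\\
w &\le x^{U}y + x\,y^{L} - x^{U}y^{L}, \qquad
w \le x^{L}y + x\,y^{U} - x^{L}y^{U}.
\end{aligned}
```

Replacing w = xy by these four linear inequalities yields a convex (here linear) relaxation whose minimum underestimates the original objective, supplying the required lower bound.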

Convex hull formulations of MILPs and MINLPs lead to relaxed problems that have much tighter lower bounds. This leads to the examination of far fewer nodes in the branch and bound tree. See Grossmann and Lee, Comput. Optim. Applic. 26 83 (2003) for more details. [Pg.69]

The concept of convexity is useful both in the theory and applications of optimization. We first define a convex set, then a convex function, and lastly look at the role played by convexity in optimization. [Pg.121]
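For reference, the two standard definitions read as follows (standard textbook statements, not quoted from the source):

```latex
\text{A set } S \subseteq \mathbb{R}^{n} \text{ is convex if }
\lambda x + (1-\lambda)\,y \in S
\quad \text{for all } x, y \in S \text{ and all } \lambda \in [0,1].

\text{A function } f \text{ defined on a convex set } S \text{ is convex if }
f\bigl(\lambda x + (1-\lambda)\,y\bigr) \le \lambda f(x) + (1-\lambda) f(y)
\quad \text{for all } x, y \in S,\ \lambda \in [0,1].
```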

For well-posed quadratic objective functions the contours always form a convex region; for more general nonlinear functions, they do not (see the next section for an example). It is helpful to construct contour plots to assist in analyzing the performance of multivariable optimization techniques when applied to problems of two or three dimensions. Most computer libraries have contour plotting routines to generate the desired figures. [Pg.134]
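A minimal sketch of such a contour construction (our own, using matplotlib rather than any library named in the text), contrasting the convex elliptical contours of a quadratic with the nonconvex contours of the Rosenbrock function:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 200)
y = np.linspace(-1, 3, 200)
X, Y = np.meshgrid(x, y)

quad = 2 * X**2 + X * Y + Y**2              # convex: elliptical contours
rosen = (1 - X)**2 + 100 * (Y - X**2)**2    # nonconvex: banana-shaped contours

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].contour(X, Y, quad, levels=15)
axes[0].set_title("Convex quadratic")
axes[1].contour(X, Y, rosen, levels=np.logspace(-1, 3, 15))
axes[1].set_title("Rosenbrock (nonconvex)")
plt.show()
```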

Borwein, J. and A. S. Lewis. Convex Analysis and Nonlinear Optimization. Springer, New York (1999). [Pg.142]

In problems in which there are n variables and m equality constraints, we could attempt to eliminate m variables by direct substitution. If all equality constraints can be removed, and there are no inequality constraints, the objective function can then be differentiated with respect to each of the remaining (n — m) variables and the derivatives set equal to zero. Alternatively, a computer code for unconstrained optimization can be employed to obtain x. If the objective function is convex (as in the preceding example) and the constraints form a convex region, then any stationary point is a global minimum. Unfortunately, very few problems in practice assume this simple form or even permit the elimination of all equality constraints. [Pg.266]
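When the elimination is possible, the procedure looks as follows. In this sketch (our own example, not the text's), one variable is removed by direct substitution of a linear equality constraint, and the remaining (n − m) = 1 dimensional convex problem is solved without constraints:

```python
from scipy.optimize import minimize_scalar

# Original problem: min x1^2 + 2*x2^2  subject to  x1 + x2 = 3   (n = 2, m = 1)
# Eliminate x1 via the equality constraint: x1 = 3 - x2,
# leaving an unconstrained convex problem in the single remaining variable.
reduced = lambda x2: (3 - x2)**2 + 2 * x2**2

res = minimize_scalar(reduced)
x2 = res.x
x1 = 3 - x2
print(x1, x2)   # analytic solution: x1 = 2, x2 = 1 (a global minimum, by convexity)
```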

The KTC comprise both the necessary and sufficient conditions for optimality for smooth convex problems. In the problem (8.25)-(8.26), if the objective f(x) and the inequality constraint functions gj are convex, and the equality constraint functions hj are linear, then the feasible region of the problem is convex, and any local minimum is a global minimum. Further, if x* is a feasible solution, if all the problem functions have continuous first derivatives at x*, and if the gradients of the active constraints at x* are independent, then x* is optimal if and only if the KTC are satisfied at x*. [Pg.280]
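For a problem in the standard form min f(x) subject to gj(x) ≤ 0 and hj(x) = 0 (our paraphrase; the exact statement of (8.25)-(8.26) is not reproduced here), the KTC at a candidate point x* with multipliers uj and vj read:

```latex
\begin{aligned}
&\nabla f(x^{*}) + \sum_{j} u_{j}\,\nabla g_{j}(x^{*})
 + \sum_{j} v_{j}\,\nabla h_{j}(x^{*}) = 0,\\
&g_{j}(x^{*}) \le 0, \qquad h_{j}(x^{*}) = 0,\\
&u_{j} \ge 0, \qquad u_{j}\, g_{j}(x^{*}) = 0
 \quad \text{(complementary slackness)}.
\end{aligned}
```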

Many real problems do not satisfy these convexity assumptions. In chemical engineering applications, equality constraints often consist of input-output relations of process units, which are often nonlinear. Convexity of the feasible region can only be guaranteed if these constraints are all linear. Also, it is often difficult to tell whether an inequality constraint or objective function is convex or not. Hence it is often uncertain whether a point satisfying the KTC is a local or global optimum, or even a saddle point. For problems with a few variables we can sometimes find all KTC solutions analytically and pick the one with the best objective function value. Otherwise, most numerical algorithms terminate when the KTC are satisfied to within some tolerance. The user usually specifies two separate tolerances: a feasibility tolerance εF and an optimality tolerance εO. A point x is feasible to within εF if... [Pg.281]
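A sketch of the corresponding feasibility test (our own illustration; eps_f plays the role of the feasibility tolerance εF above):

```python
import numpy as np

def is_feasible(x, ineq, eq, eps_f=1e-6):
    """x is feasible to within eps_f if every inequality g_j(x) <= 0 is
    violated by at most eps_f and every equality |h_j(x)| is within eps_f."""
    return (all(g(x) <= eps_f for g in ineq) and
            all(abs(h(x)) <= eps_f for h in eq))

# Example constraints: g(x) = x1^2 + x2^2 - 1 <= 0,  h(x) = x1 - x2 = 0
g = lambda x: x[0]**2 + x[1]**2 - 1.0
h = lambda x: x[0] - x[1]
print(is_feasible(np.array([0.7, 0.7]), [g], [h]))   # True: inside the disk, on the line
```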

Branch and bound (BB) is a class of methods for linear and nonlinear mixed-integer programming. If carried to completion, it is guaranteed to find an optimal solution to linear and convex nonlinear problems. It is the most popular approach and is currently used in virtually all commercial MILP software (see Chapter 7). [Pg.354]
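The following sketch (ours, not from the chapter) shows the basic loop for a small MILP: LP relaxations supply the lower bounds, integer-feasible incumbents supply the upper bound, and branching splits on a fractional variable. With convex relaxations in place of the LP, the same skeleton applies to convex MINLPs:

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Depth-first branch and bound for  min c^T x,  A_ub x <= b_ub,
    x integer. Bounds come from LP relaxations; illustration only."""
    best_x, best_val = None, math.inf
    stack = [bounds]                          # each node is a set of variable bounds
    while stack:
        nb = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=nb)
        if not res.success or res.fun >= best_val:
            continue                          # infeasible node, or pruned by lower bound
        x = res.x
        frac = [i for i, v in enumerate(x) if abs(v - round(v)) > 1e-6]
        if not frac:                          # integer feasible: update incumbent
            best_x, best_val = np.round(x), res.fun
            continue
        i = frac[0]                           # branch on the first fractional variable
        lo, hi = nb[i]
        down, up = list(nb), list(nb)
        down[i] = (lo, math.floor(x[i]))      # child with x_i <= floor
        up[i] = (math.ceil(x[i]), hi)         # child with x_i >= ceil
        stack += [down, up]
    return best_x, best_val

# Example: max 5x1 + 4x2  s.t.  6x1 + 4x2 <= 24,  x1 + 2x2 <= 6,  x >= 0 integer
c = [-5.0, -4.0]
A = [[6.0, 4.0], [1.0, 2.0]]
b = [24.0, 6.0]
x, val = branch_and_bound(c, A, b, [(0, None), (0, None)])
print(x, -val)   # integer optimum (4, 0) with objective value 20
```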

Table 9.1 shows how outer approximation, as implemented in the DICOPT software, performs when applied to the process selection model in Example 9.3. Note that this model does not satisfy the convexity assumptions because its equality constraints are nonlinear. Still, DICOPT does find the optimal solution at iteration 3. Note, however, that the optimal MILP objective value at iteration 3 is 1.446, which is not an upper bound on the optimal MINLP value of 1.923 because the convexity conditions are violated. Hence the normal termination condition, that the difference between upper and lower bounds be less than some tolerance, cannot be used, and DICOPT may fail to find an optimal solution. Computational experience on nonconvex problems has shown that retaining the best feasible solution found so far and stopping when the objective value of the NLP subproblem fails to improve often leads to an optimal solution. DICOPT stopped in this example because the NLP solution at iteration 4 is worse (lower) than that at iteration 3. [Pg.370]

