Big Chemical Encyclopedia


Global convexity

It should also be noted that another modification to reduce the undesirable effects of nonconvexities in the master problem is to apply global convexity tests, followed by a suitable validation of the linearizations. One possibility is to apply the test to all linearizations with respect to the current solution vector (y, v) (Kravanja and Grossmann, 1994). The convexity conditions that have to be verified for the linearizations are as follows ... [Pg.206]
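The idea behind such a validity test can be sketched in a few lines. This is our own minimal illustration, not the Kravanja–Grossmann implementation: a linearization of a function f built at a point x_lin is a valid underestimator only if it does not cut off the current solution x_cur.

```python
# Hypothetical helper (our own sketch): a linearization of f at x_lin is
# valid as an underestimator at x_cur only if
#   f(x_cur) >= f(x_lin) + f'(x_lin) * (x_cur - x_lin).

def linearization_is_valid(f, df, x_lin, x_cur, tol=1e-9):
    """Check that the linearization of f at x_lin underestimates f at x_cur."""
    linear_value = f(x_lin) + df(x_lin) * (x_cur - x_lin)
    return f(x_cur) >= linear_value - tol

# Convex example: f(x) = x**2 -- every tangent underestimates everywhere.
f = lambda x: x**2
df = lambda x: 2 * x
assert linearization_is_valid(f, df, x_lin=1.0, x_cur=3.0)

# Nonconvex example: g(x) = -x**2 -- the tangent at x=1 lies above g at x=3,
# so this linearization would wrongly cut off part of the feasible region.
g = lambda x: -x**2
dg = lambda x: -2 * x
assert not linearization_is_valid(g, dg, x_lin=1.0, x_cur=3.0)
```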

The procedure we have applied does not depend on the (global) convexity of the functional, and we have been able to demonstrate the Gâteaux differentiability of the Levy-Lieb functional at all PS-v-representable densities, where this functional is locally convex. It seems plausible that both these functionals are also Fréchet differentiable at the same densities, although we have not been able to find a rigorous proof. [Pg.114]

This criterion summarizes all the a priori knowledge that we are able to convey concerning the physical aspect of the flawed region. Unfortunately, neither the weak-membrane model (U2(f)) nor the Beta-law (U1(f)) energies are convex functions. Consequently, we need to implement a global optimization technique to reach the solution. Simulated annealing (SA) cannot be used here because it leads to a prohibitive computational cost [9]. We have instead adopted a continuation method such as the GNC [2]. [Pg.332]

Most strategies limit themselves to finding a local minimum point in the vicinity of the starting point for the search. Such a strategy will find the global optimum only if the problem has a single minimum point or a set of connected minimum points. A convex problem has only a global optimum. [Pg.485]
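This local-equals-global property of convex problems is easy to demonstrate. A minimal sketch (the objective and helper below are our own illustration): a simple local search converges to the same minimizer from any starting point when the objective is convex.

```python
# Our own toy illustration: plain gradient descent, a purely local search.
def gradient_descent(df, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Convex objective f(x) = (x - 2)**2 with unique global minimum at x = 2:
# every starting point leads to the same answer.
df = lambda x: 2 * (x - 2)
for start in (-10.0, 0.0, 7.5):
    assert abs(gradient_descent(df, start) - 2.0) < 1e-6
```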

It should be noted that it is not sufficient to simply have a convex region in order to ensure that a search can locate the global optimum. The objective function must also be convex if it is to be minimized or concave if it is to be maximized. [Pg.43]
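The point can be made concrete with a toy example of our own: over the convex region [-3, 3], the nonconvex objective f(x) = x**4 - 4x**2 has two distinct local minima, so a local search returns different answers depending on where it starts.

```python
from math import sqrt

# Our own illustration: a convex feasible region is not enough when the
# objective itself is nonconvex.
def gradient_descent(df, x0, lr=0.01, steps=500):
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

df = lambda x: 4 * x**3 - 8 * x       # derivative of f(x) = x**4 - 4*x**2

left = gradient_descent(df, -1.0)     # lands in the local minimum at -sqrt(2)
right = gradient_descent(df, 1.0)     # lands in the local minimum at +sqrt(2)
assert abs(left + sqrt(2)) < 1e-6
assert abs(right - sqrt(2)) < 1e-6
```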

Whilst Example 3.1 is an extremely simple example, it illustrates a number of important points. If the optimization problem is completely linear, the solution space is convex and a global optimum solution can be generated. The optimum always occurs at an extreme point, as is illustrated in Figure 3.12. The optimum cannot occur inside the feasible region; it must always lie on the boundary. For a linear objective, moving along the gradient always increases the objective function until a boundary of the feasible region is reached. [Pg.44]
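The extreme-point property means that, for a small linear program, checking the vertices of the feasible polygon is sufficient. A minimal sketch with problem data of our own (not the data of Example 3.1):

```python
# Our own toy LP: maximize 3x + 2y over the triangle
#   x >= 0, y >= 0, x + y <= 4,
# whose vertices are (0,0), (4,0), (0,4). The optimum of a linear objective
# over a convex polygon is attained at one of its vertices.
vertices = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
objective = lambda p: 3 * p[0] + 2 * p[1]

best = max(vertices, key=objective)
assert best == (4.0, 0.0)
assert objective(best) == 12.0
```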

As shown in Fig. 3-53, optimization problems that arise in chemical engineering can be classified in terms of continuous and discrete variables. For the former, nonlinear programming (NLP) problems form the most general case, and widely applied specializations include linear programming (LP) and quadratic programming (QP). An important distinction for NLP is whether the optimization problem is convex or nonconvex. The latter NLP problem may have multiple local optima, and an important question is whether a global solution is required for the NLP. Another important distinction is whether the problem is assumed to be differentiable or not. [Pg.60]

Convex Cases of NLP Problems Linear programs and quadratic programs are special cases of (3-85) that allow for more efficient solution, based on application of KKT conditions (3-88) through (3-91). Because these are convex problems, any locally optimal solution is a global solution. In particular, if the objective and constraint functions in (3-85) are linear, then the following linear program (LP)... [Pg.62]

If the matrix Q is positive semidefinite (positive definite) when projected into the null space of the active constraints, then (3-98) is (strictly) convex and the QP solution is a global (and unique) minimum. Otherwise, local solutions exist for (3-98), and more extensive global optimization methods are needed to obtain the global solution. Like LPs, convex QPs can be solved in a finite number of steps. However, as seen in Fig. 3-57, these optimal solutions can lie on a vertex, on a constraint boundary, or in the interior. A number of active set strategies have been created that solve the KKT conditions of the QP and incorporate efficient updates of active constraints. Popular methods include null space algorithms, range space methods, and Schur complement methods. As with LPs, QP problems can also be solved with interior point methods [see Wright (1996)]. [Pg.62]
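The projected (reduced-Hessian) test can be sketched numerically. The helper below is our own minimal version, assuming NumPy: it projects Q into the null space of the active constraint gradients A and inspects the eigenvalues of the projected matrix.

```python
import numpy as np

# Our own sketch of the reduced-Hessian convexity test for a QP.
def projected_hessian_eigenvalues(Q, A):
    """Eigenvalues of Z^T Q Z, where the columns of Z span the null space of A."""
    # Null-space basis from the SVD of A.
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-12))
    Z = vt[rank:].T                      # columns span null(A)
    return np.linalg.eigvalsh(Z.T @ Q @ Z)

Q = np.array([[1.0, 0.0], [0.0, -1.0]])  # indefinite on all of R^2
A = np.array([[0.0, 1.0]])               # active constraint fixes y

eigs = projected_hessian_eigenvalues(Q, A)
# Restricted to the constraint's null space (the x axis), Q is positive
# definite, so this QP is convex on the feasible subspace.
assert np.all(eigs > 0)
```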

This basic concept leads to a wide variety of global algorithms, with the following features that can exploit different problem classes. Bounding strategies relate to the calculation of upper and lower bounds. For the former, any feasible point or, preferably, a locally optimal point in the subregion can be used. For the lower bound, convex relaxations of the objective and constraint functions are derived. [Pg.66]
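Both bounding ideas can be shown on a one-dimensional subregion. The example below is our own: the upper bound comes from evaluating f at any feasible point, and the lower bound from minimizing a convex relaxation in which the concave term is replaced by its secant.

```python
# Our own toy example of bounding on the subregion [0, 2].
f = lambda x: x**4 - 4 * x**2

# Upper bound: f at any feasible point bounds the minimum from above.
upper = f(1.0)                          # = -3.0

# Lower bound: x**4 is already convex; the secant of the concave term
# -4*x**2 over [0, 2] runs from (0, 0) to (2, -16), i.e. the line -8*x,
# which lies below the term on the whole interval. The relaxation is convex.
relax = lambda x: x**4 - 8 * x
xs = [i / 1000 * 2 for i in range(1001)]
lower = min(relax(x) for x in xs)       # grid minimum of the convex relaxation

true_min = min(f(x) for x in xs)        # about -4, attained near x = sqrt(2)
assert lower <= true_min <= upper
```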

As seen in Fig. 3-59, this problem has local solutions at x* = 2.5 and at x* = 0.8749. The latter is also the global solution with f(x*) = −19.7. To find the global solution, we note that all but the −20x³/3 term in (3-108) are convex, so we replace this term by a new variable and a linear underestimator within a particular subregion, i.e.,... [Pg.66]
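Why a linear underestimator of a concave term is valid can be checked directly. The snippet below is our own generic sketch, not the relaxation of (3-108) itself: on a subregion where a term g is concave, its secant lies below it everywhere on that interval.

```python
# Our own sketch: the secant of a concave term is a valid linear
# underestimator over the subregion on which it is built.
def secant(g, a, b):
    """Linear function through (a, g(a)) and (b, g(b))."""
    slope = (g(b) - g(a)) / (b - a)
    return lambda x: g(a) + slope * (x - a)

g = lambda x: -x**3                 # concave for x >= 0
under = secant(g, 0.0, 3.0)         # the line -9*x on [0, 3]

xs = [i * 3 / 100 for i in range(101)]
assert all(under(x) <= g(x) + 1e-9 for x in xs)
```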

In summary, the optimum of a nonlinear programming problem is, in general, not at an extreme point of the feasible region and may not even be on the boundary. Also, the problem may have local optima distinct from the global optimum. These properties are direct consequences of nonlinearity. A class of nonlinear problems can be defined, however, that are guaranteed to be free of distinct local optima. They are called convex programming problems and are considered in the following section. [Pg.121]

Analogously, a local maximum is the global maximum of f(x) if the objective function is concave and the constraints form a convex set. [Pg.124]

For example, it is usually impossible to prove that a given algorithm will find the global minimum of a nonlinear programming problem unless the problem is convex. For nonconvex problems, however, many such algorithms find at least a local minimum. Convexity thus plays a role much like that of linearity in the study of dynamic systems. For example, many results derived from linear theory are used in the design of nonlinear control systems. [Pg.127]

In problems in which there are n variables and m equality constraints, we could attempt to eliminate m variables by direct substitution. If all equality constraints can be removed, and there are no inequality constraints, the objective function can then be differentiated with respect to each of the remaining (n − m) variables and the derivatives set equal to zero. Alternatively, a computer code for unconstrained optimization can be employed to obtain x*. If the objective function is convex (as in the preceding example) and the constraints form a convex region, then any stationary point is a global minimum. Unfortunately, very few problems in practice assume this simple form or even permit the elimination of all equality constraints. [Pg.266]
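The elimination procedure can be followed on a tiny problem of our own (not the example cited in the excerpt): minimize f(x, y) = x² + y² subject to x + y = 1. Substituting y = 1 − x gives an unconstrained convex function of one variable, whose stationary point is the global minimum.

```python
# Our own worked example of variable elimination:
#   minimize x**2 + y**2  subject to  x + y = 1.
# Substituting y = 1 - x gives phi(x) = x**2 + (1 - x)**2, and setting
# phi'(x) = 2x - 2(1 - x) = 0 yields x = 1/2, hence y = 1/2.
phi = lambda x: x**2 + (1 - x) ** 2
dphi = lambda x: 2 * x - 2 * (1 - x)

x_star = 0.5                       # root of dphi(x) = 0
assert abs(dphi(x_star)) < 1e-12

# phi is convex (phi'' = 4 > 0), so this stationary point is the global
# minimum; check against a grid of candidate points.
xs = [i / 100 for i in range(-100, 201)]
assert all(phi(x_star) <= phi(x) for x in xs)
```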

The KTC comprise both the necessary and sufficient conditions for optimality for smooth convex problems. In the problem (8.25)-(8.26), if the objective f(x) and inequality constraint functions gj are convex, and the equality constraint functions hj are linear, then the feasible region of the problem is convex, and any local minimum is a global minimum. Further, if x* is a feasible solution, if all the problem functions have continuous first derivatives at x*, and if the gradients of the active constraints at x* are independent, then x* is optimal if and only if the KTC are satisfied at x*. ... [Pg.280]
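Checking the KTC at a candidate point is mechanical. The sketch below is our own minimal one-variable example, written for min x² subject to x ≥ 1 with the inequality expressed as g(x) = 1 − x ≤ 0; it tests stationarity, feasibility, multiplier sign, and complementarity.

```python
# Our own minimal numeric KTC check for: minimize x**2 s.t. g(x) = 1 - x <= 0.
def kkt_satisfied(x, mu, tol=1e-8):
    df = 2 * x                  # gradient of the objective x**2
    dg = -1.0                   # gradient of g(x) = 1 - x
    g = 1 - x
    stationarity = abs(df + mu * dg) <= tol     # grad L = 0
    feasibility = g <= tol                      # g(x) <= 0
    sign = mu >= -tol                           # multiplier nonnegative
    complementarity = abs(mu * g) <= tol        # mu * g = 0
    return stationarity and feasibility and sign and complementarity

assert kkt_satisfied(x=1.0, mu=2.0)       # the constrained minimizer
assert not kkt_satisfied(x=0.0, mu=0.0)   # infeasible: g(0) = 1 > 0
```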

Many real problems do not satisfy these convexity assumptions. In chemical engineering applications, equality constraints often consist of input-output relations of process units that are often nonlinear. Convexity of the feasible region can be guaranteed only if these constraints are all linear. Also, it is often difficult to tell whether an inequality constraint or objective function is convex or not. Hence it is often uncertain whether a point satisfying the KTC is a local or global optimum, or even a saddle point. For problems with a few variables we can sometimes find all KTC solutions analytically and pick the one with the best objective function value. Otherwise, most numerical algorithms terminate when the KTC are satisfied to within some tolerance. The user usually specifies two separate tolerances: a feasibility tolerance εf and an optimality tolerance εo. A point x is feasible to within εf if ... [Pg.281]







© 2024 chempedia.info