
Convex functions optimization

This criterion summarizes all the a priori knowledge that we are able to convey concerning the physical aspect of the flawed region. Unfortunately, neither the weak membrane model energy U2(f) nor the Beta law energy U1(f) is a convex function. Consequently, we need to implement a global optimization technique to reach the solution. Simulated annealing (SA) cannot be used here because its computational cost is prohibitive [9]. We have adopted a continuation method, the graduated non-convexity (GNC) algorithm [2]. [Pg.332]

All of the interpretations of Theorem 4-11 given in Section 4.7 carry over immediately to the continuous output channel. The set of Eqs. (4-104) and (4-105) for finding the optimum input probabilities for a given p becomes virtually useless, but it is still a convex function of p, so that p can be optimized numerically. [Pg.240]

The concept of convexity is useful both in the theory and applications of optimization. We first define a convex set, then a convex function, and lastly look at the role played by convexity in optimization. [Pg.121]
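For reference, the two standard definitions that such treatments build on are sketched below in LaTeX (the notation is generic, not copied from the cited text):

```latex
% A set S in R^n is convex if the segment between any two of its
% points lies entirely inside it:
\forall x_1, x_2 \in S,\ \forall \lambda \in [0,1]:\quad
  \lambda x_1 + (1-\lambda)x_2 \in S.

% A function f defined on a convex set S is convex if its chords lie
% on or above its graph:
f\bigl(\lambda x_1 + (1-\lambda)x_2\bigr)
  \le \lambda f(x_1) + (1-\lambda) f(x_2)
  \quad \text{for all } x_1, x_2 \in S,\ \lambda \in [0,1].
```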

Depending on the form of the objective function, the final formulation obtained by replacing the nonlinear Eq. (17) by the set of linear inequalities corresponds to a MINLP (nonlinear objective), to a MIQP (quadratic objective) or to a MILP (linear objective). For the cases where the objective function is linear, convergence to the global optimal solution is guaranteed using currently available software. The same holds true for the more general case where the objective function is a convex function. [Pg.43]
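As a minimal illustration of the convex case, the toy mixed-integer model below has a convex quadratic objective and linear constraints, so a branch-and-bound solver certifies a global optimum. All data are hypothetical, and a mixed-integer-capable solver (e.g., ECOS_BB or SCIP) is assumed to be installed alongside cvxpy:

```python
import numpy as np
import cvxpy as cp

x = cp.Variable(2)             # continuous decisions
y = cp.Variable(boolean=True)  # 0/1 decision from the linearized logic

# Convex quadratic objective: any certified solution is globally optimal.
objective = cp.Minimize(cp.sum_squares(x - np.array([1.0, 2.0])) + 3.0 * y)

# Linear inequalities standing in for the linearized constraint set;
# the big-M constant 10 is an assumption of this toy model.
constraints = [x[0] + x[1] >= 1.0,
               x[0] <= 10.0 * y]

prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.ECOS_BB)  # or any other mixed-integer solver
print(prob.status, prob.value, x.value, y.value)
```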

As shown above, the objective function and the constraints generally involve linear fractional and bilinear terms corresponding to the two summation terms, while the last term h(x, y, z) is assumed to correspond to a convex function. This type of problem arises, for instance, in the optimization of heat-exchanger... [Pg.221]

Convexity is a fundamental notion in the theory of optimization. Convexity can be defined to apply to functions that are not differentiable, such as cases B and C in Figure 8.42. We say f(x) is a convex function if... [Pg.263]

The last three terms are the average periodic set-up cost, the purchase cost, and the inventory cost. The cost function is convex. Hence, the Q value based on the first derivative of the expression will be the global optimum. In other words, the Q value that satisfies dAAC(Q)/dQ = 0 is the optimal value, denoted Q* and known as the EOQ. The EOQ formulation is ... [Pg.22]
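In the standard EOQ model, with annual demand D, setup cost K per order, and unit holding cost h, the average annual cost is AAC(Q) = KD/Q + hQ/2 + (purchase cost, independent of Q), and the derivative condition gives the classical closed form (a sketch; the text's own formulation may use different symbols):

```latex
\frac{d\,\mathrm{AAC}(Q)}{dQ} = -\frac{KD}{Q^{2}} + \frac{h}{2} = 0
\quad\Longrightarrow\quad
Q^{*} = \sqrt{\frac{2KD}{h}}.
```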

The following algorithm finds the (x, y) values that give the optimal solution to this problem. The reasoning behind the algorithm is that the objective function can be separated into two functions, each of which is convex and therefore guaranteed to have a global minimum point. [Pg.53]
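A minimal sketch of this separation idea, with two hypothetical convex quadratics standing in for the problem's actual pieces:

```python
from scipy.optimize import minimize_scalar

# Hypothetical separable objective F(x, y) = f(x) + g(y), where each
# piece is convex and therefore has a unique global minimum.
f = lambda x: (x - 3.0) ** 2 + 1.0
g = lambda y: (y + 2.0) ** 2 + 0.5

# Because F is separable and each piece is convex, minimizing the two
# pieces independently yields the global minimizer of F.
res_x = minimize_scalar(f)
res_y = minimize_scalar(g)
print("optimal (x, y):", (res_x.x, res_y.x))  # ~ (3.0, -2.0)
```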

Nonlinear optimization can be treated as a special case if the objective function is convex (concave) and the feasible region is convex. The most notable property is that a local minimum of a convex function on a convex feasible region is also a global minimum. Thanks to this, special optimization procedures can be developed. Note that linear functions are convex. [Pg.931]

Analytical studies for choosing the parameters of due date rules include [88] and [108]. Seidmann and Smith [88] consider the CON due date policy in a multi-machine environment under the objective of minimizing the weighted sum of earliness, tardiness and lead time penalty (the same objective function is also considered in [87]). They assume that the shop uses a priority discipline for sequencing, such as FCFS or EDD, and that the probability distribution of the flow time is known and common to all jobs. They show that the optimal lead time is the unique minimum point of a strictly convex function and can be found by a simple numerical search. [Pg.520]

Dellaert studies two lead time policies, CON and DEL, where DEL considers the probability distribution of the flow time in steady state while quoting lead times. He models the problem as a continuous-time Markov chain, where the states are denoted by (n, s) = (number of jobs, state of the machine). Interarrival, service and setup times are assumed to follow the exponential distribution, although the results can also be generalized to other distributions, such as Erlang. For both policies, he derives the pdf of the flow time and, relying on the results in [88] (the optimal lead time is the unique minimum of a strictly convex function), he claims that the optimal solution can be found by binary search. [Pg.532]
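A sketch of the kind of binary search this relies on, assuming integer lead times and a strictly convex cost curve (the cost function below is a hypothetical stand-in):

```python
def argmin_convex_int(f, lo, hi):
    """Binary search for the minimizer of a strictly convex f on the
    integers lo..hi: f decreases, reaches its unique minimum, then
    increases, so comparing adjacent values localizes the minimum."""
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) < f(mid + 1):
            hi = mid        # already increasing: minimizer is <= mid
        else:
            lo = mid + 1    # still decreasing: minimizer is > mid
    return lo

# Hypothetical convex cost curve over integer lead times.
cost = lambda d: (d - 17) ** 2 + 5
print(argmin_convex_int(cost, 0, 100))  # -> 17
```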

We can show that Equation 5.55 is a convex function. Hence, setting its partial derivatives with respect to x and y equal to zero will provide the optimal location, denoted by (x*, y*), as given in the following ... [Pg.267]
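If Equation 5.55 is the weighted sum of squared Euclidean distances to the demand points (a common single-facility location model; an assumption here, since the equation itself is not reproduced), the first-order conditions give the weighted centroid, as in this sketch with hypothetical data:

```python
import numpy as np

# Hypothetical demand-point coordinates and weights.
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 6.0]])
w = np.array([2.0, 1.0, 1.0])

# For F(x, y) = sum_i w_i ((x - a_i)^2 + (y - b_i)^2), setting
# dF/dx = dF/dy = 0 gives the weighted centroid, which is the global
# minimum because F is convex.
x_star, y_star = (w[:, None] * pts).sum(axis=0) / w.sum()
print(x_star, y_star)  # -> 1.0 1.5
```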

In the sequel we shall study an optimal control problem. Let G ⊂ C(Ω̄) be a convex, bounded and closed set. Assume that f < 0 on Γ for each f ∈ G. In particular, this condition provides nonemptiness for K_f. Denote the solution of (2.131) by χ = χ(f) and introduce the cost functional ... [Pg.110]

Notice that those distribution functions that satisfy Eq. (4-179) still constitute a convex set, so that optimization of the E,R curve is still straightforward by numerical methods. It is to be observed that the choice of an F(x) satisfying a constraint such as Eq. (4-179) defines an ensemble of codes; the individual codes in the ensemble will not necessarily satisfy the constraint. This is unimportant practically, since each digit of each code word is chosen independently over the ensemble; thus it is most unlikely that the average power of a code will differ drastically from the average power of the ensemble. It is possible to combine the central limit theorem and the techniques used in the last two paragraphs of Section 4.7 to show that a code exists for which each code word satisfies... [Pg.242]

Let ||·|| denote the Euclidean norm and define y_k = g_{k+1} − g_k. Table I provides a chronological list of some choices for the CG update parameter. If the objective function is a strongly convex quadratic, then in theory, with an exact line search, all seven choices for the update parameter in Table I are equivalent. For a nonquadratic objective functional J (the ordinary situation in optimal control calculations), each choice for the update parameter leads to a different performance. A detailed discussion of the various CG methods is beyond the scope of this chapter. The reader is referred to Ref. [194] for a survey of CG methods. Here we only mention briefly that despite the strong convergence theory that has been developed for the Fletcher-Reeves method [195], ... [Pg.83]
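For concreteness, here is a minimal nonlinear CG sketch using the Fletcher-Reeves update beta_k = ||g_{k+1}||^2 / ||g_k||^2; the quadratic test problem and the simple Armijo backtracking line search are illustrative assumptions, not details from the chapter:

```python
import numpy as np

def fletcher_reeves(f, grad, x0, iters=100):
    """Nonlinear CG with the Fletcher-Reeves beta and a backtracking
    (Armijo) line search in place of an exact line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                          # first direction: steepest descent
    for _ in range(iters):
        if g @ d >= 0:              # safeguard: restart if not descent
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:  # Armijo test
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        if g_new @ g_new < 1e-20:   # gradient ~ 0: converged
            return x
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return x

# Hypothetical strongly convex quadratic f(x) = 0.5 x'Ax - b'x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(fletcher_reeves(f, grad, [0.0, 0.0]))  # ~ solution of A x = b
```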

Whilst Example 3.1 is extremely simple, it illustrates a number of important points. If the optimization problem is completely linear, the solution space is convex and a global optimum solution can be generated. The optimum always occurs at an extreme point, as is illustrated in Figure 3.12. The optimum cannot occur inside the feasible region; it must always be on the boundary. For linear functions, moving up the gradient can always increase the objective function until a boundary is hit. [Pg.44]
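A small sketch of the vertex property (the LP data are hypothetical; scipy's linprog minimizes, so the maximization objective is negated):

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
res = linprog(c=[-3.0, -2.0],               # negate to maximize
              A_ub=[[1.0, 1.0], [1.0, 0.0]],
              b_ub=[4.0, 3.0],
              bounds=[(0, None), (0, None)])
print(res.x)  # -> [3., 1.]: the optimum sits at a vertex of the region
```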

The addition of inequality constraints complicates the optimization. These inequality constraints can form convex or nonconvex regions. If the region is nonconvex, the search can be attracted to a local optimum, even if the objective function is convex (in the case of minimization) or concave (in the case of maximization). If the set of inequality constraints is linear, the resulting region is always convex. [Pg.54]
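A one-variable sketch of this trap, with a hypothetical problem whose feasible set {x : x^2 >= 1} is nonconvex, so the answer a local solver returns typically depends on the starting point:

```python
from scipy.optimize import minimize

obj = lambda x: (x[0] - 0.2) ** 2   # convex objective
# Nonconvex feasible region |x| >= 1, written as x^2 - 1 >= 0.
con = {"type": "ineq", "fun": lambda x: x[0] ** 2 - 1.0}

for x0 in (2.0, -2.0):
    res = minimize(obj, [x0], method="SLSQP", constraints=[con])
    print(x0, "->", res.x)
# Starting at 2.0 typically finds x = 1 (the global minimum);
# starting at -2.0 typically gets trapped at x = -1 (a local minimum).
```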

Convex Cases of NLP Problems. Linear programs and quadratic programs are special cases of (3-85) that allow for more efficient solution, based on application of KKT conditions (3-88) through (3-91). Because these are convex problems, any locally optimal solution is a global solution. In particular, if the objective and constraint functions in (3-85) are linear, then the following linear program (LP)... [Pg.62]

This basic concept leads to a wide variety of global algorithms, with the following features that can exploit different problem classes. Bounding strategies relate to the calculation of upper and lower bounds. For the former, any feasible point or, preferably, a locally optimal point in the subregion can be used. For the lower bound, convex relaxations of the objective and constraint functions are derived. [Pg.66]
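A standard example of such a convex relaxation (not specific to this text's problem) is the McCormick envelope for a bilinear term w = xy with x in [x^L, x^U] and y in [y^L, y^U]:

```latex
w \ge x^{L} y + x\, y^{L} - x^{L} y^{L}, \qquad
w \ge x^{U} y + x\, y^{U} - x^{U} y^{U}, \\
w \le x^{U} y + x\, y^{L} - x^{U} y^{L}, \qquad
w \le x^{L} y + x\, y^{U} - x^{L} y^{U}.
```

Replacing w = xy by these four linear inequalities yields a convex problem whose optimum is a valid lower bound, and the bound tightens as branching shrinks the variable ranges.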

For well-posed quadratic objective functions the contours always form a convex region; for more general nonlinear functions, they do not (see the next section for an example). It is helpful to construct contour plots to assist in analyzing the performance of multivariable optimization techniques when applied to problems of two or three dimensions. Most computer libraries have contour plotting routines to generate the desired figures. [Pg.134]

In problems in which there are n variables and m equality constraints, we could attempt to eliminate m variables by direct substitution. If all equality constraints can be removed, and there are no inequality constraints, the objective function can then be differentiated with respect to each of the remaining (n - m) variables and the derivatives set equal to zero. Alternatively, a computer code for unconstrained optimization can be employed to obtain x*. If the objective function is convex (as in the preceding example) and the constraints form a convex region, then any stationary point is a global minimum. Unfortunately, very few problems in practice assume this simple form or even permit the elimination of all equality constraints. [Pg.266]
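A minimal sketch of this elimination on a hypothetical two-variable problem (n = 2, m = 1) with one linear equality constraint, using sympy:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Minimize f = x**2 + 2*y**2 subject to x + y = 3.
f = x**2 + 2 * y**2
y_sub = sp.solve(sp.Eq(x + y, 3), y)[0]   # eliminate y: y = 3 - x

g = f.subs(y, y_sub)                      # unconstrained in n - m = 1 variable
x_star = sp.solve(sp.diff(g, x), x)[0]    # stationary point: dg/dx = 0
print(x_star, y_sub.subs(x, x_star))      # -> 2, 1 (global: f convex, constraint linear)
```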

The KTC comprise both the necessary and sufficient conditions for optimality for smooth convex problems. In the problem (8.25)-(8.26), if the objective f(x) and inequality constraint functions g_j are convex, and the equality constraint functions h_j are linear, then the feasible region of the problem is convex, and any local minimum is a global minimum. Further, if x* is a feasible solution, if all the problem functions have continuous first derivatives at x*, and if the gradients of the active constraints at x* are independent, then x* is optimal if and only if the KTC are satisfied at x*. [Pg.280]
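For the convex problem just described, those conditions take the familiar form below (a generic sketch; the notation need not match (8.25)-(8.26) exactly):

```latex
\min_{x} f(x) \quad \text{s.t.} \quad g_{j}(x) \le 0,\ \ h_{k}(x) = 0, \\
\nabla f(x^{*}) + \sum_{j} u_{j} \nabla g_{j}(x^{*})
               + \sum_{k} v_{k} \nabla h_{k}(x^{*}) = 0, \\
g_{j}(x^{*}) \le 0, \qquad h_{k}(x^{*}) = 0, \\
u_{j} \ge 0, \qquad u_{j}\, g_{j}(x^{*}) = 0 \quad \text{(complementary slackness)}.
```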

Many real problems do not satisfy these convexity assumptions. In chemical engineering applications, equality constraints often consist of input-output relations of process units that are often nonlinear. Convexity of the feasible region can only be guaranteed if these constraints are all linear. Also, it is often difficult to tell if an inequality constraint or objective function is convex or not. Hence it is often uncertain if a point satisfying the KTC is a local or global optimum, or even a saddle point. For problems with a few variables we can sometimes find all KTC solutions analytically and pick the one with the best objective function value. Otherwise, most numerical algorithms terminate when the KTC are satisfied to within some tolerance. The user usually specifies two separate tolerances: a feasibility tolerance ε_f and an optimality tolerance ε_o. A point x is feasible to within ε_f if... [Pg.281]

The targets for the MPC calculations are generated by solving a steady-state optimization problem (LP or QP) based on a linear process model, which also finds the best path to achieve the new targets (Backx et al., 2000). These calculations may be performed as often as the MPC calculations. The targets and constraints for the LP or QP optimization can be generated from a nonlinear process model using a nonlinear optimization technique. If the optimum occurs at a vertex of constraints and the objective function is convex, successive updates of a linearized model will find the same optimum as the nonlinear model. These calculations tend to be performed less frequently (e.g., every 1-24 h) due to the complexity of the calculations and the process models. [Pg.575]

