Quadratic convex function

For well-posed quadratic objective functions the contours always form a convex region; for more general nonlinear functions, they do not (see the next section for an example). It is helpful to construct contour plots to assist in analyzing the performance of multivariable optimization techniques when applied to problems of two or three dimensions. Most computer libraries have contour-plotting routines to generate the desired figures. [Pg.134]
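
A contour plot of a convex quadratic makes the convex, elliptical level sets described above concrete. The following is a minimal matplotlib sketch; the particular function f(x1, x2) = x1² + 4·x2² is a hypothetical example, not one taken from the excerpt.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical convex quadratic: positive-definite Hessian diag(2, 8),
# so every contour is an ellipse enclosing a convex region.
x1, x2 = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
f = x1**2 + 4 * x2**2

plt.contour(x1, x2, f, levels=15)
plt.gca().set_aspect("equal")
plt.xlabel("x1")
plt.ylabel("x2")
plt.title("Contours of a convex quadratic objective")
plt.show()
```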

Depending on the form of the objective function, the final formulation obtained by replacing the nonlinear Eq. (17) by the set of linear inequalities corresponds to an MINLP (nonlinear objective), an MIQP (quadratic objective), or an MILP (linear objective). For the cases where the objective function is linear, convergence to the global optimal solution is guaranteed using currently available software. The same holds true for the more general case where the objective function is convex. [Pg.43]

Summarizing: At the time point t_{k-1} the optimum of the [quadratic objective function] Z_k is sought. The resulting control [input] vector U(k) depends on x(k-1) and contains all control [input] vectors u_k, u_{k+1}, ..., u_N which control the process optimally over the interval [t_{k-1}, T]. Of these control [input] vectors, one implements the vector u_k (which depends on x(k-1)) as input vector for the next interval [t_{k-1}, t_k]. At the next time point a new input vector u_{k+1} is determined. This is calculated from the objective function Z_{k+1} and is dependent on x(k). Therefore, the vector u_k, which is implemented in the interval [t_{k-1}, t_k], is dependent on the state vector x(k-1). Hence, the sought feedback law consists of the solution of a convex optimization problem at each time point (k = 1, 2, ..., N). (Translation by the author.)... [Pg.136]
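
The receding-horizon loop described in this excerpt can be sketched in a few lines of numpy. Everything concrete below is an assumption for illustration: scalar linear dynamics x(k+1) = a·x(k) + b·u(k), quadratic state/input weights q and r, and no input constraints, so each convex QP has a closed-form solution.

```python
import numpy as np

a, b = 0.9, 0.5   # assumed scalar linear dynamics
q, r = 1.0, 0.1   # assumed state and input weights
N = 10            # horizon length

def first_input(x0, horizon):
    # Stack the dynamics so every predicted state is linear in the input
    # vector u; the cost then becomes a convex quadratic in u.
    G = np.zeros((horizon, horizon))
    for j in range(horizon):
        for i in range(j + 1):
            G[j, i] = a ** (j - i) * b
    f0 = np.array([a ** (j + 1) * x0 for j in range(horizon)])
    # min_u  q*||f0 + G u||^2 + r*||u||^2  (unconstrained convex QP)
    H = q * G.T @ G + r * np.eye(horizon)
    g = q * G.T @ f0
    u = np.linalg.solve(H, -g)
    return u[0]          # implement only the first input

x = 5.0
for k in range(1, N + 1):
    u0 = first_input(x, N - k + 1)  # re-solve the QP over the remaining interval
    x = a * x + b * u0
    print(f"k={k}  u={u0:+.3f}  x={x:+.4f}")
```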

Since ∇f(x*) = 0, the algorithm terminates with x* = (3, 2)ᵀ. Note that D_j is precisely the inverse of the Hessian matrix H(x) of the convex quadratic objective function f(x). [Pg.2552]

Let ‖·‖ denote the Euclidean norm and define y_k = g_{k+1} - g_k. Table I provides a chronological list of some choices for the CG update parameter. If the objective function is a strongly convex quadratic, then in theory, with an exact line search, all seven choices for the update parameter in Table I are equivalent. For a nonquadratic objective functional J (the ordinary situation in optimal control calculations), each choice for the update parameter leads to a different performance. A detailed discussion of the various CG methods is beyond the scope of this chapter. The reader is referred to Ref. [194] for a survey of CG methods. Here we only mention briefly that despite the strong convergence theory that has been developed for the Fletcher-Reeves, [195],... [Pg.83]
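
As a sketch of what "choices for the CG update parameter" means in practice, here are three classical formulas in the notation above (g_k the gradient at iterate k, y_k = g_{k+1} - g_k, d_k the previous search direction). These are the standard textbook definitions, not code from Ref. [194], and the numbers in the demo are invented.

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old, d_old):
    # beta_FR = ||g_{k+1}||^2 / ||g_k||^2
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old, d_old):
    # beta_PR = g_{k+1}^T y_k / ||g_k||^2
    y = g_new - g_old
    return (g_new @ y) / (g_old @ g_old)

def beta_hestenes_stiefel(g_new, g_old, d_old):
    # beta_HS = g_{k+1}^T y_k / d_k^T y_k
    y = g_new - g_old
    return (g_new @ y) / (d_old @ y)

# The next search direction is then d_{k+1} = -g_{k+1} + beta * d_k.
g0, g1, d0 = np.array([1.0, 2.0]), np.array([0.5, -0.4]), np.array([-1.0, -2.0])
for fn in (beta_fletcher_reeves, beta_polak_ribiere, beta_hestenes_stiefel):
    print(fn.__name__, fn(g1, g0, d0))
```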

Convex Cases of NLP Problems Linear programs and quadratic programs are special cases of (3-85) that allow for more efficient solution, based on application of KKT conditions (3-88) through (3-91). Because these are convex problems, any locally optimal solution is a global solution. In particular, if the objective and constraint functions in (3-85) are linear, then the following linear program (LP)... [Pg.62]
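
For the special case of an equality-constrained convex QP, the KKT conditions reduce to a single linear system, so the locally (hence globally) optimal solution is obtained directly. A minimal sketch; the data Q, c, A, b are made up for illustration.

```python
import numpy as np

# min (1/2) x^T Q x + c^T x   s.t.  A x = b
Q = np.array([[2.0, 0.0], [0.0, 4.0]])   # positive definite -> convex QP
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Stationarity (Q x + c + A^T lam = 0) plus feasibility (A x = b)
# form one symmetric linear system in (x, lam).
n, m = Q.shape[0], A.shape[0]
kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
print("x* =", x, " multiplier =", lam)   # global optimum by convexity
```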

The result is illustrated in Figure 4.11, in which a convex quadratic function is cut by the plane f(x) = k. The convex set R projected onto the x1-x2 plane comprises the boundary ellipse plus its interior. [Pg.123]

EXAMPLE 6.3 APPLICATION OF NEWTON'S METHOD TO A CONVEX QUADRATIC FUNCTION
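
The point of such an example is that Newton's method minimizes a convex quadratic in a single step. Below is a sketch with invented numbers (not the book's Example 6.3 data): for f(x) = ½xᵀHx - bᵀx, the step x - H⁻¹∇f(x) lands exactly on the minimizer.

```python
import numpy as np

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive-definite Hessian
b = np.array([1.0, 2.0])

def grad(x):
    return H @ x - b                      # gradient of (1/2) x^T H x - b^T x

x0 = np.array([10.0, -7.0])               # arbitrary starting point
x1 = x0 - np.linalg.solve(H, grad(x0))    # single Newton step
print("x1 =", x1)
print("gradient after one step:", grad(x1))   # ~ [0, 0]: done in one step
```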

Indefinite quadratic programs, in which the constraints are linear and the objective function is a quadratic function that is neither convex nor concave because its Hessian matrix is indefinite. [Pg.383]

The exponential and logarithm functions have clearly convex and concave characters, respectively, and do not fit a nearly linear function well. The last three functions provide minor corrections to the linear function and do quite well. It should be pointed out that the first four functions have two arbitrary parameters each; the quadratic has three and the cubic has four. We expect that as the number of parameters increases, we can correlate ever more complicated data. For the cubic equation with four parameters and only six data points, there remain only two degrees of freedom. It is often said that "Give me three parameters, and I can fit an elephant," so it is a greater achievement to fit complicated data with as few parameters as possible, in the spirit of what is called Occam's Razor. [Pg.167]

Let us consider the vapor pressure of water as a continuous function of temperature over the range 0 to 100 °C. These data are also monotonically increasing and convex. Let us use as the training set the data from 20 to 80 °C at 10 °C intervals, a set of seven points. Let us propose a quadratic equation, and the regression result of... [Pg.170]
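
A sketch of that training-set fit. The pressures below are approximate steam-table values in kPa, supplied here for illustration rather than taken from the book; only the 20-80 °C, 10 °C-interval design comes from the excerpt.

```python
import numpy as np

T = np.arange(20.0, 81.0, 10.0)   # training temperatures, deg C (seven points)
P = np.array([2.34, 4.25, 7.38, 12.34, 19.93, 31.18, 47.37])  # kPa, approx.

coef = np.polyfit(T, P, deg=2)    # least-squares quadratic fit
P_fit = np.polyval(coef, T)
print("coefficients (a2, a1, a0):", coef)
print("max residual on training set: %.3f kPa" % np.max(np.abs(P - P_fit)))
```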

The objective function f(x) and the inequality constraint g(x) are convex, since f(x) is separable quadratic (a sum of quadratic terms, each of which is a function of x1, x2, x3, respectively) and g(x) is linear. The equality constraint h(x) is linear. The primal problem is also stable, since v(0) is finite and the additional stability condition (Lipschitz continuity-like) is satisfied because f(x) is well behaved and the constraints are linear. Hence, the conditions of the strong duality theorem are satisfied. This is why... [Pg.84]

Note that the objective function is convex since it has linear and positive quadratic terms. The only nonlinearities come from the equality constraint. By introducing three new variables w1, w2, w3, and three equalities ... [Pg.137]

Usually the optimum lies on a convex or concave region of the response surface, which can be approximated by functions with quadratic or cubic terms. [Pg.90]

Figure 5 illustrates more generally various cases that can occur for simple quadratic functions of the form q(x) = ½xᵀHx, for n = 2, where H is a constant matrix. The contour plots display different characteristics when H is (a) positive-definite (elliptical contours with the lowest function value at the center), in which case q is said to be a convex quadratic; (b) positive-semidefinite; (c) indefinite; or (d) negative-definite (elliptical contours with the highest function value at the center), in which case q is a concave quadratic. For this figure, the following matrices are used for those different functions ... [Pg.12]
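
The four cases can be checked numerically from the eigenvalues of H. The matrices below are illustrative stand-ins, not the ones used for the book's Figure 5.

```python
import numpy as np

cases = {
    "positive-definite (convex)":   np.array([[2.0, 0.0], [0.0, 1.0]]),
    "positive-semidefinite":        np.array([[1.0, 0.0], [0.0, 0.0]]),
    "indefinite (saddle)":          np.array([[1.0, 0.0], [0.0, -1.0]]),
    "negative-definite (concave)":  np.array([[-2.0, 0.0], [0.0, -1.0]]),
}

for label, H in cases.items():
    eig = np.linalg.eigvalsh(H)    # symmetric H -> real eigenvalues
    print(f"{label:32s} eigenvalues = {eig}")
```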

Convergence properties of most minimization algorithms are analyzed through their application to convex quadratic functions. A general multivariate convex quadratic can be written as q(x) = ½xᵀAx + bᵀx + c, where A is a constant symmetric positive-semidefinite matrix.

Steepest descent is simple to implement and requires modest storage, O(n); however, progress toward a minimum may be very slow, especially near a solution. The convergence rate of SD when applied to a convex quadratic function, as in Eq. [22], is only linear. The associated convergence ratio is no greater than [(κ - 1)/(κ + 1)]², where κ, the condition number, is the ratio of the largest to smallest eigenvalues of A ... [Pg.30]
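
The slow linear convergence and its bound can be reproduced directly. In this sketch (the matrix and starting point are a standard worst-case illustration, not data from the text), A = diag(1, 10) gives κ = 10, and with the exact line search α = gᵀg/(gᵀAg) the per-iteration reduction in f matches the bound ((κ - 1)/(κ + 1))².

```python
import numpy as np

A = np.diag([1.0, 10.0])             # condition number kappa = 10
x = np.array([10.0, 1.0])            # classic worst-case starting point

f = lambda x: 0.5 * x @ A @ x        # convex quadratic, minimum at 0
bound = ((10.0 - 1.0) / (10.0 + 1.0)) ** 2
prev = f(x)
for k in range(10):
    g = A @ x                        # gradient
    alpha = (g @ g) / (g @ A @ g)    # exact line search along -g
    x = x - alpha * g
    ratio = f(x) / prev
    prev = f(x)
    print(f"iter {k+1}: f = {f(x):.3e}  ratio = {ratio:.4f}  (bound {bound:.4f})")
```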

The CG method was originally designed to minimize convex quadratic functions. Through several variations, it has also been extended to the general case.66-72... [Pg.30]

The first iteration in a CG method is the same as in SD, with a step along the current negative gradient vector. Successive directions are constructed differently so that they form a set of mutually conjugate vectors with respect to the (positive-definite) Hessian A of a general convex quadratic function. [Pg.31]
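
A minimal linear-CG sketch for a convex quadratic f(x) = ½xᵀAx - bᵀx (equivalently, solving Ax = b with A positive definite); A and b are made-up data. The first step is the steepest-descent step, and each later direction is built to be conjugate with respect to A, so in exact arithmetic CG terminates in at most n iterations.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
b = np.array([1.0, 2.0])

x = np.zeros(2)
r = b - A @ x             # residual = negative gradient of f
d = r.copy()              # first direction: steepest descent
for k in range(2):        # at most n = 2 iterations for n = 2
    alpha = (r @ r) / (d @ A @ d)
    x = x + alpha * d
    r_new = r - alpha * (A @ d)
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d  # new direction, A-conjugate to the previous ones
    r = r_new

print("x =", x, " residual norm =", np.linalg.norm(b - A @ x))
```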

Truncated Newton methods were introduced in the early 1980s111-114 and have been gaining popularity ever since.82,109,110,115-123 Their basis is the following simple observation. An exact solution of the Newton equation at every step is unnecessary and computationally wasteful in the framework of a basic descent method. That is, an exact Newton search direction is unwarranted when the objective function is not well approximated by a convex quadratic and/or the initial point is distant from a solution. Any descent direction will suffice in that case. As a solution to the minimization problem is approached, the quadratic approximation may become more accurate, and more effort in solution of the Newton equation may be warranted. [Pg.43]
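
A compact sketch of that observation: take only a few CG iterations on the Newton equation H(x)d = -∇f(x), with a tolerance that tightens as ‖∇f‖ shrinks. The test function, the forcing sequence, and the unit step are illustrative assumptions, not any particular published truncated-Newton code.

```python
import numpy as np

def f(x):    return np.sum(x**4) / 4 + np.sum(x**2) / 2   # smooth, convex
def grad(x): return x**3 + x
def hess(x): return np.diag(3 * x**2 + 1)                  # positive definite

def cg_truncated(H, g, tol, max_iter):
    # A few CG steps on H d = -g; any partial solution is a descent direction.
    d = np.zeros_like(g); r = -g; p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        alpha = (r @ r) / (p @ H @ p)
        d = d + alpha * p
        r_new = r - alpha * (H @ p)
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

x = np.array([2.0, -1.0, 0.5])
for k in range(8):
    g = grad(x)
    tol = min(0.5, np.sqrt(np.linalg.norm(g))) * np.linalg.norm(g)  # forcing term
    d = cg_truncated(hess(x), g, tol, max_iter=2)  # inexact Newton direction
    x = x + d                                      # unit step for brevity
    print(f"iter {k+1}: f = {f(x):.3e}  ||grad|| = {np.linalg.norm(grad(x)):.2e}")
```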

If the function f(x1, ..., xn) is convex (concave) on S, ... the quadratic form (Eq. (2)) in s variables is positive definite (negative definite). The quadratic form (Eq. (2)) in s variables is positive definite (negative definite) if [18]... [Pg.305]

The convexity of the function ln(m_k) with respect to k can be easily verified by building a difference table of ln(m_k). In the first column of the table (d0 in Tables 3.1 and 3.2 of Exercise 3.3), the natural logarithms of all moments are reported. Elements in subsequent columns are calculated from the previous ones by subtracting from the element in the same row the element in the row below it. Convexity of ln(m_k) is ensured by the positivity of the elements of the column corresponding to the second-order differences (i.e. d2). It is also interesting to note that the third-order differences vanish when ln(m_k) is a quadratic function of k, as for the moments of a log-normal distribution. The log-normal is therefore the... [Pg.56]
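
A sketch of that difference-table check, using the moments of a log-normal distribution, for which ln(m_k) = kμ + k²σ²/2 is exactly quadratic in k, so the second-order differences are positive and the third-order differences vanish. The parameters μ and σ are arbitrary.

```python
import numpy as np

mu, sigma = 0.0, 0.5
k = np.arange(6)
m = np.exp(mu * k + 0.5 * sigma**2 * k**2)   # log-normal moments m_k

d0 = np.log(m)        # first column of the difference table
d1 = np.diff(d0)      # first-order differences
d2 = np.diff(d1)      # second-order: positive  <=>  ln(m_k) convex in k
d3 = np.diff(d2)      # third-order: vanishes when ln(m_k) is quadratic in k

print("d2 (should be positive):", d2)
print("d3 (should vanish for a log-normal):", d3)
```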

The basic problem in nonlinear least squares is finding the values of θ that minimize the residual sum of squares, which is essentially a problem in optimization. Because the objective functions used in pharmacokinetic-pharmacodynamic modeling are of a quadratic nature (notice that Eq. (3.13) is raised to the power 2), they have a convex or curved structure that can be exploited to find an estimate of θ. For example, consider the data shown in Fig. 3.1. Using a 1-compartment model with parameters θ = (V, CL), the volume of distribution (V) can be systematically varied from 100 to 200 L and the clearance (CL) can be systematically varied from 2 to 60 L/h. With each parameter combination, the residual sum of squares can be calculated and plotted... [Pg.95]
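
A sketch of that grid evaluation. The dose, sampling times, and "observed" concentrations below are invented (noise-free model output), and the 1-compartment IV-bolus expression C(t) = (Dose/V)·exp(-(CL/V)·t) is the standard form; only the V and CL ranges come from the excerpt.

```python
import numpy as np

dose = 100.0
t = np.array([0.5, 1, 2, 4, 8, 12, 24])               # sampling times, h
true_V, true_CL = 150.0, 20.0
obs = dose / true_V * np.exp(-true_CL / true_V * t)    # "data" for brevity

V = np.linspace(100, 200, 101)     # L
CL = np.linspace(2, 60, 117)       # L/h
rss = np.empty((V.size, CL.size))
for i, v in enumerate(V):
    for j, cl in enumerate(CL):
        pred = dose / v * np.exp(-cl / v * t)
        rss[i, j] = np.sum((obs - pred) ** 2)          # residual sum of squares

i, j = np.unravel_index(np.argmin(rss), rss.shape)
print(f"RSS minimum near V = {V[i]:.0f} L, CL = {CL[j]:.1f} L/h")
```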

This formulation is a standard quadratic programming problem for which an analytical solution exists from the corresponding Kuhn-Tucker conditions. Different versions of the objective function are sometimes used, but the quadratic version is appealing theoretically because it allows investor preferences to be convex. [Pg.756]

Quasi-Newton Methods In some sense, quasi-Newton methods are an attempt to combine the best features of the steepest descent method with those of Newton's method. Recall that the steepest descent method performs well during early iterations and always decreases the value of the function, whereas Newton's method performs well near the optimum but requires second-order derivative information. Quasi-Newton methods are designed to start like the steepest descent method and finish like Newton's method while using only first-order derivative information. The basic idea was originally proposed by Davidon (1959) and subsequently developed by Fletcher and Powell (1963). An additional feature of quasi-Newton methods is that the minimum of a convex quadratic function can be found in at most n iterations if exact line searches are used. The basic... [Pg.2551]
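
A sketch of one quasi-Newton variant, the DFP inverse-Hessian update associated with Davidon, Fletcher, and Powell, applied to a convex quadratic with exact line searches; the data H, b, and the starting point are invented. After n = 2 iterations the matrix D equals the inverse Hessian (echoing the [Pg.2552] excerpt above) and the minimum is found exactly.

```python
import numpy as np

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hessian of f(x) = (1/2) x^T H x - b^T x
b = np.array([1.0, 2.0])
grad = lambda x: H @ x - b

x = np.array([5.0, -3.0])
D = np.eye(2)                             # start like steepest descent
for _ in range(2):                        # n = 2 iterations suffice
    g = grad(x)
    d = -D @ g                            # quasi-Newton direction
    alpha = -(g @ d) / (d @ H @ d)        # exact line search on a quadratic
    s = alpha * d
    x_new = x + s
    y = grad(x_new) - g
    # DFP update: D <- D + s s^T/(s^T y) - D y y^T D/(y^T D y)
    D = D + np.outer(s, s) / (s @ y) - (D @ np.outer(y, y) @ D) / (y @ D @ y)
    x = x_new

print("D =\n", D)
print("H^{-1} =\n", np.linalg.inv(H))     # D has become the inverse Hessian
print("gradient at final x:", grad(x))    # ~ [0, 0]
```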

