
Quadratic function minimization

Algorithm 4.1 Linear system solution or quadratic function minimization when A is symmetric positive definite... [Pg.163]

Algorithm 4.2 Preconditioned Conjugate Gradient (PCG) for symmetric system solution or quadratic function minimization... [Pg.167]
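A minimal PCG sketch under stated assumptions (a Jacobi diagonal preconditioner and a small dense test matrix chosen for illustration; this is not Algorithm 4.2 itself):

```python
# Preconditioned conjugate gradient for an SPD system A x = b, which is
# equivalent to minimizing q(x) = 0.5 x^T A x - b^T x.
import numpy as np

def pcg(A, b, tol=1e-10):
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner (assumed choice)
    x = np.zeros_like(b)
    r = b - A @ x                     # residual = -gradient of q at x
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    while np.linalg.norm(r) > tol:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)    # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p          # next conjugate direction
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # assumed SPD test matrix
b = np.array([1.0, 2.0])
print(pcg(A, b))                          # matches np.linalg.solve(A, b)
```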

Another method for solving nonlinear programming problems is based on quadratic programming (QP). Quadratic programming is an optimization procedure that minimizes a quadratic objective function subject to linear inequality constraints, equality constraints, or both. For example, a quadratic function of two variables x1 and x2 would be of the general form ... [Pg.46]
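The excerpt's trailing ellipsis cuts off the formula. As a hedged completion, the standard general form of a quadratic in two variables (the coefficient names a through f are illustrative, not the source's) is

$$ f(x_1, x_2) = a x_1^2 + b x_2^2 + c x_1 x_2 + d x_1 + e x_2 + f, $$

and a QP pairs such an objective with linear constraints of the form $p_1 x_1 + p_2 x_2 \le q$ or $p_1 x_1 + p_2 x_2 = q$.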

This equation is the equivalent of Eq. (9-12) for the induced dipole model, but with one important difference. Equation (9-13), the derivative of Eq. (9-12), is linear, and standard matrix methods can be used to solve for the dipoles μ because Eq. (9-12) is a quadratic function of μ; Eq. (9-54), by contrast, is not a quadratic function of the displacements d, and thus matrix methods are usually not used to find the Drude particle displacements that minimize the energy. [Pg.239]
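In schematic form (generic notation, not the chapter's own equations): an energy quadratic in the dipoles, $U(\mu) = \tfrac{1}{2}\,\mu^T A \mu - E^T \mu$, has the linear stationarity condition

$$ A \mu = E, $$

which standard matrix methods solve directly, whereas a non-quadratic energy in $d$ yields no such linear system, so the minimizing displacements must be found iteratively.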

In this example, we minimize a simple quadratic function f(x) = x^2 - x, illustrated in Figure E5.1a, using one iteration of each of the methods presented in Section 5.3. [Pg.161]
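A minimal sketch of what one such iteration looks like, using Newton's method on this function (the starting point is assumed; the book's worked numbers may differ):

```python
# One Newton iteration on the quadratic f(x) = x^2 - x from an assumed
# starting point x0 = 3.0.
def f(x):
    return x**2 - x

def f_prime(x):
    return 2 * x - 1

def f_double_prime(x):
    return 2.0

x0 = 3.0                                    # assumed starting point
x1 = x0 - f_prime(x0) / f_double_prime(x0)  # Newton step
print(x1)  # 0.5, the exact minimizer: one iteration suffices on a quadratic
```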

Another simple optimization technique is to select n fixed search directions (usually the coordinate axes) for an objective function of n variables. Then f(x) is minimized in each search direction sequentially using a one-dimensional search. This method is effective for a quadratic function of the form... [Pg.185]
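A minimal sketch of this sequential univariate search, assuming scipy for the one-dimensional minimization and a separable quadratic (no cross terms), for which a single sweep over the axes reaches the minimum:

```python
# Sequential one-dimensional searches along the coordinate axes
# (assumed example, not the book's code).
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return 2.0 * x[0]**2 + x[1]**2 - 4.0 * x[0] - 2.0 * x[1]

x = np.array([5.0, -3.0])            # assumed starting point
for i in range(len(x)):              # one pass over the fixed directions
    e_i = np.eye(len(x))[i]          # search direction = coordinate axis i
    t_best = minimize_scalar(lambda t: f(x + t * e_i)).x
    x = x + t_best * e_i
print(x)  # [1.0, 1.0], the exact minimizer of this separable quadratic
```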

In optimization, the matrix Q is the Hessian matrix of the objective function, H. For a quadratic function f(x) of n variables, in which H is a constant matrix, you are guaranteed to reach the minimum of f(x) in n stages if you minimize exactly on each stage (Dennis and Schnabel, 1996). In n dimensions, many different sets of conjugate directions exist for a given matrix Q. In two dimensions, however, once you choose an initial direction s1, then s2 is fully specified by Q, as illustrated in Example 6.1. [Pg.187]
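For reference, the conjugacy condition this guarantee rests on: directions $s^1$ and $s^2$ are conjugate with respect to $Q$ when

$$ (s^1)^T Q\, s^2 = 0, $$

so in two dimensions, fixing $s^1$ and $Q$ determines $s^2$ up to a scalar multiple.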

For a quadratic function it can be shown that these successive search directions are conjugate. After n iterations (k = n), the quadratic function is minimized. For a nonquadratic function, the procedure cycles again, with x^(n+1) becoming x^(0). [Pg.195]

If the BFGS algorithm is applied to a positive-definite quadratic function of n variables and the line search is exact, it will minimize the function in at most n iterations (Dennis and Schnabel, 1996, Chapter 9). This is also true for some other updating formulas. For nonquadratic functions, a good BFGS code usually requires more iterations than a comparable Newton implementation and may not be as accurate. Each BFGS iteration is generally faster, however, because second derivatives are not required and the system of linear equations (6.15) need not be solved. [Pg.208]
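A minimal sketch of the behavior described, assuming scipy's BFGS implementation and an illustrative positive-definite quadratic (neither is from the source):

```python
# BFGS on a positive-definite quadratic q(x) = 0.5 x^T A x - b^T x.
import numpy as np
from scipy.optimize import minimize

A = np.diag([1.0, 4.0, 9.0])      # assumed SPD Hessian, n = 3
b = np.array([1.0, 1.0, 1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b              # no second derivatives needed by BFGS

res = minimize(f, x0=np.zeros(3), jac=grad, method="BFGS")
print(res.nit, res.x)  # few iterations; x -> A^{-1} b = [1, 0.25, 1/9]
```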

Huang, H.Y., "A Unified Approach to Quadratically Convergent Algorithms for Function Minimization," JOTA 1970, 6(3), 269. [Pg.53]

Convergence properties of most minimization algorithms are analyzed through their application to convex quadratic functions. A general multivariate convex quadratic can be written as... [Pg.28]
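The excerpt's formula is cut off; the standard form of a general multivariate convex quadratic, presumably what follows in the source, is

$$ q(x) = \tfrac{1}{2}\, x^T A x + b^T x + c, $$

with $A$ symmetric positive semidefinite (positive definite in the strictly convex case).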

The CG method was originally designed to minimize convex quadratic functions. Through several variations, it has also been extended to the general case [66-72]. [Pg.30]
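A minimal sketch of linear CG in its original setting, minimizing a convex quadratic $q(x) = \tfrac{1}{2} x^T A x - b^T x$ (the test matrix and numpy usage are assumptions, not from the source):

```python
# Linear conjugate gradient: minimizes q(x) = 0.5 x^T A x - b^T x,
# equivalently solves A x = b for SPD A.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros_like(b)
    r = b - A @ x          # residual = -gradient of q at x
    p = r.copy()           # first search direction
    while np.linalg.norm(r) > tol:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line search step
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        p = r_new + beta * p              # next conjugate direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # assumed SPD test matrix
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # converges in <= n = 2 steps
```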

Restricted step methods of the type discussed below were originally proposed by Levenberg and Marquardt [37,38] and extended to minimization algorithms by Goldfeld, Quandt, and Trotter [39]. Recently, Simons [13] discussed the restricted step method with respect to molecular energy hypersurfaces. The basic idea, again, is that the energy hypersurface E(x) can reasonably be approximated, at least locally, by the quadratic function... [Pg.259]
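A hedged restatement of that local model and the step restriction, in generic notation (not necessarily the source's symbols):

$$ E(x_0 + \Delta x) \approx E(x_0) + g^T \Delta x + \tfrac{1}{2}\, \Delta x^T H \Delta x, \qquad \|\Delta x\| \le h, $$

where $g$ and $H$ are the gradient and Hessian at $x_0$ and $h$ is the trust radius; the Levenberg-Marquardt construction enforces the restriction by shifting the Hessian, $\Delta x = -(H + \lambda I)^{-1} g$, with $\lambda \ge 0$ chosen so the step length does not exceed $h$.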

Subroutine GRQP minimizes the quadratic function S(θ) of Eq. (6.3-7) over the feasible region... [Pg.103]

It is important to underline the fact that, in the case where A is a linear operator, where D and M are Hilbert spaces, and where s(m) is a quadratic functional, the solution of the minimization problem (2.44) is unique. Note that a quadratic functional is a functional s(m) with the property... [Pg.44]

The Newton-Raphson approach is another minimization method. It is assumed that the energy surface near the minimum can be described by a quadratic function. In the Newton-Raphson procedure, the second-derivative (F) matrix needs to be inverted and is then used to determine the new atomic coordinates. F-matrix inversion makes the Newton-Raphson method computationally demanding. Simplifying approximations for the F-matrix inversion have been helpful. In the MM2 program a modified block-diagonal Newton-Raphson procedure is incorporated, whereas a full Newton-Raphson method is available in MM3 and MM4. The full Newton-Raphson method is necessary for the calculation of vibrational spectra. Many commercially available packages offer a variety of methods for geometry optimization. [Pg.723]
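A minimal sketch of a single Newton-Raphson step on a toy quadratic energy surface (the function, starting coordinates, and the use of a linear solve in place of explicit F-matrix inversion are all assumptions for illustration, not MM2/MM3 code):

```python
# Newton-Raphson step on an assumed quadratic toy "energy" in 2 coordinates.
import numpy as np

def energy(x):
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2 + 0.5 * x[0] * x[1]

def gradient(x):
    return np.array([2.0 * (x[0] - 1.0) + 0.5 * x[1],
                     4.0 * (x[1] + 0.5) + 0.5 * x[0]])

def hessian(x):
    return np.array([[2.0, 0.5],
                     [0.5, 4.0]])   # the F matrix; constant for a quadratic

x = np.array([3.0, 3.0])                         # assumed starting coordinates
step = np.linalg.solve(hessian(x), gradient(x))  # solve instead of inverting F
x_new = x - step
print(x_new, energy(x_new))  # lands on the exact minimum: surface is quadratic
```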

In order to determine the scaling factor, and thereby the step length, the quadratic function has to be minimized with respect to... [Pg.1097]
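In generic notation (a hedged sketch, not the source's equation): for a quadratic model with gradient $g$ and Hessian $H$, minimizing $f(x + t\, d)$ with respect to the scalar $t$ along a direction $d$ gives the exact step length

$$ t^* = -\,\frac{g^T d}{d^T H d}. $$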

Equation (3.11) is sometimes called the objective function, although it should be stressed that there are many different types of objective functions, such as extended least squares. For purposes of this book, an objective function will be defined as any quadratic function that must be minimized to obtain a set of parameter estimates. For this chapter the focus will be on the residual sum of squares as the objective function, although in later chapters more complex objective functions may be considered. [Pg.94]
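In standard notation (generic symbols, not necessarily those of Eq. (3.11)), the residual sum of squares used here as the objective function is

$$ S(\theta) = \sum_{i=1}^{n} \big( y_i - f(x_i;\, \theta) \big)^2, $$

minimized over the parameter vector $\theta$ to obtain the estimates.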

Newton's method is based on using a truncated Taylor series to approximate f(x) by a quadratic function and then minimizing this quadratic approximation. Thus, near any point x_k... [Pg.189]
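A hedged completion of the expansion the excerpt begins (standard form; the source's own notation may differ):

$$ f(x) \approx f(x_k) + \nabla f(x_k)^T (x - x_k) + \tfrac{1}{2}\,(x - x_k)^T H(x_k)\,(x - x_k), $$

and minimizing the right-hand side gives the Newton step $x_{k+1} = x_k - H(x_k)^{-1} \nabla f(x_k)$.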

In these methods, also known as quasi-Newton methods, the approximate Hessian is improved (updated) based on the results of previous steps. For the exact Hessian and a quadratic surface, the quasi-Newton equation Δg = H Δq and its analogue H^(-1) Δg = Δq must hold (where Δg = g^(k+1) - g^(k), and similarly for Δq). These equations, which have only n components, are obviously insufficient to determine the n(n + 1)/2 independent components of the Hessian or its inverse. Therefore, the updating is arbitrary to a certain extent. It is desirable to have an updating scheme that converges to the exact Hessian for a quadratic function, preserves the quasi-Newton conditions obtained in previous steps, and, for minimization, keeps the Hessian positive definite. Updating can be performed on either F or its inverse, the approximate Hessian; in the former case repeated matrix inversion can be avoided. All updates use dyadic products, usually built... [Pg.2336]
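As an illustration of an update built from dyadic products, here is the widely used BFGS formula for the Hessian approximation (a standard form, not necessarily the one the source develops):

$$ H_{k+1} = H_k + \frac{\Delta g\, \Delta g^T}{\Delta g^T \Delta q} - \frac{H_k\, \Delta q\, \Delta q^T H_k}{\Delta q^T H_k\, \Delta q}, $$

which satisfies the quasi-Newton condition $H_{k+1} \Delta q = \Delta g$ and keeps $H_{k+1}$ positive definite whenever $\Delta q^T \Delta g > 0$.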

The optimization problem in Eq. (5.146) is a standard situation in optimization, that is, minimization of a quadratic function with linear constraints, and it can be solved by applying Lagrangian theory. From this theory it follows that the weight vector of the decision function is given by a linear combination of the training data and the Lagrange multipliers α by... [Pg.199]
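A hedged sketch of the linear combination the excerpt cuts off, in the standard support-vector form (the labels $y_i$ and training vectors $x_i$ are assumed notation):

$$ w = \sum_{i=1}^{n} \alpha_i\, y_i\, x_i, $$

so only the training points with nonzero multipliers $\alpha_i$ (the support vectors) contribute to the decision function.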

Another way to interpret the procedure described is that the curvature along an arbitrary direction in the surface, û = l r1 + m r2, is a quadratic function of the values of l and m. Diagonalizing this quadratic form, subject to the constraint that û is a unit vector, is mathematically equivalent to the minimization/maximization of κ. Thus, the extremal curvatures, κ_a and κ_b, are determined by the extremal values of l and m, which we denote as l* and m*. For directions on the surface close to these extremal values, the expansion of the curvature as a function of (l - l*) and (m - m*) has no linear terms since... [Pg.38]

This strategy searches for the d that minimizes the quadratic function... [Pg.121]

If G_i is unavailable and only its approximation, B_i, is known, this strategy searches for the d_i that minimizes the quadratic function... [Pg.121]
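A hedged sketch of the quadratic model such strategies minimize, in generic notation (the source's equations may differ): with gradient $g_i$ and Hessian $G_i$, or its approximation $B_i$,

$$ d_i = \arg\min_d \; g_i^T d + \tfrac{1}{2}\, d^T B_i d, $$

which for positive-definite $B_i$ yields the quasi-Newton direction $d_i = -B_i^{-1} g_i$ (with $B_i = G_i$ giving the Newton direction).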

In the BzzMath library, the Solve function that takes an object of the BzzMatrixSparseSymmetricLocked class as its first argument solves a linear system, or the equivalent quadratic function minimization, via the CG method. [Pg.165]

