Optimizing Quadratic Functions

Data fitting is a typical example of an optimization problem where the parameters enter the function in a quadratic fashion. Consider, for example, the problem of determining a set of force field partial charges by minimizing an error function ErrF that measures the fit to the electrostatic potential sampled at a number of points surrounding the molecule (Section 2.2.6). [Pg.381]

The best set of parameters is determined by setting all the first derivatives of ErrF to zero. [Pg.382]

By rearrangement, this gives a set of coupled linear equations. [Pg.382]

These N equations with N unknowns can also be written in matrix-vector notation. [Pg.382]

The formal solution can be obtained by multiplying with the inverse of the coefficient matrix. [Pg.382]
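
As a minimal sketch of this procedure, the error function can be taken as ErrF(q) = ||Aq - v||^2, where A maps the charges to the potential at the sample points and v holds the sampled reference potential; the names A, v, q and the random stand-in data below are illustrative assumptions, not taken from the source.

```python
# Hedged sketch: least-squares fitting of partial charges to a sampled
# electrostatic potential. A, v, q and the random data are illustrative
# stand-ins; a real A would contain 1/|r_k - R_i| terms.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_charges = 200, 4
A = rng.normal(size=(n_points, n_charges))   # charge -> potential map
v = rng.normal(size=n_points)                # sampled reference potential

# ErrF(q) = ||A q - v||^2 is quadratic in q; setting all first
# derivatives to zero gives the coupled linear (normal) equations
#   (A^T A) q = A^T v
lhs, rhs = A.T @ A, A.T @ v

# The formal solution multiplies by the inverse matrix; numerically one
# solves the linear system instead of forming the inverse explicitly.
q = np.linalg.solve(lhs, rhs)
print(q)
```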


The simultaneous optimization of the LCAO-MO and Cl coefficients performed within an MCSCF calculation is a quite formidable task. The variational energy functional is a quadratic function of the Cl coefficients, and so one can express the stationary conditions for these variables in the secular form ... [Pg.491]
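
The secular form referred to here amounts to a generalized eigenvalue problem; below is a minimal sketch under that reading, with made-up 3x3 matrices standing in for the Hamiltonian and overlap matrices of a real MCSCF calculation.

```python
# Hedged sketch: solving secular equations (H - E S) c = 0 for a set of
# CI-like coefficients. The 3x3 matrices are illustrative, not MCSCF data.
import numpy as np
from scipy.linalg import eigh

H = np.array([[-1.0,  0.1,  0.0],
              [ 0.1, -0.5,  0.2],
              [ 0.0,  0.2,  0.3]])
S = np.eye(3)                      # overlap matrix (orthonormal basis here)

E, C = eigh(H, S)                  # roots E, coefficient vectors in columns of C
print(E[0], C[:, 0])               # lowest root and its coefficient vector
```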

Optimization of Unconstrained Objective. Assume the objective function F is a function of n independent variables ui. A computer program, given the values for the independent variables, can calculate F and its derivatives with respect to each ui. Assume that F is well approximated as an as-yet-unknown quadratic function in u. [Pg.485]

The experimental designs discussed in Chapters 24-26 for optimization can also be used for finding the product composition or processing condition that is optimal in terms of sensory properties. In particular, central composite designs and mixture designs are widely used. The analysis of the sensory response is usually in the form of a fully quadratic function of the experimental factors. The sensory response itself may be the mean score of a panel of trained panellists. One may consider such a trained panel as a sensitive instrument for measuring the perceived intensities that describe the sensory characteristics of a food product. [Pg.444]

Another method for solving nonlinear programming problems is based on quadratic programming (QP). Quadratic programming is an optimization procedure that minimizes a quadratic objective function subject to linear inequality or equality constraints (or both). For example, a quadratic function of two variables x1 and x2 would be of the general form ... [Pg.46]
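
As a sketch of the idea, the quadratic objective can be written f(x) = 1/2 x^T Q x + c^T x; with a single linear equality constraint a^T x = b, the minimizer follows from the linear KKT system. All numbers below are illustrative, not from the source.

```python
# Hedged sketch: a tiny QP, min 1/2 x^T Q x + c^T x  s.t.  a^T x = b,
# solved via the KKT conditions. All data are illustrative.
import numpy as np

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])         # symmetric positive definite
c = np.array([-1.0, -2.0])
a = np.array([1.0, 1.0])           # constraint x1 + x2 = 1
b = 1.0

# Lagrangian stationarity gives the linear system
#   [Q   a] [x  ]   [-c]
#   [a^T 0] [lam] = [ b]
K = np.block([[Q, a[:, None]],
              [a[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.append(-c, b))
x, lam = sol[:2], sol[2]
print(x, lam)
```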

Another simple optimization technique is to select n fixed search directions (usually the coordinate axes) for an objective function of n variables. Then f(x) is minimized in each search direction sequentially using a one-dimensional search. This method is effective for a quadratic function of the form... [Pg.185]
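
A sketch of this univariate-search idea on an assumed quadratic f(x) = 1/2 x^T A x - b^T x (the source's specific form is not reproduced): exact minimization along each coordinate axis then reduces to a Gauss-Seidel-style update.

```python
# Hedged sketch: sequential one-dimensional minimization along the
# coordinate axes of f(x) = 1/2 x^T A x - b^T x. Data are illustrative.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)

for sweep in range(25):            # cycle through the fixed directions
    for i in range(len(x)):
        # exact line minimum along axis i: zero the i-th gradient component
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]

print(x, np.linalg.solve(A, b))    # search result vs. exact minimum
```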

In optimization the matrix Q is the Hessian matrix of the objective function, H. For a quadratic function f(x) of n variables, in which H is a constant matrix, you are guaranteed to reach the minimum of f(x) in n stages if you minimize exactly on each stage (Dennis and Schnabel, 1996). In n dimensions, many different sets of conjugate directions exist for a given matrix Q. In two dimensions, however, if you choose an initial direction s1 and Q, then s2 is fully specified, as illustrated in Example 6.1. [Pg.187]

The application of standard nonlinear programming techniques of constrained optimization to the analysis of the mean and variance response surfaces has been investigated by Del Castillo and Montgomery [34]. These techniques are appropriate since both the primary and secondary responses are usually quadratic functions. [Pg.40]

The trust radius h reflects our confidence in the SO model. For highly anharmonic functions the trust region must be set small; for quadratic functions it is infinite. Clearly, during an optimization we must be prepared to modify h based on our experience with the function. We return to the problem of updating the trust radius later. [Pg.304]
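
One conventional updating scheme compares the actual reduction of the function with the reduction predicted by the quadratic model; the thresholds and factors below are common textbook choices, not values from the source.

```python
# Hedged sketch: trust-radius update from the ratio of actual to
# predicted reduction. Assumes predicted_reduction > 0.
def update_trust_radius(h, actual_reduction, predicted_reduction, h_max=1.0):
    rho = actual_reduction / predicted_reduction
    if rho < 0.25:                 # model poor (anharmonic region): shrink
        return h / 4.0
    if rho > 0.75:                 # model good (nearly quadratic): expand
        return min(2.0 * h, h_max)
    return h                       # model adequate: keep h unchanged
```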

Fig. 2.17 The potential energy of a diatomic molecule near the equilibrium geometry is approximately a quadratic function of the bond length. Given an input structure (i.e. given the bond length qi), a simple algorithm would enable the bond length of the optimized structure to be found in one step, if the function were strictly quadratic.
Newton-type. Finally, we come to those algorithms which depend on a knowledge of A and A^-1 (the Newton-type algorithms). If we are dealing with quadratic functions, then once we know A^-1 it follows immediately from equation (22) that we can reach the minimum in just one step, so that we need not trouble about directions of descent. However, if the function is not quadratic, then the problem of optimal directions again becomes... [Pg.46]

Therefore, the controller is a linear time-invariant controller, and no online optimization is needed. Linear control theory, for which there is a vast literature, can equivalently be used in the analysis or design of unconstrained MPC (Garcia and Morari, 1982). A similar result can be obtained for several MPC variants, as long as the objective function in Eq. (4) remains a quadratic function of the optimal input sequence uopt[k+i-1|k] and the process model in Eq. (22) remains linear in uopt[k+i-1|k]. Incidentally, notice that the appearance of the measured process output y[k] in Eq. (22) introduces the measurement information needed for MPC to be a feedback controller. This is in the spirit of classical linear optimal control theory, in which the controlled... [Pg.144]

The Newton-Raphson approach is another minimization method. It is assumed that the energy surface near the minimum can be described by a quadratic function. In the Newton-Raphson procedure the second derivative or F matrix needs to be inverted and is then used to determine the new atomic coordinates. F matrix inversion makes the Newton-Raphson method computationally demanding. Simplifying approximations for the F matrix inversion have been helpful. In the MM2 program, a modified block diagonal Newton-Raphson procedure is incorporated, whereas a full Newton-Raphson method is available in MM3 and MM4. The use of the full Newton-Raphson method is necessary for the calculation of vibrational spectra. Many commercially available packages offer a variety of methods for geometry optimization. [Pg.723]
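
In the quadratic model the step follows from the gradient g and the second-derivative matrix F as x_new = x_old - F^-1 g; below is a minimal sketch on an assumed two-dimensional quadratic surface (all numbers illustrative).

```python
# Hedged sketch: one Newton-Raphson step. On an exactly quadratic surface
# this single step reaches the minimum; otherwise it is iterated.
import numpy as np

F = np.array([[4.0, 1.0],
              [1.0, 3.0]])         # second-derivative matrix (illustrative)
g0 = np.array([2.0, -1.0])         # gradient at the starting coordinates x0
x0 = np.zeros(2)

# x1 = x0 - F^{-1} g0; solving the linear system avoids forming F^{-1}
x1 = x0 - np.linalg.solve(F, g0)
print(x1)
```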

Several features of the optimization problem are apparent in Figure 19.1(a). The model is nonlinear with respect to the parameters; nevertheless, the objective function is well behaved near the solution, where it can be approximated by a quadratic function. The contours projected onto the base of the plot have an elliptical shape. The major axis of the ellipse does not lie along either axis. [Pg.365]

Therefore, lines of constant objective function are approximately quadratic functions, as shown in Figure 9.6, and we use H as the A matrix. This quadratic approximation using the Hessian matrix evaluated at the optimum is accurate if we are in the neighborhood of the optimal parameter values. We can also obtain order-of-magnitude confidence intervals using the relation ... [Pg.279]

The optimization problem in Eq. (5.146) is a standard situation in optimization, that is, minimization of a quadratic function with linear constraints, and it can be solved by applying Lagrangian theory. From this theory it follows that the weight vector of the decision function is given as a linear combination of the training data weighted by the Lagrange multipliers α ... [Pg.199]
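
Read this way, the weight vector is w = sum_i alpha_i y_i x_i; a minimal sketch with illustrative training vectors, labels, and multipliers (none taken from the source):

```python
# Hedged sketch: weight vector as a multiplier-weighted combination of
# the training data, w = sum_i alpha_i y_i x_i. Data are illustrative.
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.0]])  # training vectors
y = np.array([1.0, 1.0, -1.0])                        # class labels
alpha = np.array([0.3, 0.0, 0.3])                     # Lagrange multipliers

w = (alpha * y) @ X
print(w)
```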

In order to optimize acceptance, subject to constraints on sensory levels, we turn the problem into a straightforward optimization problem: maximize a quadratic function (viz., liking) subject to ingredient constraints on the concentrations, and subject to linear constraints (viz., on the sensory characteristics). [Pg.43]

The nonzero vectors d1, . . . , dn are said to be conjugate with respect to the positive definite matrix H if they are linearly independent and di^T H dj = 0 for i ≠ j. A method that generates such directions when applied to a quadratic function with Hessian matrix H is called a conjugate direction method. These methods will locate the minimum of a quadratic function in a finite number of iterations, and they can also be applied iteratively to optimize nonquadratic functions. [Pg.2552]
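
The linear conjugate gradient method is the best-known member of this family; below is a minimal sketch on f(x) = 1/2 x^T H x - b^T x with an illustrative positive definite H, reaching the minimum in n = 2 iterations.

```python
# Hedged sketch: linear conjugate gradient on a quadratic. Successive
# directions satisfy d_i^T H d_j = 0 (i != j), so at most n steps are
# needed. H and b are illustrative.
import numpy as np

H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
r = b - H @ x                      # residual = negative gradient
d = r.copy()
for _ in range(len(b)):            # n iterations suffice for a quadratic
    alpha = (r @ r) / (d @ H @ d)  # exact line minimization along d
    x = x + alpha * d
    r_new = r - alpha * (H @ d)
    d = r_new + ((r_new @ r_new) / (r @ r)) * d   # next H-conjugate direction
    r = r_new

print(x, np.linalg.solve(H, b))    # CG result vs. exact minimum
```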

If U were accurately a quadratic function of the coordinates in the region near point 1, then the second partial derivatives (the elements of the Hessian matrix) would be constants in this region, and the subscript 1 on the second partials would be unnecessary. Accurate ab initio SCF calculation of the second derivatives is very time-consuming, so one usually uses a quasi-Newton method, meaning that one starts with an approximation for the Hessian and improves this approximation as the geometry optimization proceeds. We therefore write... [Pg.535]
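
A widely used improvement rule of this quasi-Newton kind is the BFGS update, sketched below; it is one standard choice, not necessarily the one the source has in mind.

```python
# Hedged sketch: BFGS update of an approximate Hessian B from a geometry
# step s = x_new - x_old and gradient change y = g_new - g_old.
# Assumes the curvature condition y @ s > 0 holds.
import numpy as np

def bfgs_update(B, s, y):
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
```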

It is possible to optimize a large-scale unconstrained system only if the function can be reasonably approximated by a quadratic function. [Pg.154]

In order to bypass the need for expert opinion, the concept of Pareto optimality is used. This concept is widely used in multiobjective optimization and can be illustrated through a simple example. Consider, as in Goldberg (1989), these two quadratic functions ... [Pg.2027]
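
A sketch of the idea with an often-quoted pair of quadratics, f1(x) = x^2 and f2(x) = (x - 2)^2 (an illustrative choice; the source's exact functions are not reproduced here): every x in [0, 2] is Pareto optimal, since neither objective can be improved there without worsening the other.

```python
# Hedged sketch: Pareto-optimal set for two quadratic objectives.
import numpy as np

x = np.linspace(-1.0, 3.0, 401)
f1, f2 = x**2, (x - 2.0)**2

def pareto_mask(f1, f2):
    # a point is Pareto optimal if no other point is at least as good in
    # both objectives and strictly better in one
    keep = np.ones(len(f1), dtype=bool)
    for i in range(len(f1)):
        dominated = (f1 <= f1[i]) & (f2 <= f2[i]) & ((f1 < f1[i]) | (f2 < f2[i]))
        keep[i] = not dominated.any()
    return keep

opt = x[pareto_mask(f1, f2)]
print(opt.min(), opt.max())        # approximately 0.0 and 2.0
```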

Gradient methods discussed above use a quadratic function (energy, gradient and approximate Hessian) to model the energy surface near the transition state. Distance-weighted interpolants provide a more flexible functional form that can interpolate arbitrarily spaced points with a smooth differentiable function. For a gradient-based optimization, the Shepard interpolation functions seem appropriate... [Pg.277]
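
A minimal sketch of the distance-weighting idea follows (classic inverse-distance Shepard weights; real PES interpolants weight local Taylor expansions rather than bare values, and the power p and data here are illustrative).

```python
# Hedged sketch: Shepard (inverse-distance-weighted) interpolation of
# values known at arbitrarily spaced points.
import numpy as np

def shepard(x, points, values, p=4):
    d = np.linalg.norm(points - x, axis=1)
    if np.any(d == 0.0):           # exactly at a data point
        return values[np.argmin(d)]
    w = d ** (-p)                  # weights decay smoothly with distance
    return (w @ values) / w.sum()

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([0.0, 1.0, 2.0])
print(shepard(np.array([0.4, 0.4]), points, values))
```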

