
Quasi-Newton updates

For process optimization problems, the sparse approach has been further developed in studies by Kumar and Lucia (1987), Lucia and Kumar (1988), and Lucia and Xu (1990). Here they formulated a large-scale approach that incorporates indefinite quasi-Newton updates and can be tailored to specific process optimization problems. In the last study they also developed a sparse quadratic programming approach based on indefinite matrix factorizations due to Bunch and Parlett (1971). Also, a trust region strategy is substituted for the line search step mentioned above. This approach was successfully applied to the optimization of several complex distillation column models with up to 200 variables. [Pg.203]

Dealing with Z^T B Z directly has several advantages if n − m is small. Here the matrix is dense and the sufficient conditions for local optimality require that Z^T B Z be positive definite. Hence, the quasi-Newton update formula can be applied directly to this matrix. Several variations of this basic algorithm... [Pg.204]

Table 1. Convergence in a CASSCF calculation on water, with a DZP basis. The approximate super-CI method was used with and without quasi-Newton update. The active space comprised 8 orbitals (4a1, 2b1, 2b2 in C2v symmetry), yielding 492 CSFs. The 1s orbital was inactive.
C Scalar used in a quasi-Newton update of a Jacobian, Sec. 4.2.6. [Pg.202]

The computational effort in evaluating the Hessian matrix is significant, and quasi-Newton approximations have been used to reduce this effort. The Wilson-Han-Powell method is an enhancement to successive quadratic programming where the Hessian matrix of the Lagrangian is replaced by a quasi-Newton update formula such as the BFGS algorithm. Consequently, only first partial derivative information is required, and this is obtained from finite difference approximations of the Lagrangian function. [Pg.2447]

The most efficient methods that use gradients, either numerical or analytic, are based upon quasi-Newton update procedures, such as those described below. They are used to approximate the Hessian matrix H, or its inverse G. Equation (C.4) is then used to determine the step direction q to the nearest minimum. The inverse Hessian matrix determines how far to move along a given gradient component of f, and how the various coordinates are coupled. The success of methods that use approximate Hessians rests upon the observation that when ∇f = 0, an extreme point is reached regardless of the accuracy of H, or its inverse, provided that they are reasonable. [Pg.448]
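
To make the step concrete, here is a minimal sketch of the quasi-Newton step direction described above. The quadratic test function, the variable names, and the use of the exact inverse Hessian for reference are illustrative assumptions, not details from the excerpt.

```python
import numpy as np

def qn_step(G, g):
    """Quasi-Newton step direction q = -G g, where G approximates
    the inverse Hessian (the role played by Eq. (C.4) in the text)."""
    return -G @ g

# Example: f(x) = 0.5 x^T A x with a non-diagonal A, so the
# off-diagonal entries of G show how the coordinates are coupled.
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, -1.0])
g = A @ x                    # gradient of the quadratic at x
G = np.linalg.inv(A)         # exact inverse Hessian, for reference
print(qn_step(G, g))         # exact Newton step: lands at the minimum
```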

The most popular quasi-Newton update procedures used at this time... [Pg.448]

In these methods, also known as quasi-Newton methods, the approximate Hessian is improved (updated) based on the results in previous steps. For the exact Hessian and a quadratic surface, the quasi-Newton equation Δg = H Δq and its analogue G Δg = Δq must hold (where Δg = g_(k+1) − g_(k) and Δq = q_(k+1) − q_(k))... [Pg.2336]

The development of an SC procedure involves a number of important decisions: (1) What variables should be used? (2) What equations should be used? (3) How should variables be ordered? (4) How should equations be ordered? (5) How should flexibility in specifications be provided? (6) Which derivatives of physical properties should be retained? (7) How should equations be linearized? (8) If Newton or quasi-Newton linearization techniques are employed, how should the Jacobian be updated? (9) Should corrections to unknowns that are computed at each iteration be modified to dampen or accelerate the solution or be kept within certain bounds? (10) What convergence criterion should be applied? [Pg.1286]

Both difficulties are eliminated by replacing ∇²L by a positive-definite quasi-Newton (QN) approximation B, which is updated using only values of L and ∇L. (See Section 6.4 for a discussion of QN updates.) Most SQP algorithms use Powell's modification (see Nash and Sofer, 1996) of the BFGS update. Hence the QP subproblem becomes... [Pg.304]
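
The sketch below shows one common form of the damped BFGS update usually attributed to Powell: the curvature pair (s, y) is blended with B s so that the updated B stays positive definite. The 0.2/0.8 thresholds follow the standard rule; the function name and safeguards are illustrative assumptions, not the text's specific implementation.

```python
import numpy as np

def damped_bfgs_update(B, s, y):
    """One Powell-damped BFGS update of the Hessian approximation B.

    s = x_new - x_old, y = grad_L_new - grad_L_old. Damping keeps
    s @ r positive, so B_new remains positive definite.
    """
    sBs = s @ B @ s                          # assumed > 0 (B pos. def.)
    sy = s @ y
    if sy >= 0.2 * sBs:
        theta = 1.0                          # full BFGS update is safe
    else:
        theta = 0.8 * sBs / (sBs - sy)       # Powell's damping factor
    r = theta * y + (1.0 - theta) * (B @ s)  # damped gradient difference
    B_new = (B
             - np.outer(B @ s, B @ s) / sBs  # remove old curvature along s
             + np.outer(r, r) / (s @ r))     # add damped curvature
    return B_new
```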

Some of the most important variations are the so-called quasi-Newton methods, which update the Hessian progressively and therefore economize considerably on computational requirements. The most successful scheme for this purpose is the so-called BFGS update. For a detailed overview of the mathematical concepts, see [78, 79]; an excellent account of optimization methods in chemistry can be found in [80]. [Pg.70]

We have referred to quasi-Newton methods rather than the quasi-Newton method because there are multiple definitions that can be used for the function F in this expression. The details of the function F are not central to our discussion, but you should note that this updating procedure now uses information from the current and the previous iterations of the method. This is different from all the methods we have introduced above, which only used information from the current iteration to generate a new iterate. If you think about this a little you will realize that the equations listed above only tell us how to proceed once several iterations of the method have already been made. Describing how to overcome this complication is beyond our scope here, but it does mean that when using a quasi-Newton method, the convergence of the method to a solution should really only be examined after performing a minimum of four or five iterations. [Pg.71]
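
To make the bootstrapping issue concrete, here is a minimal BFGS-style loop. Starting from an identity inverse-Hessian guess is one common way of taking the first step, offered here as an assumption rather than as the resolution the excerpt declines to describe; no line search is included, for brevity.

```python
import numpy as np

def bfgs_minimize(grad, x0, tol=1e-8, max_iter=100):
    """Minimal quasi-Newton (BFGS) loop, inverse-Hessian form.

    The update needs the pair (s, y) from the current AND previous
    iterations, so the first step uses a placeholder G = I.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    G = np.eye(n)                     # inverse-Hessian guess for step 0
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - G @ g             # quasi-Newton step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                # curvature condition; else skip update
            rho = 1.0 / sy
            I = np.eye(n)
            G = (I - rho * np.outer(s, y)) @ G @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Usage on a convex quadratic: converges in a handful of iterations.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
print(bfgs_minimize(lambda x: A @ x, [2.0, -3.0]))
```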

The quasi-Newton methods estimate the matrix C = H⁻¹ by updating a previous guess of C in each iteration using only the gradient vector. These methods are very close to the quasi-Newton methods for solving a system of nonlinear equations. The order of convergence is between 1 and 2, and the minimum of a positive definite quadratic function is found in a finite number of steps. [Pg.113]

Based on the finite-difference formula Eq. (3.31), all Hessian updates are required to fulfill the quasi-Newton condition. [Pg.308]
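
In the usual notation, with steps s_k and gradient differences y_k (an indexing convention that may differ from the source's Eq. (3.31)), the quasi-Newton (secant) condition reads:

```latex
B_{k+1}\, s_k = y_k ,
\qquad s_k = x_{k+1} - x_k ,
\qquad y_k = g_{k+1} - g_k .
```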

There are other Hessian updates, but for minimizations the BFGS update is the most successful. Hessian update techniques are usually combined with line search (vide infra), and the resulting minimization algorithms are called quasi-Newton methods. In saddle point optimizations we must allow the approximate Hessian to become indefinite, and the PSB update is therefore more appropriate. [Pg.309]
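
For reference, a sketch of the Powell-symmetric-Broyden (PSB) update named above: unlike BFGS it does not force positive definiteness, which is why it suits saddle-point searches. The function name and notation are illustrative.

```python
import numpy as np

def psb_update(B, s, y):
    """Powell-symmetric-Broyden update of a Hessian approximation B.

    Satisfies the quasi-Newton condition B_new @ s = y while letting
    B_new become indefinite (useful for saddle-point optimizations).
    """
    r = y - B @ s                       # residual of the QN condition
    ss = s @ s
    B_new = (B
             + (np.outer(r, s) + np.outer(s, r)) / ss
             - (r @ s) * np.outer(s, s) / ss**2)
    return B_new
```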

In quasi-Newton methods, a parametrized estimate of F or G is used initially, then updated at each iterative step. Saved values of q, p can be used in standard algorithms such as BFGS, described above. [Pg.30]

Another technique for handling the mass balances was introduced by Castillo and Grossmann (1). Rather than convert the objective function into an unconstrained form, they implemented the Variable Metric Projection method of Sargent and Murtagh (32) to minimize the Gibbs free energy. This is a quasi-Newton method which uses a rank-one update to the approximation of H⁻¹, with the search direction "projected" onto the intersection of hyperplanes defined by the linear mass balances. [Pg.129]
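
The generic symmetric rank-one (SR1) update of the inverse-Hessian approximation is sketched below in textbook form; whether Sargent and Murtagh's variant used exactly this formula is not stated in the excerpt, and the skip rule is a standard safeguard added for illustration.

```python
import numpy as np

def sr1_inverse_update(G, s, y, eps=1e-8):
    """Symmetric rank-one (SR1) update of G, an approximation to H^{-1}.

    s = x_new - x_old, y = g_new - g_old. After the update,
    G_new @ y = s, the inverse form of the quasi-Newton condition.
    """
    v = s - G @ y                       # residual of the inverse QN condition
    denom = v @ y
    if abs(denom) < eps * np.linalg.norm(v) * np.linalg.norm(y):
        return G                        # skip update when near-singular
    return G + np.outer(v, v) / denom
```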

The quasi-Newton methods. In the Newton-Raphson method, the Jacobian is filled and then solved to get a new set of independent variables in every trial. The computer time consumed in doing this can be very high and increases dramatically with the number of stages and components. In quasi-Newton methods, recalculation of the Jacobian and its inverse or LU factors is avoided. Instead, these are updated using a formula based on the current values of the independent functions and variables. Broyden's (119) method for updating the Jacobian and its inverse is most commonly used. For LU factorization, Bennett's (120) method can be used to update the LU factors. The Bennett formula is... [Pg.160]
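
A sketch of Broyden's update in its common "good Broyden" form: a rank-one correction to the Jacobian whose inverse is updated via the Sherman-Morrison identity, so no refactorization is needed. The function name and arguments are illustrative; Bennett's LU-factor update is not reproduced here.

```python
import numpy as np

def broyden_inverse_update(J_inv, dx, df):
    """Sherman-Morrison form of Broyden's 'good' update applied to J^{-1}.

    dx = x_new - x_old, df = f(x_new) - f(x_old). The updated inverse
    satisfies J_new^{-1} @ df = dx (the secant condition) without
    refactorizing the Jacobian.
    """
    Jdf = J_inv @ df
    denom = dx @ Jdf                    # assumed nonzero for this sketch
    return J_inv + np.outer(dx - Jdf, dx @ J_inv) / denom
```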

Get a new set of stripping factors by solving the energy balances and specification equations as the independent functions of the quasi-Newton technique of Broyden (Sec. 4.2.6). The derivatives of the Jacobian matrix are generated numerically and must include steps 4, 5, and 6 each time the independent variables are perturbed. The Jacobian is not recalculated after the first trial in the loop but is instead updated by Broyden's equation. [Pg.179]

Thus, in this case we see that each gradient difference y_k provides information about one column of H. It is then reasonable to attempt to construct a family of successive approximation matrices B_k so that, if H were a constant, the procedure would be consistent with Eq. [45]. Specifically, we require that the new update B_(k+1) satisfy the quasi-Newton condition... [Pg.40]

J. Nocedal, Math. Comput., 35, 773 (1980). Updating Quasi-Newton Matrices with Limited Storage. [Pg.69]

A. R. Conn, N. I. M. Gould, and Ph. L. Toint, Math. Prog., 50, 177 (1991). Convergence of Quasi-Newton Matrices Generated by the Symmetric Rank One Update. [Pg.69]

Equation (26) is the "quasi-Newton" condition; it is fundamental to all the updating formulas. There have been many updates proposed, and we briefly review some of the more important ones. The simplest are based on... [Pg.253]

