
Quasi-Newton-Raphson

Anglada, J. M., & Bofill, J. M. (1997). A reduced-restricted-quasi-Newton-Raphson method for locating and optimizing energy crossing points between two potential energy surfaces. Journal of Computational Chemistry, 18, 992-1003. [Pg.1399]

A variation of QST, called synchronous transit-guided quasi-Newton (STQN), employs a circle arc instead of a parabola for the interpolation, and uses the tangent to the circle for guiding the search toward the TS region. Once the TS region is located, the optimization is switched to a quasi-Newton-Raphson search (see Section 3.2). [Pg.3115]

Quasi-Newton-Raphson Method for Locating and Optimizing Energy Crossing Points between Two Potential Energy Surfaces. [Pg.121]

Note that, by using the quasi-linearization method, the solution of a non-linear problem can be reduced to the solution of a succession of linear problems. The method is a further development of the Newton-Raphson method (Dulnev and Ushakovskaya, 1988) and its generalized version. [Pg.306]
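
To make that reduction explicit (standard notation, not from the excerpt): for a nonlinear system f(x) = 0, each quasi-linearization/Newton-Raphson iteration solves only the linear problem obtained from a first-order Taylor expansion about the current iterate:

```latex
f\!\left(x^{(k)}\right) + J\!\left(x^{(k)}\right)\left(x^{(k+1)} - x^{(k)}\right) = 0,
\qquad J_{ij} = \frac{\partial f_i}{\partial x_j},
```

so each pass requires only the solution of one linear system for the correction x^(k+1) - x^(k).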

The quasi-Newton methods. In the Newton-Raphson method, the Jacobian is filled and then solved to get a new set of independent variables in every trial. The computer time consumed in doing this can be very high and increases dramatically with the number of stages and components. In quasi-Newton methods, recalculation of the Jacobian and its inverse or LU factors is avoided. Instead, these are updated using a formula based on the current values of the independent functions and variables. Broyden's (119) method for updating the Jacobian and its inverse is most commonly used. For LU factorization, Bennett's (120) method can be used to update the LU factors. The Bennett formula is... [Pg.160]
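
The excerpt names Broyden's update without stating it; below is a minimal self-contained sketch of the idea (the function and variable names are mine, and for clarity the approximate Jacobian B is re-factorized at every step rather than its inverse or LU factors being updated, as the text describes):

```python
import numpy as np

def broyden_solve(f, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 with Broyden's rank-one Jacobian update.

    The Jacobian approximation B is built once by finite differences,
    then updated cheaply at every iteration instead of being recomputed
    (the point of the quasi-Newton approach)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    # One-time finite-difference Jacobian; after this, no further
    # function evaluations are spent on derivatives.
    eps = 1e-7
    B = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        B[:, j] = (f(xp) - fx) / eps
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -fx)          # Newton-like step with B ~ J
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        df = fx_new - fx
        # Broyden's "good" update: rank-one correction so that B @ dx = df
        B += np.outer(df - B @ dx, dx) / (dx @ dx)
        x, fx = x_new, fx_new
    return x

# Example: intersect a circle with a line
f = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
print(broyden_solve(f, [2.0, 0.5]))           # converges to (1, 1)
```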

The quasi-Newton methods reduce the computer time spent per trial by avoiding reevaluation of the Jacobian, but at the expense of a greater number of trials. The total computer time sometimes exceeds that of the conventional Newton-Raphson method. The quasi-Newton methods also are more sensitive to how close the initial values are to a final solution and tend to fail more readily. They may have to be avoided... [Pg.160]

There are a number of variations on the Newton-Raphson method, many of which aim to eliminate the need to calculate the full matrix of second derivatives. In addition, a family of methods called the quasi-Newton methods require only first derivatives and gradually construct the inverse Hessian matrix as the calculation proceeds. One simple way in which it may be possible to speed up the Newton-Raphson method is to use the same Hessian matrix for several successive steps of the Newton-Raphson algorithm with only the gradients being recalculated at each iteration. [Pg.268]
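
A minimal sketch of that Hessian-reuse idea (names and the refresh interval are illustrative, not from the text): the expensive Hessian is recomputed only every few steps, while the cheap gradient is refreshed at every iteration.

```python
import numpy as np

def newton_reuse_hessian(grad, hess, x0, refresh_every=5,
                         tol=1e-8, max_iter=100):
    """Newton-Raphson minimization that reuses the (expensive) Hessian
    for several steps, recomputing only the gradient each iteration."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)                      # cheap: recomputed every step
        if np.linalg.norm(g) < tol:
            break
        if k % refresh_every == 0:
            H = hess(x)                  # expensive: refreshed occasionally
        x = x - np.linalg.solve(H, g)    # Newton step with the stored H
    return x

# Example: minimize x^4 + y^2 starting from (2, 1)
grad = lambda v: np.array([4 * v[0]**3, 2 * v[1]])
hess = lambda v: np.diag([12 * v[0]**2, 2.0])
print(newton_reuse_hessian(grad, hess, [2.0, 1.0]))   # -> near (0, 0)
```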

Calculation of the inverse Hessian matrix is a potentially time-consuming operation that represents a significant drawback to the pure second derivative methods such as Newton-Raphson. Moreover, one may not be able to calculate analytical second derivatives, which are preferable. The quasi-Newton methods (also known as variable metric methods) gradually build up the inverse Hessian matrix in successive iterations. That is, a sequence of... [Pg.268]
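
The excerpt stops mid-sentence; to make the gradual build-up concrete, here is a minimal sketch using the BFGS update, the best-known variable metric formula (the text does not name a specific update, so this choice is illustrative, as are all names):

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-8, max_iter=200):
    """Variable-metric (BFGS) sketch: only first derivatives are needed;
    the inverse Hessian estimate H is built up by rank-two updates."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                          # initial guess: identity
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                         # quasi-Newton direction
        t = 1.0                            # crude Armijo backtracking;
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5                       # real codes use a line search
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                     # curvature condition keeps H SPD
            rho = 1.0 / sy
            I = np.eye(n)
            # BFGS update of the inverse Hessian estimate
            H = ((I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s))
                 + rho * np.outer(s, s))
        x, g = x_new, g_new
    return x

# Example: the Rosenbrock function
rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
rosen_grad = lambda v: np.array([
    -2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
    200 * (v[1] - v[0]**2),
])
print(bfgs_minimize(rosen, rosen_grad, [-1.2, 1.0]))   # -> near (1, 1)
```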

An alternative when the size of the molecule prevents use of the quasi-Newton or Newton-Raphson methods is to use an optimization method that uses only the gradient and not the Hessian. Two such methods are the steepest-descent method and the conjugate-gradient method. [Pg.538]
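
A minimal gradient-only sketch along those lines, using the Fletcher-Reeves form of the conjugate-gradient method with a crude backtracking line search (all names are illustrative):

```python
import numpy as np

def cg_minimize(f, grad, x0, tol=1e-8, max_iter=500):
    """Fletcher-Reeves nonlinear conjugate gradient: uses only f and its
    gradient, so nothing the size of a Hessian is ever stored."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                 # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                     # safeguard: restart from steepest
            d = -g                         # descent if d is not downhill
        t = 1.0                            # crude Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d              # mix in the previous direction
        g = g_new
    return x
```

Setting beta to zero at every step recovers the steepest-descent method, which makes the relationship between the two gradient-only methods explicit.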

In the previous subsection, the successive substitution and Wegstein methods were introduced as the two methods most commonly implemented in recycle convergence units. Other methods, such as the Newton-Raphson method, Broyden s quasi-Newton method, and the dominant-eigenvalue method, are candidates as well, especially when the equations being solved are highly nonlinear and interdependent. In this subsection, the principal features of all five methods are compared. [Pg.133]
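
For concreteness, a minimal sketch of Wegstein acceleration for a recycle-style fixed point x = g(x), applied componentwise as flowsheeting codes commonly do (the function names and the bounds placed on q are illustrative):

```python
import numpy as np

def wegstein(g, x0, tol=1e-10, max_iter=100, q_min=-5.0, q_max=0.5):
    """Wegstein-accelerated successive substitution for x = g(x).

    Each component gets its own secant slope s and weighting factor
    q = s / (s - 1); q = 0 recovers plain successive substitution,
    and q is bounded for stability."""
    x_old = np.asarray(x0, dtype=float)
    gx_old = g(x_old)
    x = gx_old.copy()                      # first step: plain substitution
    for _ in range(max_iter):
        gx = g(x)
        if np.linalg.norm(gx - x) < tol:
            return x
        with np.errstate(divide="ignore", invalid="ignore"):
            s = (gx - gx_old) / (x - x_old)    # componentwise secant slope
        q = np.clip(s / (s - 1.0), q_min, q_max)
        q = np.where(np.isfinite(q), q, 0.0)   # fall back where secant fails
        x_old, gx_old = x, gx
        x = q * x + (1.0 - q) * gx             # Wegstein update
    return x

# Example: the fixed point x = cos(x)
print(wegstein(np.cos, np.array([0.5])))       # -> ~0.739085
```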

Alternatively, so-called secant methods can be used to approximate the Jacobian matrix with far less effort (Westerberg et al., 1979). These provide a superlinear rate of convergence; that is, they reduce the errors less rapidly than the Newton-Raphson method, but more rapidly than the method of successive substitutions, which has a linear rate of convergence (i.e., the length of the error vector is reduced roughly as 0.1, 0.01, 10^-3, 10^-4, 10^-5, ...). These methods are also referred to as quasi-Newton methods, with Broyden's method being the most popular. [Pg.134]
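
In the usual definitions (standard material, not from the excerpt), with error e_k = x^(k) - x*:

```latex
\underbrace{\|e_{k+1}\| \le c\,\|e_k\|,\ 0<c<1}_{\text{linear (successive substitution)}}
\qquad
\underbrace{\lim_{k\to\infty}\frac{\|e_{k+1}\|}{\|e_k\|} = 0}_{\text{superlinear (secant/Broyden)}}
\qquad
\underbrace{\|e_{k+1}\| \le C\,\|e_k\|^{2}}_{\text{quadratic (Newton-Raphson)}}
```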

To compare the method of successive substitutions with the Newton-Raphson method, or the quasi-Newton methods, the former can be written ... [Pg.134]
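
The excerpt is cut off at this point. In the standard formulation (a sketch; the book's exact notation may differ), the recycle equations are posed as f(x) = g(x) - x = 0, and successive substitution

```latex
x^{(k+1)} = g\!\left(x^{(k)}\right) = x^{(k)} - (-I)^{-1} f\!\left(x^{(k)}\right)
```

is exactly a Newton-Raphson step in which the true Jacobian J = ∂g/∂x - I has been replaced by the crude fixed approximation -I, which is why its convergence is only linear.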

