
Quadratic steepest descent

Sun J-Q and Ruedenberg K (1993) Quadratic steepest descent on potential energy surfaces. I. Basic formalism and quantitative assessment. J. Chem. Phys. 99, 5257... [Pg.2359]

Eckert F and Werner HJ (1998) Reaction path following by quadratic steepest descent. Theor. Chem. Acc. ... [Pg.526]

As expected, both steps are opposite the gradient for small t. Thus, the level-shifted Newton method and quadratic steepest descent differ only in the interpolation between small steps (which are opposite the gradient) and large steps (which are equivalent to the Newton step). [Pg.122]
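Below is a minimal Python sketch (not from the quoted source) of the level-shifted Newton step p(λ) = −(H + λI)⁻¹g, illustrating the interpolation described above; the Hessian H, gradient g, and the numbers are hypothetical.

```python
import numpy as np

def level_shifted_newton_step(g, H, lam):
    """Step p(lam) = -(H + lam*I)^(-1) g.

    Large lam gives a small step along -g (steepest descent);
    lam -> 0 gives the full Newton step -H^(-1) g."""
    return -np.linalg.solve(H + lam * np.eye(len(g)), g)

# Hypothetical quadratic model: positive-definite Hessian and a gradient.
H = np.array([[4.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, -1.0])

for lam in (1e3, 1.0, 1e-6):
    p = level_shifted_newton_step(g, H, lam)
    print(f"lam={lam:g}  direction={p / np.linalg.norm(p)}")
```

For λ = 10³ the printed direction coincides with −g/|g| ≈ (−0.71, 0.71), while for λ = 10⁻⁶ it has rotated into the Newton direction ≈ (−0.51, 0.86), exactly the small-step/large-step behaviour described above.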

Figure B3.5.1. Contour line representation of a quadratic surface and part of a steepest descent path zigzagging toward the minimum.
For quadratic functions this is identical to the Fletcher-Reeves formula, but there is some evidence that the Polak-Ribiere formula may be somewhat superior to the Fletcher-Reeves procedure for non-quadratic functions. The search direction is not reset to the steepest descent direction unless the energy has risen between cycles. [Pg.306]
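As a concrete illustration, here is a hedged Python sketch of the two update formulas and the reset rule just described; the function names are invented for this example.

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    # beta_FR = (g_new . g_new) / (g_old . g_old)
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    # beta_PR = g_new . (g_new - g_old) / (g_old . g_old)
    # For quadratic functions with exact line searches g_new . g_old = 0,
    # so beta_PR reduces to beta_FR, as noted above.
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

def next_direction(g_new, g_old, d_old, energy_rose):
    """Conjugate-gradient direction update with the reset rule above:
    fall back to steepest descent only if the energy rose between cycles."""
    if energy_rose:
        return -g_new
    return -g_new + beta_polak_ribiere(g_new, g_old) * d_old
```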

Steepest descent method for a general quadratic function. [Pg.192]

From one viewpoint the search direction of steepest descent can be interpreted as being orthogonal to a linear approximation of (tangent to) the objective function at point x; examine Figure 6.9a. Now suppose we make a quadratic approximation of f(x) at x ... [Pg.197]

In the method of steepest descents one calculates the gradient at a point. The method of attack depends on whether this gradient may be calculated analytically or numerically (which requires calculations at N + 1 points for an N-dimensional surface), and one moves along this direction until the lowest point is reached, when a new gradient is calculated. When one is close to the minimum and the gradient is small, it is necessary to have a method which is quadratically convergent, and to calculate the general quadratic function for N dimensions numerically requires (N + 1)(N + 2)/2 function evaluations. [Pg.106]
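A short Python sketch of this procedure (an illustration, not the source's code): a forward-difference gradient costing N + 1 function evaluations, followed by a one-dimensional minimization along −g. The test surface and the line-search window are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def numerical_gradient(f, x, h=1e-6):
    """Forward-difference gradient: N + 1 function evaluations
    for an N-dimensional surface, as stated above."""
    f0 = f(x)
    g = np.empty_like(x)
    for i in range(len(x)):
        xh = x.copy()
        xh[i] += h
        g[i] = (f(xh) - f0) / h
    return g

def steepest_descent(f, x0, tol=1e-4, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = numerical_gradient(f, x)
        if np.linalg.norm(g) < tol:
            break
        # Move along -g until the lowest point on that line is reached.
        # The search window (0, 1) is an arbitrary illustrative choice.
        t = minimize_scalar(lambda t: f(x - t * g),
                            bounds=(0.0, 1.0), method="bounded").x
        x = x - t * g
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2  # toy surface
print(steepest_descent(f, [3.0, 2.0]))   # -> approximately [1.0, -0.5]
```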

Quadratic convergence means that eventually the number of correct figures in the current estimate doubles at each step, clearly a desirable property. Close to the solution x*, Newton's method (Eq. (3.9)) shows quadratic convergence, while quasi-Newton methods (Eq. (3.8)) show superlinear convergence. The RF step (Eq. (3.20)) converges quadratically when the exact Hessian is used. Steepest descent with exact line search converges linearly for minimization. [Pg.310]
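The digit-doubling behaviour is easy to demonstrate numerically; the following Python snippet (a toy example, not from the source) applies Newton's method to x² = 2 and prints the error at each step, which is roughly squared every iteration. The linear convergence of steepest descent is illustrated separately further below.

```python
import math

# Newton's method for x^2 = 2: the error is roughly squared each
# iteration, so the number of correct figures doubles (quadratic
# convergence), until machine precision is reached.
x, root = 1.0, math.sqrt(2.0)
for k in range(5):
    x -= (x * x - 2.0) / (2.0 * x)   # Newton update for f(x) = x^2 - 2
    print(k, abs(x - root))
```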

The uncertainty principle requires that any extremum path should be spread, and the next step in our calculation is to find the prefactor in (3.54) by incorporating small fluctuations around the instanton solution, in the spirit of the usual steepest descent method. Following Callan and Coleman [1977], let us represent an arbitrary path in the form x(τ) = x_ins(τ) + δx(τ) and expand the action functional up to quadratic terms in δx(τ), assuming that these deviations are small ... [Pg.70]
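The source's own equation is elided above; for reference, the generic Callan-Coleman quadratic expansion of the Euclidean action takes the form below (the linear term is absent because the instanton extremizes the action; m denotes the particle mass):

```latex
S[x_{\mathrm{ins}} + \delta x] \simeq S_{\mathrm{ins}}
  + \frac{1}{2}\int d\tau\,\delta x(\tau)
    \left[-m\,\frac{d^{2}}{d\tau^{2}} + V''\!\bigl(x_{\mathrm{ins}}(\tau)\bigr)\right]
    \delta x(\tau)
```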

The methods differ in the determination of the step-length factor a_k at the kth iteration, since, owing to nonlinearities, the direction of steepest descent is not necessarily the optimal one; it is optimal only for quadratic dependencies. Some methods therefore use the second-derivative matrix of the objective function with respect to the parameters, the Hessian matrix, to determine the parameter-improvement step length and its direction ... [Pg.316]
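The elided update presumably has the generic Newton-type form shown below (a standard expression, not necessarily the source's exact equation), with g_k the gradient of the objective with respect to the parameters and H_k the Hessian:

```latex
\Delta \mathbf{p}_{k} = -\,a_{k}\,\mathbf{H}_{k}^{-1}\,\mathbf{g}_{k}
```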

Steepest descent is simple to implement and requires modest storage, of order the number of variables; however, progress toward a minimum may be very slow, especially near a solution. The convergence rate of SD when applied to a convex quadratic function, as in Eq. [22], is only linear. The associated convergence ratio is no greater than [(κ − 1)/(κ + 1)]², where κ, the condition number, is the ratio of the largest to the smallest eigenvalue of A ... [Pg.30]
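The bound is tight: for a diagonal two-variable quadratic started from the worst-case point, exact-line-search steepest descent reduces f by a constant factor equal to the bound at every iteration. A Python check (illustrative, with arbitrary numbers):

```python
import numpy as np

# Steepest descent with exact line search on f(x) = 0.5 x^T A x.
# Started from the worst-case point, each iteration reduces f by
# exactly the factor [(kappa - 1)/(kappa + 1)]^2.
A = np.diag([1.0, 10.0])
kappa = np.linalg.cond(A)                    # condition number = 10
bound = ((kappa - 1.0) / (kappa + 1.0)) ** 2

f = lambda x: 0.5 * x @ A @ x
x = np.array([10.0, 1.0])                    # worst-case start (kappa, 1)
for k in range(5):
    g = A @ x                                # gradient of the quadratic
    alpha = (g @ g) / (g @ A @ g)            # exact line-search step
    x_new = x - alpha * g
    print(k, f(x_new) / f(x), "bound:", bound)
    x = x_new
```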

Figure 12 Steepest descent and conjugate gradient quantities that affect the convergence rate for quadratic functions (see text for the distinct context of these functions).
In the case of a nonlinear operator A, it is preferable to use an algorithm of the steepest descent method with a quadratic line search. It can be summarized as follows ... [Pg.131]
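One common way to realize a quadratic line search is to fit a parabola to the misfit sampled at three trial steps along the descent direction; the following Python sketch illustrates that idea under this assumption and is not the source's algorithm.

```python
import numpy as np

def quadratic_line_search(phi, t1=0.5, t2=1.0):
    """Fit a parabola through phi(0), phi(t1), phi(t2) sampled along
    the descent direction and return the step at its vertex.
    A bare-bones sketch; production codes would safeguard the result."""
    p0, p1, p2 = phi(0.0), phi(t1), phi(t2)
    # phi(t) ~ a t^2 + b t + p0: solve two linear equations for a, b.
    M = np.array([[t1 ** 2, t1], [t2 ** 2, t2]])
    a, b = np.linalg.solve(M, np.array([p1 - p0, p2 - p0]))
    if a <= 0.0:                  # fit is not convex: fall back
        return t1 if p1 < p2 else t2
    return -b / (2.0 * a)         # vertex of the parabola

phi = lambda t: (t - 0.3) ** 2 + 1.0    # toy misfit profile along -g
print(quadratic_line_search(phi))       # -> 0.3 (exact for a parabola)
```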

The most common assumption is one of a reaction path in hyperspace (Miller et al. 1980). A saddle point on the PES is found, and the steepest descent path (in mass-weighted coordinates) from this saddle point to reactants and products is defined as the reaction path. The information needed, apart from the path and the energies along it, is the local quadratic PES for motion perpendicular to the path. The reaction-path Hamiltonian is only a weakly local method, since it can be viewed as an approximation to the full PES and since it is possible to use any of the previously defined global-dynamical methods with this potential. However, it is local because the approximate PES restricts motion to lie around the reaction path. The utility of a reaction-path formalism involves convenient approximations to the dynamics which can be made with the formalism as a starting point. [Pg.211]

A great deal of effort has been expended on efficient calculation of the MEP [12, 30-32, 41-50]. At the moment, if relatively inexpensive second derivatives are available, the cubic corrected local quadratic method is most efficient; otherwise, reasonably efficient gradient-only methods are available [49]. This is an area where methods are still advancing, and recently proposed methods may prove to be better still once they have been used with dynamical calculations on real ab initio PES [12, 43, 48, 50]. At this stage some examples of steepest descent reaction paths might be informative. [Pg.401]
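As a schematic illustration only, the following Python snippet traces a steepest descent path from the saddle of a toy double-well surface V(x, y) = (x² − 1)² + y² (a made-up model, not a real ab initio PES); unit masses are assumed, so Cartesian and mass-weighted coordinates coincide.

```python
import numpy as np

def grad(q):
    """Gradient of the toy double-well surface V(x, y) = (x^2 - 1)^2 + y^2,
    with a saddle at (0, 0) and minima at (+/-1, 0)."""
    x, y = q
    return np.array([4.0 * x * (x * x - 1.0), 2.0 * y])

def steepest_descent_path(q_saddle, mode, step=0.01, n_steps=5000):
    """Crude Euler integration of the steepest descent path, started
    just off the saddle along the chosen downhill mode."""
    q = q_saddle + 1e-4 * mode
    path = [q.copy()]
    for _ in range(n_steps):
        g = grad(q)
        if np.linalg.norm(g) < 1e-6:      # effectively at a minimum
            break
        q = q - step * g
        path.append(q.copy())
    return np.array(path)

path = steepest_descent_path(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print("path endpoint (product minimum):", path[-1])   # -> ~[1.0, 0.0]
```

Naive Euler integration like this drifts off the true path on realistic surfaces, which is part of what motivates the corrected higher-order integrators cited above.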

The iterative solution can be carried out by one of various algorithms, for example Newton's approximation to find roots, steepest descent to find a minimum quadratic error, rough search, or successive substitution. Newton's method in four dimensions works reasonably well, although instability can set in if the incremental changes are allowed to be too large. Hence some deceleration is required to stabilise the algorithm. The method of successive substitution is more efficient, but also... [Pg.117]
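A minimal Python sketch of such a decelerated Newton iteration (illustrative: the damping scheme, test system, and names are invented here, not taken from the source):

```python
import numpy as np

def damped_newton(F, J, x0, damp=0.5, max_step=1.0, tol=1e-10, max_iter=100):
    """Newton iteration with a simple 'deceleration': the raw increment
    is capped and scaled down to keep the iteration stable."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(J(x), -r)    # raw Newton increment
        n = np.linalg.norm(dx)
        if n > max_step:                  # cap overly large increments
            dx *= max_step / n
        x = x + damp * dx                 # decelerated update
    return x

# Toy 2-D system: x^2 + y^2 = 1 and x = y; root at (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(damped_newton(F, J, [2.0, 0.5]))   # -> ~[0.7071, 0.7071]
```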

