
Descent direction

The reaction path is defined by Fukui [83] as the line q(s) leading down from a transition state along the steepest descent direction... [Pg.2353]
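The idea can be illustrated by numerically tracing such a path: displace slightly off the transition state and repeatedly step along the normalized negative gradient. The sketch below is a hypothetical illustration only (the gradient callable, step size, displacement direction, and stopping test are assumptions, not Fukui's prescription):

```python
import numpy as np

def steepest_descent_path(gradient, q_ts, direction, ds=0.01, n_steps=500):
    """Trace a reaction path q(s) by following the normalized negative
    gradient downhill from a transition state. 'direction' is a small
    initial displacement (e.g., along the negative-curvature mode).
    A sketch of an IRC-style integration, not a production algorithm."""
    q = np.asarray(q_ts, dtype=float) + ds * np.asarray(direction, dtype=float)
    path = [q.copy()]
    for _ in range(n_steps):
        g = gradient(q)
        norm = np.linalg.norm(g)
        if norm < 1e-8:            # gradient vanishes: a minimum is reached
            break
        q = q - ds * g / norm      # unit step along the steepest descent
        path.append(q.copy())
    return np.array(path)
```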

A starting point is defined and the initial conjugate direction is chosen to be the steepest descent direction, h1 = g1... [Pg.305]

The conjugate direction is reset to the steepest descent direction every 3N search directions or cycles, or if the energy rises between cycles. [Pg.305]

For quadratic functions this is identical to the Fletcher-Reeves formula, but there is some evidence that the Polak-Ribiere formula may be somewhat superior to the Fletcher-Reeves procedure for non-quadratic functions. It is not reset to the steepest descent direction unless the energy has risen between cycles. [Pg.306]
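A minimal sketch of such a conjugate gradient scheme with selectable Fletcher-Reeves or Polak-Ribiere updates, a reset every 3N cycles, and a reset when the energy rises (in the cited text the Polak-Ribiere variant resets only on an energy rise; here both tests are applied for simplicity, and the fixed step length stands in for a proper line search; all names are illustrative):

```python
import numpy as np

def conjugate_gradient_minimize(energy, gradient, x0, beta_type="PR",
                                step=1e-3, max_iter=1000, tol=1e-6):
    """Nonlinear conjugate gradient with Fletcher-Reeves (FR) or
    Polak-Ribiere (PR) beta. A sketch, not the cited implementation."""
    x = np.asarray(x0, dtype=float)
    g = gradient(x)
    h = -g                          # initial direction: steepest descent
    e_prev = energy(x)
    reset_every = 3 * len(x)        # reset every 3N cycles
    for k in range(1, max_iter + 1):
        x = x + step * h            # crude stand-in for a line search
        g_new = gradient(x)
        e_new = energy(x)
        if np.linalg.norm(g_new) < tol:
            break
        if k % reset_every == 0 or e_new > e_prev:
            h = -g_new              # reset to steepest descent
        else:
            if beta_type == "FR":
                beta = (g_new @ g_new) / (g @ g)
            else:                   # Polak-Ribiere
                beta = g_new @ (g_new - g) / (g @ g)
            h = -g_new + beta * h   # new conjugate direction
        g, e_prev = g_new, e_new
    return x
```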

Such a direction s is called a descent direction and satisfies the following requirement at any point... [Pg.189]

All line searches start by defining a descent direction. Consider all vectors z which fulfill the condition... [Pg.311]

Line searches are often used in connection with Hessian update formulas and provide a relatively stable and efficient method for minimizations. However, line searches are not always successful. For example, if the Hessian is indefinite there is no natural way to choose the descent direction. We may then have to revert to steepest descent although this step makes no use of the information provided by the Hessian. It may also be... [Pg.312]

The direction given by −H(θs)⁻¹∇U(θs) is a descent direction only when the Hessian matrix is positive definite. For this reason, the Newton-Raphson algorithm is less robust than the steepest descent method; hence, it does not guarantee convergence toward a local minimum. On the other hand, when the Hessian matrix is positive definite, and in particular in a neighborhood of the minimum, the algorithm converges much faster than the first-order methods. [Pg.52]
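One common way to safeguard against an indefinite Hessian is to test positive definiteness with a Cholesky factorization and fall back to steepest descent when the test fails; a sketch of that idea (not the cited authors' code):

```python
import numpy as np

def newton_or_steepest_direction(hessian, grad):
    """Return the Newton direction -H^{-1} g when H is positive definite
    (detected via Cholesky), otherwise fall back to steepest descent."""
    try:
        # Cholesky succeeds only for a (numerically) positive-definite H
        L = np.linalg.cholesky(hessian)
        # Solve H p = -g using the factorization: L (L^T p) = -g
        y = np.linalg.solve(L, -grad)
        return np.linalg.solve(L.T, y)   # Newton descent direction
    except np.linalg.LinAlgError:
        return -grad                     # steepest descent fallback
```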

For years everyone has been content with algorithms which produce a descent path to a stationary point, which can of course be a saddle point rather than the desired local minimum. However, McCormick [27] has put forward an idea, later developed by Moré and Sorensen [28], for the use of directions of negative curvature coupled with descent directions to ensure convergence to a local minimum. [Pg.45]

The gradient methods, like those of Newton, Gauss-Newton, Fletcher, and Levenberg-Marquardt, use the derivative vector of the SSR with respect to the parameters to determine the direction in which the SSR decreases most rapidly, the steepest-descent direction. [Pg.316]

It is reasonable to choose a search vector p that will be a descent direction, that is, a direction leading to function reduction. A descent direction p is defined as one along which the directional derivative is negative, g(x)ᵀp < 0. [Pg.21]
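In code, this condition is a one-line test; a small sketch (names are illustrative):

```python
import numpy as np

def is_descent_direction(grad, p):
    """True when the directional derivative grad^T p is negative,
    i.e. p points downhill at the current point."""
    return float(grad @ p) < 0.0
```

For any nonzero gradient g, the choice p = −g trivially satisfies this test, which is the steepest descent choice discussed below.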

The trust radius in the trust region approach is estimated on the basis of the local Hessian's characteristics (positive-definite, positive-semidefinite, indefinite). The basic idea is to choose s nearly in the current negative gradient direction (−gk) when the trust radius is small, and to approach the Newton step −Hk⁻¹gk as the trust region is increased (Hk and gk denote the Hessian and gradient, respectively, at xk). Note from condition [12] that these two choices correspond to the extremal cases (M = I and M = H) of general descent directions of the form p = −M⁻¹g, where M is a positive-definite approximation to the Hessian. [Pg.22]
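The classical dogleg step is one concrete realization of this interpolation between the scaled negative gradient and the Newton step; a sketch assuming a positive-definite Hessian (the unpreconditioned form and all names are assumptions):

```python
import numpy as np

def dogleg_step(g, H, radius):
    """Dogleg trust-region step: a small radius yields a step along -g,
    a large radius yields the full Newton step -H^{-1} g."""
    p_newton = -np.linalg.solve(H, g)
    if np.linalg.norm(p_newton) <= radius:
        return p_newton                        # trust region large enough
    # Cauchy point: minimizer of the quadratic model along -g
    p_cauchy = -((g @ g) / (g @ H @ g)) * g
    if np.linalg.norm(p_cauchy) >= radius:
        return -radius * g / np.linalg.norm(g)  # clipped steepest descent
    # Otherwise walk from the Cauchy point toward the Newton point:
    # find t in [0, 1] with ||p_cauchy + t d|| = radius (quadratic in t).
    d = p_newton - p_cauchy
    a, b = d @ d, 2.0 * (p_cauchy @ d)
    c = p_cauchy @ p_cauchy - radius ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_cauchy + t * d
```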

Although line searches are typically easier to program, trust region methods may be effective when the procedure for determining the search direction p is not necessarily one of descent. This may be the case for methods that use finite-difference approximations to the Hessian in the procedure for specifying p (discussed in later sections). As we shall see later, in BFGS quasi-Newton or truncated Newton methods, line searches may be preferable because descent directions are guaranteed. [Pg.22]

At each iteration of SD, the search direction is taken as −gk, the negative gradient of the objective function at xk. Recall that a descent direction pk satisfies gkᵀpk < 0. The simplest way to guarantee the negativity of this inner product is to choose pk = −gk. This choice also minimizes the inner product gkᵀp for unit-length vectors and thus gives rise to the name steepest descent. [Pg.30]
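A bare-bones sketch of the resulting SD iteration (the fixed step length is an illustrative stand-in for a line search):

```python
import numpy as np

def steepest_descent(gradient, x0, step=0.1, tol=1e-6, max_iter=10000):
    """Plain steepest descent: p_k = -g_k with a fixed step length."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = gradient(x)
        if np.linalg.norm(g) < tol:   # converged: gradient nearly zero
            break
        x = x - step * g              # move along the negative gradient
    return x
```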

First, when Hk is not positive-definite, the search direction may not exist or may not be a descent direction. Strategies to produce a related positive-definite matrix Hk, or alternative search directions, become necessary. Second, far away from the solution x*, the quadratic approximation of expression [34] may be poor, and the Newton direction must be adjusted. A line search, for example, can dampen (scale) the Newton direction when it exists, ensuring sufficient decrease and guaranteeing uniform progress toward a solution. These adjustments lead to the following modified Newton framework (using a line search). [Pg.37]
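A sketch of one iteration of such a modified Newton framework, using a backtracking (Armijo) line search to dampen the Newton direction, with a steepest descent fallback when the Newton direction fails to exist or is not one of descent (the details are assumptions, not the text's exact algorithm):

```python
import numpy as np

def modified_newton_step(f, grad, hess, x, beta=0.5, c1=1e-4):
    """One modified-Newton iteration with Armijo backtracking."""
    g = grad(x)
    H = hess(x)
    try:
        p = -np.linalg.solve(H, g)     # Newton direction, if solvable
        if g @ p >= 0:                 # not a descent direction
            p = -g
    except np.linalg.LinAlgError:
        p = -g                         # steepest descent fallback
    t, fx = 1.0, f(x)
    # Shrink the step until the Armijo sufficient-decrease test holds.
    while f(x + t * p) > fx + c1 * t * (g @ p) and t > 1e-10:
        t *= beta
    return x + t * p
```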

Truncated Newton methods were introduced in the early 1980s [111-114] and have been gaining popularity ever since [82, 109, 110, 115-123]. Their basis is the following simple observation. An exact solution of the Newton equation at every step is unnecessary and computationally wasteful in the framework of a basic descent method. That is, an exact Newton search direction is unwarranted when the objective function is not well approximated by a convex quadratic and/or the initial point is distant from a solution. Any descent direction will suffice in that case. As a solution to the minimization problem is approached, the quadratic approximation may become more accurate, and more effort in solution of the Newton equation may be warranted. [Pg.43]

Note that in case of negative curvature, p is set in step 1 to −M⁻¹g if this occurs at the first PCG iteration, or to the current iterate for pk thereafter. These choices are guaranteed directions of descent [114]. Alternate descent directions can also be used, such as −g or d, but the default choices above have been found to be satisfactory in practice. [Pg.47]
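A sketch of the inner loop just described, written without a preconditioner (M = I) so that the negative-curvature fallbacks reduce to −g at the first iteration and to the current iterate thereafter; the residual test and iteration cap are illustrative:

```python
import numpy as np

def truncated_newton_direction(H, g, eta=0.1, max_cg=50):
    """Inner CG loop of a truncated Newton step: solve H p = -g only
    approximately, exiting early on a loose residual test ("truncation")
    or on detection of negative curvature."""
    p = np.zeros_like(g)
    r = -g.copy()                # residual of H p = -g at p = 0
    d = r.copy()
    for j in range(max_cg):
        Hd = H @ d
        curv = d @ Hd
        if curv <= 0:            # negative curvature along d
            # first iteration: steepest descent; thereafter: current p
            return -g if j == 0 else p
        alpha = (r @ r) / curv
        p = p + alpha * d
        r_new = r - alpha * Hd
        if np.linalg.norm(r_new) <= eta * np.linalg.norm(g):
            return p             # truncate: approximate solve suffices
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return p
```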

The steepest descent direction, that is, the negative gradient of the Born-Oppenheimer potential energy hypersurface, can be determined from the forces −∂E/∂Ra acting on nuclei a with coordinates Xa, Ya, and Za. In some applications, the equivalent representation as the Hellmann-Feynman force ⟨F⟩ is more advantageous. [Pg.217]

Figure 5-3 The top part of the figure shows the isolines of the misfit functional map and the steepest descent path of the iterative solutions in the space of model parameters. The bottom part presents a magnified element of this map with just one iteration step shown, from iteration (n − 1) to iteration number n. According to the line search principle, the direction of the steepest ascent at iteration number n must be perpendicular to the misfit isoline at the minimum point along the previous direction of the steepest descent. Therefore, many steps may be required to reach the global minimum, because every subsequent steepest descent direction is perpendicular to the previous one, similar to the path of experienced slalom skiers.
Equation (5.69) follows from (5.62), and equation (5.70) holds because in the previous step we moved along the search line in the direction l_{n-1} to the minimum, so the steepest descent direction l_n at the minimum point will be perpendicular to l_{n-1}. Also, it can be shown that... [Pg.142]
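This perpendicularity is easy to verify numerically on a quadratic misfit with an exact line search; in the hypothetical 2-D example below, successive gradients have (numerically) zero inner product, producing the zigzag described in the figure caption:

```python
import numpy as np

# Quadratic misfit f(x) = 0.5 x^T A x; ill-conditioned A exaggerates
# the zigzag of steepest descent with exact line searches.
A = np.array([[10.0, 0.0],
              [0.0,  1.0]])
x = np.array([1.0, 1.0])
for _ in range(5):
    g = A @ x                       # gradient of the quadratic
    alpha = (g @ g) / (g @ A @ g)   # exact minimizer along -g
    x = x - alpha * g
    g_next = A @ x
    # successive steepest descent directions are perpendicular:
    print(f"g_k . g_(k+1) = {g @ g_next:+.2e}")
```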

This method uses the same ideas as the conventional conjugate gradient method. However, the iteration process is based on the calculation of the regularized steepest descent directions ... [Pg.148]
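For a Tikhonov parametric functional with a minimum-norm stabilizer, the regularized steepest descent direction is simply the gradient of the misfit plus the stabilizer term; a sketch under that assumption (the stabilizer choice is illustrative, not necessarily the cited one):

```python
import numpy as np

def regularized_steepest_descent_direction(grad_misfit, m, m_prior, alpha):
    """Gradient of the parametric functional
       P(m) = phi(m) + alpha * ||m - m_prior||^2,
    i.e. the regularized steepest descent direction for a minimum-norm
    (Tikhonov) stabilizer."""
    return grad_misfit(m) + 2.0 * alpha * (np.asarray(m) - np.asarray(m_prior))
```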

On the other hand, conjugate gradient methods are more effective in locating the minimum-energy structure. In this approach, previous optimization information is utilized. The second and all subsequent descent directions are linear combinations of the previous direction and the current negative gradient of the potential... [Pg.723]


See other pages where Descent direction is mentioned: [Pg.304]    [Pg.304]    [Pg.306]    [Pg.304]    [Pg.304]    [Pg.156]    [Pg.192]    [Pg.202]    [Pg.312]    [Pg.68]    [Pg.110]    [Pg.292]    [Pg.49]    [Pg.51]    [Pg.52]    [Pg.52]    [Pg.21]    [Pg.22]    [Pg.35]    [Pg.37]    [Pg.46]    [Pg.124]    [Pg.125]    [Pg.139]
See also in source #XX -- [Pg.21, Pg.30]



