Descent method

Descent methods are specific (quasi-)Newton methods which look for minimizers only. They differ from the general (quasi-)Newton methods in the line search step, which is added to ensure that the procedure makes sufficient progress toward a minimizer, particularly when the initial guess is far from a solution. Line search means that at a point x the energy functional E is minimized along the (quasi-)Newton vector p, i.e. a positive value α is determined such that E(x + αp) = min_{t>0} E(x + tp). [Pg.66]
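A minimal sketch of such a line-search step, in Python. The energy functional E, the point x, and the descent vector p are placeholders, and the grid scan is an illustrative stand-in for the bracketing-and-interpolation schemes used in practice:

```python
import numpy as np

def line_search(E, x, p, t_max=1.0, n_grid=200):
    """Approximate line search: return the positive alpha that
    minimizes E(x + t*p) over a grid of trial step lengths t.
    A crude scan for illustration only."""
    ts = np.linspace(t_max / n_grid, t_max, n_grid)   # strictly positive trial steps
    idx = int(np.argmin([E(x + t * p) for t in ts]))
    return ts[idx]

# Usage on a simple quadratic energy, with p the steepest descent vector -g(x):
E = lambda x: 0.5 * float(x @ x)
x0 = np.array([1.0, -2.0])
alpha = line_search(E, x0, -x0)   # ~1.0, the exact minimizer along -g here
```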

The feasibility of a line search step is guaranteed by Lemma 6.  [Pg.66]

A vector p satisfying the condition (48) is called a descent vector at x. By the above lemma, each vector p ∈ ℝⁿ forming an angle of less than 90° with the steepest descent vector (the negative gradient) is a descent vector. So the steepest descent vector (p = -g(x)) can be regarded as the limiting case. [Pg.66]

A vector p ∈ ℝⁿ is a descent vector at x if and only if there is a positive definite matrix M such that p = -M g(x). [Pg.67]
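The "if" direction of this lemma is immediate, assuming condition (48) is the usual descent requirement g(x)ᵀp < 0: for p = -M g(x) with M positive definite,

    g(x)ᵀp = -g(x)ᵀ M g(x) < 0   whenever g(x) ≠ 0,

so E initially decreases along p. Taking M as the unit matrix recovers the steepest descent vector.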

The first condition ensures that the step attains a certain energy decrease. But the step length α may be so small that the progress toward a minimizer is insignificant. To avoid such a situation the second condition is imposed. [Pg.67]
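The two conditions themselves are not reproduced in this excerpt; the standard Wolfe conditions play exactly this role and serve here as stand-ins. A sketch of the test in Python (x and p are numpy arrays, grad returns the gradient of E):

```python
def wolfe_conditions(E, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Standard Wolfe conditions for a step alpha along a descent
    vector p (stand-ins for the two conditions discussed above).
    1. Sufficient decrease: a certain energy decrease, proportional
       to alpha, is attained.
    2. Curvature: rules out uselessly small alpha by requiring the
       slope along p to have flattened out at the new point."""
    slope0 = float(grad(x) @ p)            # negative for a descent vector
    decrease = E(x + alpha * p) <= E(x) + c1 * alpha * slope0
    curvature = float(grad(x + alpha * p) @ p) >= c2 * slope0
    return decrease and curvature
```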


If there is no approximate Hessian available, then the unit matrix is frequently used, i.e., a step is made along the negative gradient. This is the steepest descent method. The unit matrix is arbitrary and has no invariance properties, and thus the... [Pg.2335]

An alternative, and closely related, approach is the augmented Hessian method [25]. The basic idea is to interpolate between the steepest descent method far from the minimum, and the Newton-Raphson method close to the minimum. This is done by adding to the Hessian a constant shift matrix which depends on the magnitude of the gradient. Far from the solution the gradient is large and, consequently, so is the shift d. One... [Pg.2339]
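A minimal sketch of the interpolation idea. The shift is taken proportional to the gradient norm here, which is an illustrative choice; the precise form of the shift d used in ref. [25] is not reproduced:

```python
import numpy as np

def augmented_hessian_step(g, H, scale=1.0):
    """Shifted Newton step p = -(H + d*I)^(-1) g with d ~ |g|.
    Large |g| (far from the minimum): p -> -g/d, steepest descent.
    Small |g| (near the minimum):     p -> -H^(-1) g, Newton-Raphson."""
    d = scale * np.linalg.norm(g)                     # shift grows with the gradient
    return -np.linalg.solve(H + d * np.eye(len(g)), g)
```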

Fletcher R and Powell M J D 1963 A rapidly convergent descent method for minimization Comput. J. 6 163 [Pg.2356]

Example: Compare the steps of a conjugate gradient minimization with the steepest descent method. A molecular system can reach a potential minimum after the second step if the first step proceeds from A to B. If the first step is too large, placing the system at D, the second step still places the system near the minimum (K) because the optimizer remembers the penultimate step. [Pg.59]

This study shows that the steepest descent method can actually be superior to conjugate gradients when the starting structure is some way from the minimum. However, conjugate gradients is much better once the initial strain has been removed. [Pg.289]

The steepest descent method is a first-order minimizer. It uses the first derivative of the potential energy with respect to the Cartesian coordinates. The method moves down the steepest slope of the potential energy surface, following the interatomic forces. The descent is accomplished by adding an increment to the coordinates in the direction of the negative gradient of the potential energy, i.e. the force. [Pg.58]
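A minimal sketch of this procedure (the fixed step length and convergence tolerance are illustrative placeholders; practical minimizers adapt the step or line-search along the force):

```python
import numpy as np

def steepest_descent(grad, x, step=1e-3, tol=1e-6, max_iter=100000):
    """First-order minimizer: increment the coordinates along the
    negative gradient of the potential energy (the force) until the
    forces nearly vanish."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # forces relaxed
            break
        x = x - step * g              # move down the steepest slope
    return x
```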

The steepest descent method rapidly alleviates large forces on atoms. This is especially useful for eliminating the large non-bonded interactions often found in initial structures. Each step in... [Pg.58]

The steepest descent method is quite old and utilizes the intuitive concept of moving in the direction where the objective function changes the most. However, it is clearly not as efficient as the other three. Conjugate gradient utilizes only first-derivative information, as does steepest descent, but generates improved search directions. Newton's method requires second-derivative information but is very efficient, while quasi-Newton retains most of the benefits of Newton's method but utilizes only first-derivative information. All of these techniques are also used with constrained optimization. [Pg.744]
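To make the contrast concrete, a compact sketch of nonlinear conjugate gradient in the Fletcher-Reeves variant (one representative choice; the crude grid line search is for brevity only):

```python
import numpy as np

def fletcher_reeves_cg(E, grad, x, n_iter=100, tol=1e-8):
    """Nonlinear CG: first-derivative information only, like steepest
    descent, but each new direction mixes in the previous one."""
    x = np.asarray(x, dtype=float)
    g = grad(x)
    p = -g                                       # first direction: steepest descent
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        ts = np.linspace(1e-4, 1.0, 200)         # crude line search along p
        alpha = ts[int(np.argmin([E(x + t * p) for t in ts]))]
        x = x + alpha * p
        g_new = grad(x)
        beta = float(g_new @ g_new) / float(g @ g)   # Fletcher-Reeves coefficient
        p = -g_new + beta * p                    # improved search direction
        g = g_new
    return x
```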

The moderator material and optimum sizes, the extraction channel configuration, as well as the converter and reflector material and sizes, were determined using the MCNP-4B code and the steepest descent method to attain a maximum flux density of thermal neutrons at the position of an object to be studied. The calculated data were experimentally verified, showing good agreement. [Pg.435]

For a given Hamiltonian the calculation of the partition function can be done exactly in only a few cases (some of them will be presented below). In general the calculation requires a scheme of approximations. The mean-field approximation (MFA) is a very popular approximation based on the steepest descent method [17,22]. In this case it is assumed that the main contribution to Z is due to fields which are localized in a small region of the functional space. More crudely, for each kind of particle only one field is... [Pg.807]
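The steepest-descent (saddle-point) evaluation behind the MFA, in its standard one-dimensional form (a textbook identity, not quoted from refs. [17,22]): if f has a minimum at φ* and N is large,

    ∫ e^(-N f(φ)) dφ ≈ e^(-N f(φ*)) √(2π / (N f''(φ*))),   with f'(φ*) = 0, f''(φ*) > 0.

In the MFA the single dominant field configuration φ* plays the role of the mean field.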

The most frequently used methods fall between the Newton method and the steepest descent method. These methods avoid direct calculation of the Hessian (the matrix of second derivatives); instead they start with an approximate Hessian and update it at every iteration. [Pg.238]
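The excerpt does not name a particular update; the BFGS formula is the most common choice and is sketched here as a representative example:

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS refinement of an approximate Hessian B, given the
    step s = x_new - x_old and gradient change y = g_new - g_old.
    Starting from a guess (often the unit matrix), B is updated at
    every iteration instead of computing second derivatives."""
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / float(s @ Bs)
            + np.outer(y, y) / float(y @ s))
```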

McWeeny, R., Proc. Roy. Soc. (London) A235, 496 (1956), "The density matrix in self-consistent field theory. I. Iterative construction of the density matrix." The beryllium atom is studied. The steepest descent method is described. [Pg.349]


It is instructive to note that both the steepest-descent and the Newton-Raphson methods lead in the direction of -∇U; however, the steepest-descent method is unable to tell us how far to go in each step, and therefore we have to search for the minimum in a very ineffective way (see Fig. 4.3). [Pg.114]
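In symbols, with H the Hessian of U (a standard comparison, not drawn from the quoted text):

    x_Newton = x - H⁻¹ ∇U(x),    x_SD(α) = x - α ∇U(x).

For a quadratic U the Newton-Raphson step fixes both the direction and the length, reaching the minimum in one step, whereas the steepest-descent step length α must still be found by a line search.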

Schwartz, A., Polak, E., 1997, Family of projected descent methods for optimization problems with simple bounds. Journal of Optimization Theory and Applications 92, 1... [Pg.421]

The integral in Eq. (40) will be taken by the steepest descent method (SDM). The reason why we do not apply an analogous technique directly to the δ-function in Eq. (39) is not only because we want to get rid of the ε integration, but also because the SDM proved more forgiving in terms of accuracy when used to approximate the θ-function rather than the δ-function. [Pg.154]

