
Conjugate-Gradient Methods

Modified Newton methods require calculation of second derivatives. There might be cases where these derivatives are not available analytically. One may then calculate them by finite differences (Edgar and Himmelblau, 1988; Gill et al., 1981; Press et al., 1992). The latter, however, requires a considerable number of [Pg.76]
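The cost the excerpt alludes to can be made concrete: approximating a full Hessian by central finite differences requires on the order of n² extra function evaluations. A minimal sketch (illustrative only; the function names and the test function are not from the source):

```python
import numpy as np

def numerical_hessian(f, x, h=1e-5):
    """Approximate the Hessian of f at x by central finite differences.

    Each entry needs four extra function evaluations, which is the
    considerable cost the text refers to for large problems.
    """
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            x_pp = x.copy(); x_pp[i] += h; x_pp[j] += h
            x_pm = x.copy(); x_pm[i] += h; x_pm[j] -= h
            x_mp = x.copy(); x_mp[i] -= h; x_mp[j] += h
            x_mm = x.copy(); x_mm[i] -= h; x_mm[j] -= h
            H[i, j] = (f(x_pp) - f(x_pm) - f(x_mp) + f(x_mm)) / (4 * h * h)
    return H

# Quadratic test function: f(x, y) = x^2 + 3xy + 2y^2
# has the constant Hessian [[2, 3], [3, 4]].
f = lambda x: x[0] ** 2 + 3 * x[0] * x[1] + 2 * x[1] ** 2
H = numerical_hessian(f, np.array([1.0, 2.0]))
```

For n parameters this costs roughly 4n² evaluations of f, which motivates the gradient-only methods discussed next.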

Conjugate gradient-type methods form a class of minimization procedures that accomplish two objectives  [Pg.77]

These methods are suitable for problems with a very large number of parameters. They are essential in circumstances when methods based on matrix factorization are not viable because the relevant matrix is too large or too dense (Gill et al., 1981). [Pg.77]

Two versions of the method have been formulated (Scales, 1986)  [Pg.77]

Scales (1986) recommends the Polak-Ribiere version because it has slightly better convergence properties. Scales also gives a single algorithm that serves both methods, which differ only in the formula for updating the search vector. [Pg.77]
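The shared-algorithm structure Scales describes can be sketched as one routine in which only the formula for the scalar β changes between the two variants. This is a hedged illustration, not Scales's algorithm verbatim: the function name, the crude Armijo backtracking line search, and the restart safeguards are my assumptions.

```python
import numpy as np

def conjugate_gradient_minimize(f, grad, x0, variant="PR",
                                tol=1e-8, max_iter=500):
    """Nonlinear conjugate gradient minimization (sketch).

    The Fletcher-Reeves ("FR") and Polak-Ribiere ("PR") variants
    differ only in the scalar beta used to update the search vector.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                          # first step is steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:           # safeguard: restart if not a descent direction
            d = -g
        alpha = 1.0                 # crude backtracking line search (illustrative)
        for _ in range(60):
            if f(x + alpha * d) <= f(x) + 1e-4 * alpha * g.dot(d):
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if variant == "FR":         # Fletcher-Reeves
            beta = g_new.dot(g_new) / g.dot(g)
        else:                       # Polak-Ribiere
            beta = g_new.dot(g_new - g) / g.dot(g)
            beta = max(beta, 0.0)   # common safeguard (automatic restart)
        d = -g_new + beta * d       # update the search vector
        x, g = x_new, g_new
    return x

# Usage on a simple quadratic surface (my test case, not from the source):
f = lambda x: 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)
grad = lambda x: np.array([x[0], 10.0 * x[1]])
x_min = conjugate_gradient_minimize(f, grad, [3.0, 2.0])
```

Note that only the two lines computing `beta` distinguish the methods, which is exactly the single-algorithm structure the excerpt attributes to Scales.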

In the steepest-descent method the gradient is calculated after each iteration. Therefore, depending on the surface, the search direction can change at every step, for instance if the molecule is moved through the minimum. This can be avoided if the history of gradients is stored and used to modify subsequent steps. In a [Pg.64]

The computation involved in each cycle is more complex and time-consuming than for the steepest-descent method, but convergence is generally more rapid. Two commonly used examples are the Fletcher-Reeves and the Polak-Ribiere methods [175-178]. [Pg.65]

There are several ways of choosing this value. Some of the names associated with these methods are Fletcher-Reeves (FR), Polak-Ribiere (PR) and Hestenes-Stiefel (HS). Their definitions of β are given in eq. (12.8). [Pg.385]
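The three definitions referred to are standard; written out (in assumed notation, with $\mathbf{g}_k$ the gradient and $\mathbf{d}_k$ the search direction at step $k$) they are:

```latex
\beta^{\mathrm{FR}} = \frac{\mathbf{g}_{k+1}^{T}\,\mathbf{g}_{k+1}}{\mathbf{g}_{k}^{T}\,\mathbf{g}_{k}}, \qquad
\beta^{\mathrm{PR}} = \frac{\mathbf{g}_{k+1}^{T}\,(\mathbf{g}_{k+1}-\mathbf{g}_{k})}{\mathbf{g}_{k}^{T}\,\mathbf{g}_{k}}, \qquad
\beta^{\mathrm{HS}} = \frac{\mathbf{g}_{k+1}^{T}\,(\mathbf{g}_{k+1}-\mathbf{g}_{k})}{\mathbf{d}_{k}^{T}\,(\mathbf{g}_{k+1}-\mathbf{g}_{k})}
```

All three coincide on a quadratic surface with exact line searches; they differ, and PR is usually preferred, on general nonlinear surfaces.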

The Polak-Ribiere prescription is usually preferred in practice. Conjugate gradient methods have much better convergence characteristics than the steepest descent, but they are again only able to locate minima. They do require slightly more storage than the steepest descent, since the previous gradient also must be saved. [Pg.318]

The main problem with the steepest descent method is the partial undoing of the [Pg.318]


Davis, M. E., McCammon, J. A.: Solving the finite difference linearized Poisson-Boltzmann equation: A comparison of relaxation and conjugate gradients methods. J. Comp. Chem. 10 (1989) 386-394. [Pg.195]

A conjugate gradient method differs from the steepest descent technique by using both the current gradient and the previous search direction to drive the minimization. A conjugate gradient method is a first-order minimizer. [Pg.59]

HyperChem provides two versions of the conjugate gradient method, Fletcher-Reeves and Polak-Ribiere. Polak-Ribiere is more refined and is the default choice in HyperChem. [Pg.59]

This technique is available only for the MM+ force field. As is true for the conjugate gradient methods, you should not use this algorithm when the initial interatomic forces are very large (meaning, the molecular structure is far from a minimum). [Pg.60]

Several variants of the conjugate gradients method have been proposed. The formulation given in Equation (5.7) is the original Fletcher-Reeves algorithm. Polak and Ribiere proposed an alternative form for the scalar constant γ: [Pg.285]

Table 5.1 A comparison of the steepest descents and conjugate gradients methods for an initial refinement and a stringent minimisation. [Pg.289]

Note Because of its neglect of off-diagonal blocks, this optimizer can sometimes oscillate and fail to converge. In this case, use a conjugate gradient method. [Pg.60]

Conjugate Gradient methods compute the conjugate directions hj by iterative computation involving the gradient gj without... [Pg.305]
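The iterative construction of the conjugate directions from the gradient is clearest in the linear case, solving Ax = b for a symmetric positive definite A: each direction is built from the current residual (the negative gradient of the quadratic) and the previous direction, and the matrix is only ever touched through matrix-vector products. A standard sketch (names are illustrative):

```python
import numpy as np

def linear_cg(A, b, x0=None, tol=1e-10, max_iter=None):
    """Linear conjugate gradient solver for A x = b, A symmetric
    positive definite.  The conjugate directions d_j are generated
    iteratively from the residuals r_j (negative gradients of the
    quadratic 0.5 x^T A x - b^T x); no factorization of A is needed.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x              # residual = negative gradient
    d = r.copy()
    if max_iter is None:
        max_iter = n           # exact arithmetic converges in <= n steps
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = r.dot(r) / d.dot(Ad)
        x += alpha * d
        r_new = r - alpha * Ad
        beta = r_new.dot(r_new) / r.dot(r)   # keeps directions A-conjugate
        d = r_new + beta * d
        r = r_new
    return x

# Usage on a small SPD system (my example, not from the source):
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = linear_cg(A, b)
```

Because only products `A @ d` are required, this is viable precisely when the matrix is too large for factorization, as the earlier excerpt notes.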

The procedure uses second derivative information and can be quite efficient compared to conjugate gradient methods. However, the neglect of coupling in the Hessian matrix can lead to situations where oscillation is possible. Conjugate gradient methods... [Pg.306]

Hestenes, M. R. Conjugate Gradient Methods in Optimization, Springer-Verlag (1980). [Pg.422]

The conjugate gradient method [25] is used to minimize the function. Minimization is done with respect to ... for a given value ... [Pg.695]

The conjugate gradient method is one of the oldest; in the Fletcher-Reeves approach, the search direction is given by... [Pg.238]
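The truncated formula is presumably the standard Fletcher-Reeves recurrence; a hedged reconstruction, with $\mathbf{g}_k$ the gradient and $\mathbf{d}_k$ the search direction at step $k$:

```latex
\mathbf{d}_{k+1} = -\mathbf{g}_{k+1} + \beta_k \mathbf{d}_k,
\qquad
\beta_k^{\mathrm{FR}} = \frac{\mathbf{g}_{k+1}^{T}\,\mathbf{g}_{k+1}}{\mathbf{g}_{k}^{T}\,\mathbf{g}_{k}}
```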

