Big Chemical Encyclopedia

The conjugate gradient method

The conjugate gradient method is based on the same ideas as steepest descent, and the iteration process is very similar to it. [Pg.137]

However, the directions of ascent l(m) are selected in a different way. On the first step we use the direction of the steepest ascent. [Pg.137]

On the next step the direction of ascent is a linear combination of the steepest ascent on this step and the direction of ascent l(m) on the previous step. [Pg.137]

A linear line search in the conjugate gradient method [Pg.137]

Let us find the minimum of the last functional with respect to k. To do so, we calculate the first variation of f(k). [Pg.137]
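The line search described above has a closed form when the functional is quadratic: minimizing f(x + k·l) with respect to the step length k gives k = (lᵀr)/(lᵀAl), where r is the residual (the negative gradient). The following sketch illustrates this; the matrix and vector values are illustrative assumptions, not taken from the text.

```python
import numpy as np

def exact_line_search(A, b, x, l):
    """Step length k minimizing the quadratic functional
    f(x) = 0.5 x^T A x - b^T x along the direction l.

    Setting d/dk f(x + k*l) = 0 gives k = (l^T r) / (l^T A l),
    where r = b - A x is the residual (negative gradient)."""
    r = b - A @ x
    return (l @ r) / (l @ (A @ l))

# Small symmetric positive definite example (illustrative values)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
l = b - A @ x                     # first step: steepest descent direction
k = exact_line_search(A, b, x, l)
x_new = x + k * l

# At the minimizer along l, the directional derivative vanishes
grad_new = A @ x_new - b
print(abs(grad_new @ l))          # ~0
```

The vanishing of the directional derivative at the new point is exactly the condition that makes the next gradient orthogonal to the previous search direction, which the conjugate direction update then exploits.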


HyperChem provides two versions of the conjugate gradient method, Fletcher-Reeves and Polak-Ribiere. Polak-Ribiere is more refined and is the default choice in HyperChem. [Pg.59]

This technique is available only for the MM+ force field. As is true for the conjugate gradient methods, you should not use this algorithm when the initial interatomic forces are very large (meaning, the molecular structure is far from a minimum). [Pg.60]

Several variants of the conjugate gradients method have been proposed. The formulation given in Equation (5.7) is the original Fletcher-Reeves algorithm. Polak and Ribiere proposed an alternative form for the scalar constant γ ... [Pg.285]
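The two scalar constants mentioned above can be written side by side. In the sketch below, g_old and g_new are the gradients at successive iterates; the numerical values are illustrative assumptions only.

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    """Fletcher-Reeves scalar: ratio of squared gradient norms."""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    """Polak-Ribiere scalar: uses the change in gradient g_new - g_old."""
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

# Illustrative gradients at two successive iterates
g_old = np.array([1.0, -2.0, 0.5])
g_new = np.array([0.3, 0.1, -0.4])
print(beta_fletcher_reeves(g_new, g_old))
print(beta_polak_ribiere(g_new, g_old))
```

For an exactly quadratic function, successive gradients are orthogonal (g_new · g_old = 0) and the two formulas coincide; away from the quadratic regime they differ, which is one reason Polak-Ribiere often behaves better in practice.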

The conjugate gradient method [25] is used to minimize the function. Minimization is done with respect to ... for a given value ... [Pg.695]

The conjugate gradient method is one of the oldest. In the Fletcher-Reeves approach, the search direction is given by ... [Pg.238]

There are different variants of the conjugate gradient method, each of which corresponds to a different choice of the update parameter. Some of these different methods and their convergence properties are discussed in Appendix D. The time has been discretized into N time steps (t_i = i × δt, where i = 0, 1, ..., N − 1) and the parameter space that is being searched in order to maximize the value of the objective functional is composed of the values of the electric field strength in each of the time intervals. [Pg.53]

In CFD, a number of different iterative solvers for linear algebraic systems have been applied. Two of the most successful and most widely used methods are conjugate gradient and multigrid methods. The basic idea of the conjugate gradient method is to transform the linear equation system Eq. (38) into a minimization problem. [Pg.166]
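The transformation into a minimization problem can be sketched concretely: for a symmetric positive definite matrix A, solving A x = b is equivalent to minimizing the quadratic form ½xᵀAx − bᵀx, and the conjugate gradient iteration does exactly that. The following is a minimal sketch with an illustrative 2×2 system, not the system of Eq. (38).

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b (A symmetric positive definite) by minimizing
    the quadratic form 0.5 x^T A x - b^T x."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                  # residual = negative gradient
    p = r.copy()                   # first search direction: steepest descent
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))       # True
```

Each iteration costs one matrix-vector product plus a few vector operations, which is what makes the method attractive for the large systems arising in CFD.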

In contrast to the conjugate gradient method, the multigrid method is rather a general framework for iterative solvers than a specific method. The multigrid method exploits the fact that the iteration error... [Pg.167]

Sparse matrices are ones in which the majority of the elements are zero. If the structure of the matrix is exploited, the solution time on a computer is greatly reduced. See Duff, I. S., J. K. Reid, and A. M. Erisman (eds.), Direct Methods for Sparse Matrices, Clarendon Press, Oxford (1986); Saad, Y., Iterative Methods for Sparse Linear Systems, 2d ed., Society for Industrial and Applied Mathematics, Philadelphia (2003). The conjugate gradient method is one method for solving sparse matrix problems, since it only involves multiplication of a matrix times a vector. Thus the sparseness of the matrix is easy to exploit. The conjugate gradient method is an iterative method that is guaranteed (in exact arithmetic) to converge in at most n iterations, where the matrix is n × n. [Pg.42]
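Because the method only needs matrix-vector products, the matrix never has to be stored densely: a callback that applies the operator is enough. The sketch below, a hypothetical illustration rather than an example from the text, uses a tridiagonal operator applied in O(n) time and runs at most n iterations.

```python
import numpy as np

def cg_matvec(matvec, b, n_iter):
    """Conjugate gradient using only a matrix-vector product callback,
    so a sparse matrix never needs to be stored densely."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-28:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Tridiagonal SPD operator (1D Laplacian stencil) in O(n) time and storage
n = 50
def matvec(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = cg_matvec(matvec, b, n_iter=n)   # at most n iterations
r = b - matvec(x)
print(np.linalg.norm(r) < 1e-8)
```

In floating-point arithmetic the n-iteration bound is not exact, so production codes iterate to a residual tolerance instead; libraries such as SciPy expose this interface as `scipy.sparse.linalg.cg`.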

We can now qualitatively describe the conjugate-gradient method for minimizing a general function, E(x), where x is an N-dimensional vector. We begin with an initial estimate, x0. Our first iterate is chosen to lie along the direction defined by d0 = −∇E(x0), so x1 = x0 + α0 d0. Unlike the simple example... [Pg.72]

In generating a third iterate for the conjugate-gradient method, we now estimate the search direction by −∇E(x2) but insist that the search direction is orthogonal to both d0 and d1. This idea is then repeated for subsequent iterations. [Pg.72]
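For the quadratic model underlying the method, "orthogonal" here means A-conjugate: successive search directions satisfy d_iᵀ A d_j = 0 for i ≠ j. This can be checked numerically; the random SPD matrix below is an illustrative assumption, not an example from the text.

```python
import numpy as np

def cg_directions(A, b, n_steps):
    """Run conjugate gradient on 0.5 x^T A x - b^T x and record the
    search directions d_i to check their mutual A-conjugacy."""
    x = np.zeros_like(b)
    r = b - A @ x
    d = r.copy()
    dirs = []
    rs = r @ r
    for _ in range(n_steps):
        dirs.append(d.copy())
        Ad = A @ d
        alpha = rs / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        d = r + (rs_new / rs) * d
        rs = rs_new
    return dirs

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # random symmetric positive definite matrix
b = rng.standard_normal(4)
d0, d1, d2 = cg_directions(A, b, 3)

# Successive directions are conjugate: d_i^T A d_j ~ 0 for i != j
print(abs(d0 @ A @ d1), abs(d0 @ A @ d2), abs(d1 @ A @ d2))
```

It is this conjugacy, rather than plain orthogonality of the directions themselves, that prevents a new step from spoiling the minimization already achieved along earlier directions.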

A useful source for an in-depth discussion of the conjugate-gradient method is J. R. Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain (http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf). [Pg.81]

The conjugate gradient method was discovered independently by Hestenes and Stiefel at about the same time. It was named in a joint paper. The original articles are ... [Pg.35]

While Huang and Ozisik solved the spacewise variation of wall heat flux for a laminar forced convection problem, Silva Neto and Ozisik [57] used the conjugate gradient method and the adjoint equation simultaneously to solve for the timewise-varying strength of a two plane heat source. [Pg.75]

Kammerer WJ, Nashed MZ (1972) On the convergence of the conjugate gradient method for singular linear operator equations. SIAM J Numer Anal 9:165-181 [Pg.95]


