The method of steepest descent

A simple method, which has been used to arrive at the minimum sum of squares of a nonlinear model, is that of steepest descent. We know that the gradient of a scalar function is a vector that gives the direction of the greatest increase of the function at any point. In the steepest descent method, we take advantage of this property by moving in the opposite direction to reach a lower function value. Therefore, in this method, the initial vector of parameter estimates is corrected in the direction of the negative gradient of the objective (sum-of-squares) function  [Pg.489]

where the leading scalar is a suitable constant factor and the remaining term is the correction vector to be applied to the estimated value of b to obtain a new estimate of the parameter vector  [Pg.489]
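
As a minimal sketch of this update rule (the exponential model, the synthetic data, and the step factor below are illustrative assumptions, not taken from the cited text), each iteration subtracts a constant multiple of the numerically estimated gradient of the sum of squares from the current parameter estimate:

```python
import numpy as np

def model(x, b):
    # Hypothetical nonlinear model y = b0 * exp(-b1 * x); assumed for illustration.
    return b[0] * np.exp(-b[1] * x)

def objective(b, x, y):
    # Sum of squared residuals, the scalar function to be minimized.
    r = y - model(x, b)
    return np.dot(r, r)

def gradient(b, x, y, h=1e-6):
    # Forward-difference approximation of the gradient of the objective.
    g = np.zeros_like(b)
    f0 = objective(b, x, y)
    for i in range(len(b)):
        bp = b.copy()
        bp[i] += h
        g[i] = (objective(bp, x, y) - f0) / h
    return g

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)

b = np.array([1.0, 0.5])   # initial vector of parameter estimates
mu = 0.02                  # suitable constant step factor (assumed value)
for _ in range(2000):
    b = b - mu * gradient(b, x, y)   # correction along the negative gradient

print(b)   # should move toward roughly [2.0, 1.3]
```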

Let us assume a complex function f(z) exists [see Eqs. (B.8)] such that [Pg.376]

With this function we now seek to evaluate the integral [cf. Eq. (2.22)] [Pg.376]

Moreover, suppose a point z = z₀ exists such that Re f(z) assumes an extremum and Im f(z) = v(x₀, y₀) = const. Because of the definition of f(z) [see Eqs. (B.8)], this also implies that the integrand in Eq. (B.17) assumes a maximum at the point z = z₀. The necessary condition for an extremum of Re f(z) to exist may be stated more explicitly as [Pg.377]

C should pass through the saddle point such that u(x₀, y₀) becomes [Pg.377]

These conditions cause C to be the path of steepest descent from the saddle point. To achieve this result we need to establish a relation between the real and imaginary parts of f(z). [Pg.377]

To evaluate the integral in Eq. (B.17) we now specify the path C according to two criteria, namely [Pg.377]
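
A hedged numerical illustration of the idea (the integrand below is an assumed stand-in, not Eq. (B.17)): expanding the exponent about its saddle point and keeping the Gaussian term gives the steepest-descent (Laplace) estimate of a simple integral, here Γ(N+1), which can be checked against direct quadrature (assuming SciPy is available):

```python
import math
from scipy.integrate import quad

# Illustrative integrand (not Eq. (B.17) from the text):
#   I(N) = integral_0^inf exp(N*ln(t) - t) dt = N!,  with f(t) = N*ln(t) - t.
# f'(t) = N/t - 1 vanishes at the saddle point t0 = N, and f''(t0) = -1/N,
# so the steepest-descent estimate is
#   I(N) ~ exp(f(t0)) * sqrt(2*pi / |f''(t0)|)   (Stirling's formula).
N = 10

exact, _ = quad(lambda t: math.exp(N * math.log(t) - t), 0.0, math.inf)
t0 = N
saddle = math.exp(N * math.log(t0) - t0) * math.sqrt(2.0 * math.pi * N)

print(exact)    # ~ 3628800 = 10!
print(saddle)   # ~ 3598696, within about 1% of the exact value
```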


After substituting (2.3) and (2.4) into (2.1), the integral can be evaluated with the method of steepest descents. The stationary point E = E is given by the equation derived by Miller and George [1972]... [Pg.12]

In the light of the path-integral representation, the density matrix ρ(Q₋, Q₋, β) may be semiclassically represented as ∝ exp[−S₁(Q₋)], where S₁(Q₋) is the Euclidean action on the β-periodic trajectory that starts and ends at the point Q₋ and visits the potential minimum Q = 0 for τ = 0. The one-dimensional tunneling rate, in turn, is proportional to exp[−S₂(Q₋)], where S₂ is the action in the barrier for the closed straight trajectory which goes along the line with constant Q. The integral in (4.32) may be evaluated by the method of steepest descents, which leads to an optimum value of Q₋. This amounts to minimization of the total action S₁ + S₂ over the positions of the bend point Q₋. ... [Pg.68]

Except for the nonlocal last term in the exponent, this expression is recognized as the average of the one-dimensional quantum partition function over the static configurations of the bath. This formula without the last term has been used by Dakhnovskii and Nefedova [1991] to handle a bath of classical anharmonic oscillators. The integral over q was evaluated with the method of steepest descents leading to the most favorable bath configuration. [Pg.78]

The functional B[Q(τ)] actually depends only on the velocity dQ/dτ at the moment when the non-adiabaticity region is crossed. If we take the path integral by the method of steepest descents, considering that the prefactor B[Q(τ)] is much more weakly dependent on the realization of the path than S_ad[Q(τ)], we shall obtain the instanton trajectory for the adiabatic potential V_ad; then B[Q(τ)] will have to be calculated for that trajectory. Since the instanton trajectory crosses the dividing surface twice, we finally have... [Pg.139]

The gradient can be used to optimize the weight vector according to the method of steepest-descent ... [Pg.8]
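
A minimal sketch of such a weight update for a linear model (the data, learning rate, and loss below are assumptions for illustration only):

```python
import numpy as np

# Steepest-descent weight update for a linear model y ~ X @ w.
# Data, learning rate, and model form are illustrative assumptions.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true

w = np.zeros(3)
eta = 0.1                                        # learning rate (assumed)
for _ in range(200):
    grad = -2.0 * X.T @ (y - X @ w) / len(y)     # gradient of the mean squared error
    w -= eta * grad                              # move opposite to the gradient
print(w)   # approaches w_true
```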

The choice ya = ra is the method of steepest descent. If the ya are taken to be the vectors ei in rotation, the method turns out to be the Gauss-Seidel iteration. If each ya is taken to be that ei for which ei·ra is greatest, the method is the method of relaxation (often attributed to Southwell but actually known to Gauss). An alternative choice is the ei for which the reduction Eq. (2-10) in norm is greatest. [Pg.62]
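
The three choices can be compared on a small symmetric positive definite system; the sketch below (matrix and right-hand side are assumed for illustration) performs exact minimization along each chosen direction y:

```python
import numpy as np

def step_along(A, b, x, y):
    # One step of the general scheme: minimize the error along direction y,
    # i.e. x <- x + ((y.r)/(y.A y)) y, where r = b - A x is the residual.
    r = b - A @ x
    return x + (y @ r) / (y @ A @ y) * y

# Illustrative symmetric positive definite system (assumed, not from the text).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
I = np.eye(3)

x_sd, x_gs, x_rel = np.zeros(3), np.zeros(3), np.zeros(3)
for k in range(30):
    x_sd = step_along(A, b, x_sd, b - A @ x_sd)      # y = r: steepest descent
    x_gs = step_along(A, b, x_gs, I[k % 3])          # y = ei in rotation: Gauss-Seidel
    i = np.argmax(np.abs(b - A @ x_rel))             # ei with the largest residual
    x_rel = step_along(A, b, x_rel, I[i])            #   component: relaxation (Southwell)

print(x_sd, x_gs, x_rel, np.linalg.solve(A, b))      # all approach the same solution
```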

Another commonly used method is the method of steepest descent. If A is any positive definite matrix, ordinarily the identity I, form... [Pg.86]

We confine ourselves here to the minimal residual method and the method of steepest descent relating to two-layer schemes. As usual, the explicit scheme is considered first ... [Pg.732]

The method of steepest descent. The explicit method of steepest descent is given by the formulas... [Pg.734]

If the matrix A is positive definite, i.e. it is symmetric and has positive eigenvalues, the solution of the linear equation system is equivalent to the minimization of the bilinear form given in Eq. (64). One of the best established methods for the solution of minimization problems is the method of steepest descent. The term steepest descent alludes to a picture where the cost function F is visualized as a landscape... [Pg.166]
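
A short sketch of this equivalence, with an assumed 2 x 2 positive definite matrix: each exactly line-searched steepest-descent step lowers the quadratic form F, and the iterates approach the solution of Ax = b:

```python
import numpy as np

# Assumed SPD example: solving A x = b is equivalent to minimizing
# F(x) = 0.5 * x.A x - b.x, and each steepest-descent step with an
# exact line search lowers F monotonically.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

F = lambda x: 0.5 * x @ A @ x - b @ x

x = np.array([4.0, -3.0])            # arbitrary starting point on the "landscape"
for k in range(8):
    r = b - A @ x                    # residual = negative gradient of F
    tau = (r @ r) / (r @ A @ r)      # optimal step length along -grad F
    x = x + tau * r
    print(k, F(x))                   # F decreases at every step
print(x, np.linalg.solve(A, b))      # both are approximately the minimizer of F
```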

The method of steepest descent uses only first-order derivatives to determine the search direction. Alternatively, Newton s method for single-variable optimization can be adapted to carry out multivariable optimization, taking advantage of both first- and second-order derivatives to obtain better search directions [1]. However, second-order derivatives must be evaluated, either analytically or numerically, and multimodal functions can make the method unstable. Therefore, while this method is potentially very powerful, it also has some practical difficulties. [Pg.40]
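
The contrast can be seen on a simple curved-valley test function (the function, step size, and starting point below are assumptions): steepest descent uses only the gradient, while the Newton step also solves against the Hessian:

```python
import numpy as np

# Comparison sketch on f(x, y) = (x - 1)**2 + 10*(y - x**2)**2 (Rosenbrock-like,
# chosen for illustration only).
def grad(x):
    return np.array([2*(x[0] - 1) - 40*x[0]*(x[1] - x[0]**2),
                     20*(x[1] - x[0]**2)])

def hess(x):
    return np.array([[2 - 40*(x[1] - x[0]**2) + 80*x[0]**2, -40*x[0]],
                     [-40*x[0],                              20.0]])

x_sd = np.array([-1.0, 1.0])
x_nt = np.array([-1.0, 1.0])
for _ in range(100):
    x_sd = x_sd - 0.01 * grad(x_sd)                        # first-order (steepest descent) step
    x_nt = x_nt - np.linalg.solve(hess(x_nt), grad(x_nt))  # second-order (Newton) step

print(x_sd)   # still crawling along the curved valley, far from (1, 1)
print(x_nt)   # essentially at the minimum (1, 1)
```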

An extension of the linearization technique discussed above may be used as a basis for design optimization. Such an application to natural gas pipeline systems was reported by Flanigan (F4) using the so-called constrained derivatives (W4) and the method of steepest descent. We offer a more concise derivation of this method following a development by Bryson and Ho (B14). [Pg.174]

Applying the method of steepest descent (or saddle-point method) to Eq. (3.49) yields... [Pg.31]

The search for the optimum usually starts from the coordinates in the plane of the first two eigenvectors. However, to avoid the iteration (usually done with the method of steepest descent) stopping at a relative minimum, it is advisable to repeat the search from a different starting position, such as that given by the coordinates of two original variables. [Pg.104]
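
A minimal sketch of this restart strategy (the one-dimensional function and starting positions are illustrative assumptions, not the criterion described in the text):

```python
import numpy as np

def f(x):
    # An assumed function with several local minima.
    return np.sin(3 * x) + 0.1 * (x - 1.0) ** 2

def df(x, h=1e-6):
    # Central-difference derivative.
    return (f(x + h) - f(x - h)) / (2 * h)

def steepest_descent(x0, step=0.01, iters=2000):
    x = x0
    for _ in range(iters):
        x -= step * df(x)
    return x

# Repeat the search from different starting positions and keep the best result,
# so that the iteration does not get stuck at a relative (local) minimum.
starts = [-2.0, 0.0, 2.0, 4.0]
candidates = [steepest_descent(x0) for x0 in starts]
best = min(candidates, key=f)
print(candidates, best)   # different starts reach different minima; keep the lowest
```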

The simplex procedure has been tested on difficult surfaces with spiral valleys and found superior to the older established methods such as the method of steepest descents. If, however, the surface is well behaved, or if by an intelligent guess one can get close to the minimum, then the older methods can be efficient. [Pg.106]

In the method of steepest descents one calculates the gradient at a point. The method of attack depends on whether this gradient may be calculated analytically or numerically (which requires calculations at N + 1 points for an N-dimensional surface), and one moves along this direction until the lowest point is reached, when a new gradient is calculated. When one is close to the minimum and the gradient is small it is necessary to have a method which is quadratically convergent, and to calculate the general quadratic function for N dimensions numerically requires (N + 1)(N + 2)/2 function evaluations. [Pg.106]
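
A sketch of the procedure as described, under assumed choices of test function and line-search resolution: the gradient is estimated from N + 1 function evaluations, and the move along the negative gradient continues to the lowest sampled point on that line:

```python
import numpy as np

def f(x):
    # Assumed test surface with minimum at (2, -1).
    return (x[0] - 2.0) ** 2 + 5.0 * (x[1] + 1.0) ** 2

def numerical_gradient(f, x, h=1e-6):
    f0 = f(x)                      # 1 evaluation ...
    g = np.zeros_like(x)
    for i in range(x.size):        # ... plus N more, i.e. N + 1 in total
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

def line_minimum(f, x, d, t_max=1.0, n=200):
    # Crude sampling search for the lowest point along x + t*d.
    ts = np.linspace(0.0, t_max, n)
    return x + ts[np.argmin([f(x + t * d) for t in ts])] * d

x = np.array([0.0, 0.0])
for _ in range(20):
    g = numerical_gradient(f, x)
    x = line_minimum(f, x, -g)     # move along -g until the lowest point is reached
print(x)                           # approximately (2, -1)
```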

Evaluating Eq. (51) by the method of steepest descents for large R, 8, n, we find that the saddle point n is given by... [Pg.253]

The inversion of this transform gives a somewhat cumbersome integral, of which the physical meaning is far from obvious, and Lighthill and Whitham naturally prefer to elucidate from this form the asymptotic behaviour of the transform, by the method of steepest descents. The method presented here also uses the transform without the need for inversion and obtains a description of the wave in terms of its moments. [Pg.138]

This ultrasimple classical theory is, of course, too crude for practical applications, especially for highly excited states of the parent molecule. Its usefulness gradually diminishes as the degree of vibrational excitation increases, i.e., as the initial wavefunction becomes more and more oscillatory. If both wavefunctions oscillate rapidly, they can be approximated by semiclassical WKB wavefunctions and the radial overlap integral of the bound and the continuum wavefunctions can subsequently be evaluated by the method of steepest descent. This leads to analytical expressions for the spectrum (Child 1980, 1991 ch. 5; Tellinghuisen 1985, 1987). In particular, relation (13.2), which relates the coordinate R to the energy E, is replaced by... [Pg.316]

Using the method of steepest-descent one can show that approximately... [Pg.193]

It should be noted that the expressions for the IC and ISC cases, Eqs. (71) and (72), are quite similar except for the electronic matrix elements and energy gaps. Although the Fourier integral involved in W given above can easily be carried out numerically, analytical expressions are often desired; for this purpose, the method of steepest-descent [45-51] (saddle-point method) is commonly used. Take Eq. (73) as an example. W will first be written as [Pg.196]

For arbitrary potentials, given the low frequencies and high intensities employed in current experiments, for the numerical evaluation of the amplitude (4.1) in the form (4.4) the method of steepest descent [also known as the saddle-point approximation (SPA)] is the method of choice. Thus, we must determine the values of k, t′, and t for which the action S_p(t, t′, k) is stationary, so that its partial derivatives with respect to these variables vanish. This condition gives the equations [Pg.69]

In the one-dimensional search methods there are two principal variations: some methods employ only first derivatives of the given function (the gradient methods), whereas others (Newton's method and its variants) require explicit knowledge of the second derivatives. The methods in this last category have so far found very limited use in quantum chemistry, so that we shall refer to them only briefly at the end of this section, and concentrate on the gradient methods. The oldest of these is the method of steepest descent. [Pg.43]

In order to find the shape of the wave front at large values of x and t, one may perform an asymptotic expansion of the integral in equation (119) for t and x approaching infinity with the ratio x/t fixed. By means of an interesting application of the method of steepest descents, the reader may show that... [Pg.125]


See other pages where The method of steepest descent is mentioned: [Pg.182]    [Pg.88]    [Pg.86]    [Pg.732]    [Pg.40]    [Pg.174]    [Pg.178]    [Pg.32]    [Pg.32]    [Pg.32]    [Pg.186]    [Pg.220]    [Pg.292]    [Pg.293]    [Pg.158]    [Pg.129]    [Pg.133]    [Pg.732]    [Pg.502]    [Pg.16]    [Pg.144]    [Pg.145]    [Pg.404]   

