Big Chemical Encyclopedia


Parameter gradient methods

The gradient of the PES (the force) can in principle be calculated by finite difference methods. This is, however, extremely inefficient, requiring many evaluations of the wave function. Gradient methods in quantum chemistry are fortunately now very advanced, and analytic gradients are available for a wide variety of ab initio methods [123-127]. Note that if the wave function depends on a set of parameters X_i, for example, the expansion coefficients of the basis functions used to build the orbitals in molecular orbital (MO) theory ... [Pg.267]
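
To see why finite differencing scales poorly, note that a central-difference gradient already costs two full energy evaluations per coordinate, i.e. 6N wave-function calculations for N atoms. The sketch below is a minimal illustration; `energy` is a placeholder for an arbitrary PES evaluator, not any particular quantum-chemistry interface.

```python
import numpy as np

def fd_gradient(energy, x, h=1e-4):
    """Central-difference gradient: 2 * len(x) energy evaluations.
    Each energy() call would be a full wave-function calculation,
    which is why analytic gradients are strongly preferred."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (energy(xp) - energy(xm)) / (2.0 * h)
    return g

# Cheap stand-in for the PES (a quadratic bowl):
E = lambda x: 0.5 * np.dot(x, x)
print(fd_gradient(E, np.array([1.0, -2.0, 0.5])))  # ~ [1.0, -2.0, 0.5]
```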

To determine the optimal parameters, traditional methods such as conjugate gradient and simplex are often not adequate, because they tend to get trapped in local minima. To overcome this difficulty, global search methods such as the genetic algorithm (GA) can be employed [31,32]. The GA is a general-purpose functional-minimization procedure that requires as input an evaluation, or test, function expressing how well a particular laser pulse achieves the target. Tests have shown that several thousand evaluations of the test function may be required to determine the parameters of the optimal fields [17]. This presents no difficulty in the simple, pure-state model discussed above. [Pg.253]
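
A minimal real-coded GA is sketched below. The fitness function stands in for the pulse-evaluation step, and the parameterization, population size, and mutation scale are illustrative assumptions rather than values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_minimize(fitness, n_params, pop=40, gens=200,
                     mut_sigma=0.1, bounds=(-1.0, 1.0)):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. fitness(p) scores how well a parameter vector
    (e.g. a laser-pulse parameterization) meets the target; lower is
    better in this sketch."""
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, n_params))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in P])
        new = [P[scores.argmin()]]                  # elitism: keep the best
        while len(new) < pop:
            i, j = rng.integers(pop, size=2)        # tournament of two
            a = P[i] if scores[i] < scores[j] else P[j]
            i, j = rng.integers(pop, size=2)
            b = P[i] if scores[i] < scores[j] else P[j]
            w = rng.random(n_params)                # blend crossover
            child = w * a + (1.0 - w) * b
            child += rng.normal(0.0, mut_sigma, n_params)  # mutation
            new.append(np.clip(child, lo, hi))
        P = np.array(new)
    scores = np.array([fitness(p) for p in P])
    return P[scores.argmin()], scores.min()

# Toy test function: minimum at p = 0.3 in every coordinate.
best, val = genetic_minimize(lambda p: float(np.sum((p - 0.3) ** 2)), n_params=5)
```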

There are different variants of the conjugate gradient method, each of which corresponds to a different choice of the update parameter C_k. Some of these variants and their convergence properties are discussed in Appendix D. The time has been discretized into N time steps (t_i = i × δt, where i = 0, 1, ..., N - 1), and the parameter space searched in order to maximize the value of the objective functional consists of the values of the electric field strength in each of the time intervals. [Pg.53]
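
As an illustration of how the update parameter distinguishes the variants, the sketch below implements conjugate-gradient ascent with the two classic formulas (Fletcher-Reeves and Polak-Ribière). The fixed step size and the abstract `grad` callback are simplifying assumptions; a line search along each direction is the usual practice.

```python
import numpy as np

def conjugate_gradient_max(grad, x0, variant="PR", steps=100, lr=0.1):
    """Conjugate-gradient *ascent* on a discretized control problem.
    x0 holds the field strength in each of the N time intervals
    (t_i = i * dt); grad(x) returns dJ/dx for the objective J.
    The update parameter C_k distinguishes the variants:
      Fletcher-Reeves:  C_k = g1.g1 / g0.g0
      Polak-Ribiere:    C_k = g1.(g1 - g0) / g0.g0
    """
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = g.copy()                       # initial ascent direction
    for _ in range(steps):
        x = x + lr * d                 # fixed step; a line search is usual
        g_new = grad(x)
        denom = g @ g
        if variant == "FR":
            ck = (g_new @ g_new) / denom
        else:                          # Polak-Ribiere, restarted if negative
            ck = max(0.0, g_new @ (g_new - g)) / denom
        d = g_new + ck * d
        g = g_new
    return x
```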

Instead of using repeated solution of a suitable eigenvalue equation to optimize the orbitals, as in conventional forms of SCF theory, we have found it more convenient to optimize by a gradient method based on direct evaluation of the energy functional (4), orthonormalization being restored after every parameter variation. Although many iterations are required, the energy evaluation is extremely rapid, the process is very stable, and any constraints on the parameters (e.g., due to spatial symmetry or choice of some type of localization) are very easily imposed. It is also a simple matter to optimize with respect to non-linear parameters such as orbital exponents. [Pg.167]
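
The scheme of a gradient step followed by restored orthonormality can be sketched as below. Symmetric (Löwdin) orthonormalization is one common way to restore the constraint and is an assumption here, as is the placeholder `energy_grad` callback; the original work does not specify these details.

```python
import numpy as np

def lowdin_orthonormalize(C):
    """Symmetric (Lowdin) orthonormalization of orbital coefficients C
    (columns = orbitals): C -> C (C^T C)^(-1/2)."""
    s, U = np.linalg.eigh(C.T @ C)     # C^T C is symmetric positive definite
    return C @ (U @ np.diag(s ** -0.5) @ U.T)

def optimize_orbitals(energy_grad, C0, lr=0.05, steps=500):
    """Plain gradient descent on the energy functional, restoring
    orthonormality after every parameter variation. energy_grad(C) is
    a placeholder for dE/dC of the actual energy functional."""
    C = lowdin_orthonormalize(np.asarray(C0, dtype=float))
    for _ in range(steps):
        C = C - lr * energy_grad(C)    # unconstrained gradient step
        C = lowdin_orthonormalize(C)   # project back onto the constraint
    return C
```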

Basically, two classes of search procedures apply to non-linear parameter estimation (Nash and Walker-Smith, 1987). The first is derived from Newton's gradient method, and numerous improvements on it have been developed. The second uses direct search techniques, one of which, the Nelder-Mead search algorithm, is derived from a simplex-like approach. Many of these methods are part of important mathematical computer-based program packages (e.g., IMSL, BMDP, MATLAB). [Pg.108]
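
Both families are available in standard numerical packages today; for instance, `scipy.optimize.minimize` exposes a quasi-Newton gradient method (BFGS) and the Nelder-Mead simplex under one interface. The exponential-decay fit below is a made-up example problem, not one from the cited text.

```python
import numpy as np
from scipy.optimize import minimize

# Toy non-linear parameter estimation: fit y = a * exp(-b t) to data.
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(1).normal(size=t.size)

def ssr(p):
    a, b = p
    return float(np.sum((y - a * np.exp(-b * t)) ** 2))

# Newton-family gradient method: BFGS (finite-difference gradients).
fit_gradient = minimize(ssr, x0=[1.0, 1.0], method="BFGS")

# Direct search: Nelder-Mead simplex, no derivatives required.
fit_simplex = minimize(ssr, x0=[1.0, 1.0], method="Nelder-Mead")

print(fit_gradient.x, fit_simplex.x)   # both approach (2.0, 1.3)
```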

There is a variety of general purpose unconstrained optimization methods that can be used to estimate unknown parameters. These methods are broadly classified into two categories: direct search methods and gradient methods (Edgar and Himmelblau, 1988; Gill et al., 1981; Kowalik and Osborne, 1968; Sargent, 1980; Reklaitis, 1983; Scales, 1985). [Pg.67]

Quite often the direction determined by the Gauss-Newton method, or any other gradient method for that matter, points toward the optimum, but the length of the suggested parameter increment may be too large. As a result, the value of the objective function at the new parameter estimates can actually be higher than its value at the previous iteration. [Pg.139]
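
A standard safeguard for this overshoot is to keep the Gauss-Newton direction but shorten the increment, e.g. by step halving, until the objective actually decreases. A minimal sketch under those assumptions:

```python
import numpy as np

def gauss_newton_step(J, r):
    """Gauss-Newton increment dp, solving (J^T J) dp = -J^T r for
    residual vector r and Jacobian J = dr/dp."""
    return np.linalg.solve(J.T @ J, -J.T @ r)

def damped_update(objective, p, dp, max_halvings=10):
    """Keep the Gauss-Newton direction but halve its length until the
    objective actually decreases; if it never does, keep the previous
    parameter estimates."""
    f0 = objective(p)
    mu = 1.0
    for _ in range(max_halvings):
        trial = p + mu * dp
        if objective(trial) < f0:
            return trial
        mu *= 0.5
    return p
```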

Bard, Y., "Comparison of Gradient Methods for the Solution of Nonlinear Parameter Estimation Problems", SIAM J. Numer. Anal., 7, 157-186 (1970). [Pg.391]

The nonlinear programming problem, based on the objective function, model equations (b)-(g), and the inequality constraints, was solved using the generalized reduced gradient method presented in Chapter 8. See Setalvad and coworkers (1989) for details on the parameter values used in the optimization calculations, the results of which are presented here. [Pg.504]

The PES in the vicinity of the IRC is approximated by an (N - 1)-dimensional parabolic valley, whose parameters are determined by using the gradient method. Specific numerical schemes that take into account p previous steps to determine the (p + 1)th step render the Euler method stable and allow one to optimize the integration step in Eq. (8.5) [Schmidt et al., 1985]. Once the IRC is found, the changes of the transverse normal vibration frequencies along this reaction path are represented as ... [Pg.266]
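
A sketch of such a multistep stabilization: following the steepest-descent (IRC) equation with a two-step Adams-Bashforth update, i.e. using p = 1 previous step. The mass-weighted form dx/ds = -g/|g| is assumed here for Eq. (8.5), which is not reproduced in the excerpt.

```python
import numpy as np

def follow_irc(grad, x0, ds=0.05, steps=200):
    """Integrate the steepest-descent (IRC) equation
        dx/ds = -g(x) / |g(x)|
    with a two-step Adams-Bashforth scheme: one previous step (p = 1)
    is kept to stabilize the plain Euler method."""
    def f(x):
        g = grad(x)
        return -g / np.linalg.norm(g)

    path = [np.asarray(x0, dtype=float)]
    f_prev = f(path[0])
    x = path[0] + ds * f_prev          # Euler bootstrap for the first step
    path.append(x)
    for _ in range(steps - 1):
        f_cur = f(x)
        x = x + ds * (1.5 * f_cur - 0.5 * f_prev)  # AB2 update
        f_prev = f_cur
        path.append(x)
    return np.array(path)
```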

From the Optimize tool, select Full MM3(96) parameters from the Method option and 1.0 (the default) from the Gradient option. [Pg.298]

The optimization can be carried out by several methods of linear and nonlinear regression. The mathematical method must be chosen to suit the evaluation of the applied objective functions. The most widely applied methods of nonlinear regression fall into two categories: methods that use partial derivatives of the objective function with respect to the model parameters, and methods that do not. The most widely employed non-derivative methods are zero-order, such as the methods of direct search and the Simplex (Himmelblau, 1972). The most widely used derivative methods are first-order, such as the method of indirect search, Gauss-Seidel or Newton, the gradient method, and the Marquardt method. [Pg.212]

In addition to the quantum approaches mentioned above, classical optimal control theories based on classical mechanics have also been developed [3-6]. These methods control certain classical parameters of the system, such as the average nuclear coordinates and momenta. The optimal laser field is given as an average of particular classical quantities over the set of trajectories. The system of equations is solved iteratively using the gradient method. The classical OCT deals only with classical trajectories and thus incurs much lower computational costs than the quantum OCT. However, the effects of phase are not treated properly, and quantum mechanical states cannot be controlled appropriately. For instance, the selective excitation of coupled states cannot be controlled via the classical OCT, and the spectrum of the controlling field does not contain the peaks that arise from one- and multiphoton transitions between quantum discrete states. [Pg.120]

We turn now to the problem of optimizing the non-linear parameters in a wavefunction. As mentioned in the introduction, non-derivative methods of optimization are traditionally used for non-linear parameters (such as orbital exponents or nuclear positions). However, if we wish to use a gradient method, we must be able to obtain the required derivatives, subject to the constraints on the non-linear parameters and also subject to the condition that the constraints on the linear parameters continue to be satisfied during the variation of the non-linear parameters. In the usual closed-shell case, Fletcher [5] showed how the linear constraint restriction could be incorporated, provided that one starts from a minimum in the linear parameters. Assuming for the moment no particular constraints on the non-linear variables, then starting from a linear minimum it is easy to see that ... [Pg.53]

Various more-or-less efficient optimization strategies have been developed [46, 47]; they can be classified into direct search methods and gradient methods. The direct search methods, like those of Powell [48], of Rosenbrock and Storey [49], and of Nelder and Mead ("Simplex") [50], start from initial guesses and vary the parameter values individually or in combination, thereby searching for the direction toward the minimum SSR. [Pg.316]

The gradient methods, like those of Newton, Gauss-Newton, Fletcher, and Levenberg-Marquardt, use the vector of derivatives of the SSR with respect to the parameters to determine the direction in which the SSR decreases most rapidly, the steepest-descent direction. [Pg.316]
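
The Levenberg-Marquardt member of this family makes the relationship explicit: its damping parameter interpolates between a short steepest-descent step and the full Gauss-Newton step. A minimal sketch of the increment (for residual vector r, the gradient of the SSR is 2 J^T r, so -J^T r points along the steepest-descent direction):

```python
import numpy as np

def lm_step(J, r, lam):
    """Levenberg-Marquardt increment dp, solving
        (J^T J + lam * I) dp = -J^T r.
    Large lam gives a short step along the steepest-descent direction
    -J^T r; lam -> 0 recovers the Gauss-Newton step. In practice lam is
    raised when a step fails and lowered when it succeeds."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ r)
```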

