
Variable metric optimization method


As a rule, on the order of N² independent data points are required to solve a harmonic function with N variables numerically. Because a gradient is a vector N long, the best one can hope for in a gradient-based minimizer is to converge in N steps. However, if one can exploit second-derivative information, an optimization can converge in a single step, because the matrix of second derivatives is N x N. This is the principle behind variable metric optimization algorithms such as the Newton-Raphson method. [Pg.5]
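To make the counting argument concrete, here is a minimal sketch (in Python with NumPy; the quadratic form, starting point, and all names are illustrative, not from the source) of why an exact Newton-Raphson step reaches the minimum of a harmonic function in one iteration:

```python
import numpy as np

# Harmonic model: f(x) = 1/2 x^T A x - b^T x with A symmetric positive
# definite, so the gradient is g(x) = A x - b and the Hessian is the
# constant matrix A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad(x):
    return A @ x - b

x0 = np.array([5.0, 5.0])             # arbitrary starting point
step = np.linalg.solve(A, -grad(x0))  # Newton-Raphson: solve H s = -g
x1 = x0 + step

print(np.allclose(grad(x1), 0.0))     # True: the minimum in one step
```

A gradient-only method sees just N numbers per evaluation, so it needs on the order of N steps to gather the same N² pieces of information that the Hessian supplies at once.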

In simple relaxation (the fixed approximate Hessian method), the step does not depend on the iteration history. More sophisticated optimization techniques use information gathered during previous steps to improve the estimate of the minimizer, usually by invoking a quadratic model of the energy surface. These methods can be divided into two classes: variable metric methods and interpolation methods. [Pg.2336]
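For reference, the quadratic model invoked by these methods has the standard textbook form (our notation, not quoted from the source):

```latex
E(\mathbf{x}) \approx E(\mathbf{x}_0)
  + \mathbf{g}^{T}(\mathbf{x}-\mathbf{x}_0)
  + \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_0)^{T}\mathbf{H}\,(\mathbf{x}-\mathbf{x}_0),
\qquad
\mathbf{x}_{\min} = \mathbf{x}_0 - \mathbf{H}^{-1}\mathbf{g},
```

where g is the gradient and H the (exact or approximate) Hessian at x0; variable metric methods differ from simple relaxation precisely in how they refine H from step to step.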

Finding the minimum of the hybrid energy function is very complex. As in the protein folding problem, the number of degrees of freedom is far too large to allow a complete systematic search in all variables; systematic search methods need to reduce the problem to a few degrees of freedom (see, e.g., Ref. 30). Conformations of the molecule that satisfy the experimental bounds are therefore usually calculated with metric matrix distance geometry methods followed by optimization, or by optimization methods alone. [Pg.257]
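The metric matrix embedding step mentioned above can be sketched compactly (a generic classical distance geometry routine in Python/NumPy, assuming a complete distance matrix; the function name and defaults are ours, not from the cited work):

```python
import numpy as np

def metric_matrix_embed(D, dim=3):
    """Classical metric matrix distance geometry embedding.

    Given a complete matrix D of pairwise distances (in practice chosen
    within the experimental bounds), double-center the squared distances
    to obtain the Gram ("metric") matrix G, then build coordinates from
    its leading eigenvectors.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering projector
    G = -0.5 * J @ (D ** 2) @ J                # Gram (metric) matrix
    w, V = np.linalg.eigh(G)                   # eigenvalues, ascending
    w, V = w[::-1][:dim], V[:, ::-1][:, :dim]  # keep the `dim` largest
    return V * np.sqrt(np.maximum(w, 0.0))     # n x dim coordinates
```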

Owing to the constraints, no direct solution exists, and we must use iterative methods to obtain the solution. It is possible to use bound-constrained versions of optimization algorithms such as conjugate gradients or limited-memory variable metric methods (Schwartz and Polak, 1997; Thiebaut, 2002), but multiplicative methods have also been derived to enforce non-negativity and deserve particular mention because they are widely used: RLA (Richardson, 1972; Lucy, 1974) for Poissonian noise and ISRA (Daube-Witherspoon and Muehllehner, 1986) for Gaussian noise. [Pg.405]
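As an illustration of such a multiplicative scheme, here is a minimal sketch of the RLA update in matrix form, assuming a linear model y ≈ Hx with non-negative H and y (the function name, iteration count, and safeguard eps are ours):

```python
import numpy as np

def rla(H, y, n_iter=100, eps=1e-12):
    """Richardson-Lucy multiplicative updates for y ~ H x, Poisson noise.

    H is a non-negative (m, n) observation matrix and y the non-negative
    data. Each update multiplies the iterate by a non-negative factor,
    so a positive start can never turn negative: the multiplicative
    scheme enforces the constraint with no explicit projection.
    """
    x = np.ones(H.shape[1])
    norm = H.T @ np.ones(H.shape[0])        # column sums, H^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(H @ x, eps)  # data / current model
        x *= (H.T @ ratio) / np.maximum(norm, eps)
    return x
```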

Goldfarb, D., "Factorized Variable Metric Methods for Unconstrained Optimization", Mathematics of Computation, 30 (136), 796-811 (1976). [Pg.395]

Dixon, L. C. W. and L. James, "On Stochastic Variable Metric Methods", in Analysis and Optimization of Stochastic Systems, O. L. R. Jacobs et al., eds., Academic Press, London (1980). [Pg.210]

Powell, M. J. D., "The convergence of variable metric methods for nonlinearly constrained optimization calculations", in Nonlinear Programming 3 (Mangasarian, O. L., Meyer, R., Robinson, S., eds.), Academic Press, New York, 1978. [Pg.256]

However, in a quantum chemical context there is often one overwhelming difficulty common to both Newton-like and variable-metric methods: the difficulty of storing the Hessian or an approximation to its inverse. This problem is not so acute if one is using such a method to optimize orbital exponents or internuclear distances, but in optimizing linear coefficients in LCAO-type calculations it can soon become impossible. In modern calculations a basis of, say, fifty AOs used to construct ten occupied molecular spin-orbitals would be considered a modest size, and that would, even in a closed-shell case, give a Hessian of side 500. In a Newton-like method the problem of inverting a matrix of such a size is a considerable... [Pg.57]
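The matrix size quoted above follows from a simple count (a back-of-the-envelope check of ours, not from the source):

```python
# Ten occupied spin-orbitals, each expanded in fifty AOs, give
# 10 * 50 = 500 linear coefficients, so the full Hessian has
# 500 ** 2 = 250,000 elements and a direct inversion scales
# roughly as O(500 ** 3) operations.
n_params = 10 * 50
print(n_params, n_params ** 2)  # 500 250000
```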

In summary, therefore, there is too little work with Newton-like methods to make any assertion about their utility in quantum chemistry, but there is enough work with variable-metric methods to make it possible to assert with some confidence that they are worth very serious consideration by any worker wishing to optimize orbital exponents or nuclear positions in a wavefunction. [Pg.58]

Non-linear programming is a fast-growing subject: much research is being done, and many new algorithms appear every year. It seems to the Reporters that the current area of major interest in the field is that of variable-metric methods, particularly those not needing accurate linear searches. Unfortunately, from a quantum chemical point of view, such methods are likely to be of use only in exponent and nuclear-position optimization, and in this context, as we have seen, Newton-like methods are also worth serious consideration. [Pg.59]

The Quasi-Newton, or variable metric, methods of optimization... [Pg.252]

To obtain equilibrium geometries for small molecules and clusters we have implemented a variable metric method which is based on a quasi-Newton scheme and is widely used in optimization theory (Lipkowitz and Boyd 1993; Schlegel 1987). In this... [Pg.155]

A brief description of optimization methods will be given (see also refs. 41-44). In contrast to other fields, in computational chemistry great effort is devoted to reducing the number of function evaluations, since that part of the calculation is by far the most time-consuming. Because first derivatives are now available for almost all ab initio methods, the discussion will focus on methods that use them. The most efficient methods, called variable metric or quasi-Newton methods, require an approximate matrix of second derivatives that can be updated with new information during the course of the optimization. The more common methods differ in the equations used to update this second-derivative matrix (also called the Hessian matrix). [Pg.44]
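As one concrete instance of such an update, the widely used BFGS formula refreshes the approximate Hessian B from the most recent step s and gradient change y (a standard formula; the NumPy sketch is ours and assumes the curvature condition s·y > 0 holds, so the update stays positive definite):

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of the approximate Hessian B.

    s = x_new - x_old is the last step, and y = g_new - g_old is the
    corresponding change in the gradient; s @ y > 0 is assumed.
    """
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```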

It is important to perform local optimizations as efficiently as possible, because they are a time-consuming part of minima hopping. In our implementation, we perform local minimization in two steps. The first optimization uses the limited-memory L-BFGS method [64, 65] with a loose convergence threshold. The optimization is then refined in a second step that uses Davidon's optimally conditioned variable metric method [66] and a more stringent convergence criterion. [Pg.28]
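A rough sketch of this two-stage strategy using generic SciPy optimizers (SciPy does not ship Davidon's optimally conditioned method, so plain BFGS stands in for the second stage; the tolerances are illustrative, not those of the cited implementation):

```python
import numpy as np
from scipy.optimize import minimize

def two_stage_minimize(f, x0):
    # Stage 1: limited-memory L-BFGS with a loose gradient tolerance.
    rough = minimize(f, x0, method="L-BFGS-B", options={"gtol": 1e-3})
    # Stage 2: refine with a full quasi-Newton (BFGS) run and a
    # stringent tolerance, starting from the rough minimizer.
    return minimize(f, rough.x, method="BFGS", options={"gtol": 1e-8})

# Example: the Rosenbrock function.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(two_stage_minimize(rosen, np.array([-1.2, 1.0])).x)
```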

An optimized development therefore takes these metrics as criteria for optimization and considers both the expected safety benefit and the possible negative consequences. In order to test false-positive rates or to calculate the NNT, adequate testing methods with respect to real traffic and its variability are needed [9, 10]. [Pg.22]

Another reason for the success of CIO is that these methods rest on very few assumptions (or none at all) about the problem at hand. These methods are, in fact, black-box, so that virtually any input/output system (i.e., a system where the inputs, the problem design variables, are mapped to one or more outputs, the problem metrics to minimize or maximize, or the fitness in evolutionary jargon) can be optimized by using them. This property is especially useful, for example, in many engineering, networking, or logistics problems where an explicit, closed-form mapping between inputs and outputs is not available but is instead provided by a domain-specific simulator. [Pg.41]
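A minimal sketch of this black-box principle, using a (1+1) evolution strategy in Python (all names and parameters are illustrative; any simulator call can replace the toy fitness function):

```python
import numpy as np

def one_plus_one_es(black_box, x0, sigma=0.5, n_iter=1000, seed=0):
    """A (1+1) evolution strategy, about the simplest black-box optimizer.

    `black_box` can be any input -> output mapping, e.g. a call into a
    domain-specific simulator; no gradients or closed form are needed.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = black_box(x)
    for _ in range(n_iter):
        cand = x + sigma * rng.standard_normal(x.shape)  # mutate
        fc = black_box(cand)
        if fc < fx:                                      # select if better
            x, fx = cand, fc
    return x, fx

# Any simulator call could replace this toy fitness function.
print(one_plus_one_es(lambda v: float(np.sum((v - 3.0) ** 2)), [0.0, 0.0]))
```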

