Big Chemical Encyclopedia


Newton efficiency

Ponder JW, Richards FM (1987) An Efficient Newton-Like Method for Molecular Mechanics Energy Minimization of Large Molecules. J. Comput. Chem. 8(7):1016-1024. (See http://dasher.wustl.edu/tinker/) [Pg.296]

[OS98] Olsson H. and Soderlind G. (1998) Stage value predictors and efficient Newton iterations in implicit Runge-Kutta methods. SIAM J. Sci. Stat. Comput., to appear. [Pg.284]

Although it was originally developed for locating transition states, the EF algorithm is also efficient for minimization and usually performs as well as or better than the standard quasi-Newton algorithm. In this case, a single shift parameter is used, and the method is essentially identical to the augmented Hessian method. [Pg.2352]

D. Xie and T. Schlick. Efficient implementation of the truncated Newton method for large-scale chemistry applications. SIAM J. Opt., 1997. In press. [Pg.260]

The root-finding method used up to this point was chosen to illustrate iterative solution, not as an efficient method of solving the problem at hand. In fact, a more efficient method of root finding has been known for centuries and can be traced back to Isaac Newton (1642-1727) (Fig. 1-2). [Pg.7]
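The Newton iteration referred to above can be sketched in a few lines. This is a minimal illustration, not taken from the text: the example function, derivative, starting point, and tolerance are all hypothetical choices.

```python
def newton_root(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f via Newton's iteration: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# Example: solve x**2 - 2 = 0 starting from x = 1; near the root each
# iteration roughly doubles the number of correct digits (quadratic convergence).
root = newton_root(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Compared with simple fixed-point or bisection schemes, the quadratic convergence of this iteration is precisely the efficiency gain the excerpt alludes to.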

The synchronous transit method is combined with quasi-Newton methods to find transition states. Quasi-Newton methods are very robust and efficient in finding energy minima, but based solely on local information there is no unique way of moving uphill from either reactants or products to reach a specific transition state, since all directions away from a minimum go uphill. [Pg.309]

The steepest descent method is quite old and utilizes the intuitive concept of moving in the direction where the objective function changes the most. However, it is clearly not as efficient as the other three. Conjugate gradient utilizes only first-derivative information, as does steepest descent, but generates improved search directions. Newton's method requires second-derivative information but is very efficient, while quasi-Newton retains most of the benefits of Newton's method but utilizes only first-derivative information. All of these techniques are also used with constrained optimization. [Pg.744]
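The efficiency gap described above is easy to see on an ill-conditioned quadratic, where the Hessian is known exactly. The sketch below is a hypothetical illustration (the quadratic, step length, and iteration counts are assumptions, not from the text): steepest descent creeps toward the minimum, while a Newton step using the Hessian lands on it at once.

```python
def grad(x, a=1.0, b=10.0):
    # Gradient of f(x) = 0.5*(a*x0**2 + b*x1**2), an ill-conditioned quadratic
    # (condition number b/a = 10) with its minimum at the origin.
    return [a * x[0], b * x[1]]

def steepest_descent(x, steps, lr=0.09):
    # Fixed-step steepest descent: move opposite the gradient each iteration.
    for _ in range(steps):
        g = grad(x)
        x = [x[0] - lr * g[0], x[1] - lr * g[1]]
    return x

def newton_step(x, a=1.0, b=10.0):
    # Newton's method solves H*dx = -g with the Hessian H = diag(a, b);
    # for a quadratic objective this reaches the minimizer in one step.
    g = grad(x, a, b)
    return [x[0] - g[0] / a, x[1] - g[1] / b]

start = [3.0, 2.0]
one_newton = newton_step(start)          # lands exactly at [0.0, 0.0]
hundred_sd = steepest_descent(start, 100)  # still a small distance away
```

The comparison also hints at why quasi-Newton methods are attractive: they approximate the curvature information that makes the Newton step so effective, using only the gradients that steepest descent already computes.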

Summarizing, the efficiency of Newton-Raphson based optimizations depends on the following factors ... [Pg.327]

Xie D, Tropsha A, Schlick T. An efficient projection protocol for chemical databases: singular value decomposition combined with truncated-Newton minimization. J Chem Inf Comput Sci 2000;40:167-77. [Pg.373]

As seen in Chapter 2, a suitable measure of the discrepancy between a model and a set of data is the objective function, S(k), and hence the parameter values are obtained by minimizing this function. Therefore, the estimation of the parameters can be viewed as an optimization problem whereby any of the available general-purpose optimization methods can be utilized. In particular, it was found that the Gauss-Newton method is the most efficient method for estimating parameters in nonlinear models (Bard, 1970). As we strongly believe that this is indeed the best method to use for nonlinear regression problems, the Gauss-Newton method is presented in detail in this chapter. It is assumed that the parameters are free to take any values. [Pg.49]
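A minimal sketch of the Gauss-Newton idea for a one-parameter nonlinear model follows. The exponential-decay model, the data, and the starting guess are hypothetical illustrations, not the chapter's example: the point is only the update delta = (J^T J)^(-1) J^T r, which collapses to a scalar division here.

```python
import math

def gauss_newton_rate(t, y, k0, iters=10):
    """Estimate k in the hypothetical model y = exp(-k*t) by Gauss-Newton,
    minimizing the objective S(k) = sum_i (y_i - exp(-k*t_i))**2."""
    k = k0
    for _ in range(iters):
        # Residuals r_i = y_i - f(t_i; k) and model sensitivities
        # J_i = df/dk = -t_i * exp(-k*t_i).
        r = [yi - math.exp(-k * ti) for ti, yi in zip(t, y)]
        J = [-ti * math.exp(-k * ti) for ti in t]
        # Gauss-Newton update: solve (J^T J) * delta = J^T r (scalar here).
        JTJ = sum(j * j for j in J)
        JTr = sum(j * ri for j, ri in zip(J, r))
        k += JTr / JTJ
    return k

t = [0.5, 1.0, 1.5, 2.0]
y = [math.exp(-0.8 * ti) for ti in t]  # noise-free data generated with k = 0.8
k_est = gauss_newton_rate(t, y, k0=0.3)
```

Because the Hessian of S(k) is approximated by J^T J using only first derivatives of the model, each iteration is cheap, which is a large part of why Gauss-Newton is so efficient for least-squares parameter estimation.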

If we have very little information about the parameters, direct search methods, like the LJ optimization technique presented in Chapter 5, present an excellent way to generate very good initial estimates for the Gauss-Newton method. Actually, for algebraic equation models, direct search methods can be used to determine the optimum parameter estimates quite efficiently. However, if estimates of the uncertainty in the parameters are required, use of the Gauss-Newton method is strongly recommended, even if it is only for a couple of iterations. [Pg.139]

In this section we first present an efficient step-size policy for differential equation systems, and then two approaches to enlarge the region of convergence of the Gauss-Newton method: one through the use of the Information Index, and the other through a two-step procedure that involves direct search optimization. [Pg.150]


See other pages where Newton efficiency is mentioned: [Pg.187]    [Pg.107]    [Pg.70]    [Pg.79]    [Pg.128]    [Pg.496]    [Pg.1383]    [Pg.2338]    [Pg.2341]    [Pg.73]    [Pg.239]    [Pg.351]    [Pg.360]    [Pg.61]    [Pg.286]    [Pg.80]    [Pg.70]    [Pg.71]    [Pg.61]    [Pg.66]    [Pg.67]    [Pg.49]    [Pg.81]    [Pg.124]    [Pg.431]    [Pg.74]    [Pg.1084]    [Pg.114]    [Pg.300]    [Pg.405]    [Pg.79]    [Pg.681]    [Pg.221]    [Pg.541]    [Pg.297]
See also in sourсe #XX -- [ Pg.82 , Pg.83 , Pg.84 , Pg.89 , Pg.90 , Pg.94 ]




© 2024 chempedia.info