Big Chemical Encyclopedia


Fletcher optimization

Convergence behaviour of step-restricted augmented Hessian calculations (Fletcher optimization). [Pg.9]

Fletcher R 1981 Practical Methods of Optimization Vol. 1: Unconstrained Optimization (New York: Wiley)... [Pg.2355]

Within some programs, the ROMPn methods do not support analytic gradients. Thus, the fastest way to run the calculation is as a single point energy calculation with a geometry from another method. If a geometry optimization must be done at this level of theory, a non-gradient-based method such as the Fletcher-Powell optimization should be used. [Pg.229]

Davidon-Fletcher-Powell (DFP): a geometry optimization algorithm. De Novo algorithms: algorithms that apply artificial intelligence or rational techniques to solving chemical problems. Density functional theory (DFT): a computational method based on the total electron density... [Pg.362]
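The DFP entry above names the algorithm but not its defining step. As a minimal sketch (not any particular program's implementation), the textbook rank-two DFP formula updates an inverse-Hessian estimate H from the step s = x_{k+1} - x_k and the gradient change y = g_{k+1} - g_k:

```python
import numpy as np

def dfp_update(H, s, y):
    """Davidon-Fletcher-Powell rank-two update of the inverse-Hessian estimate H.

    s : step taken,           s = x_{k+1} - x_k
    y : gradient difference,  y = g_{k+1} - g_k

    Textbook formula:
        H+ = H + s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)
    The update preserves symmetry and enforces the secant condition H+ y = s.
    """
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
```

Starting from H = I and applying this update each iteration is what makes DFP a quasi-Newton method: curvature information accumulates from gradients alone, with no analytic Hessian required.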

Fletcher, R. Practical Methods of Optimization, John Wiley & Sons, New York, 1980. [Pg.57]

Fletcher, R. Practical Methods of Optimization, Wiley, New York (1987). [Pg.422]

A more detailed discussion of such methods can be found in specialist books dealing with optimization (Fletcher, 1981; Powell, 1985). [Pg.239]

Figure 16 Root-mean-squared error progression plot for Fletcher nonlinear optimization and back-propagation algorithms during training.
Let ‖·‖ denote the Euclidean norm and define y_k = g_{k+1} − g_k. Table I provides a chronological list of some choices for the CG update parameter. If the objective function is a strongly convex quadratic, then in theory, with an exact line search, all seven choices for the update parameter in Table I are equivalent. For a nonquadratic objective functional J (the ordinary situation in optimal control calculations), each choice of update parameter leads to different performance. A detailed discussion of the various CG methods is beyond the scope of this chapter; the reader is referred to Ref. [194] for a survey of CG methods. Here we only mention briefly that despite the strong convergence theory that has been developed for the Fletcher-Reeves [195],... [Pg.83]
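The excerpt above notes that nonlinear CG variants differ only in the update parameter β. A minimal sketch of the Fletcher-Reeves choice, β_FR = ‖g_{k+1}‖² / ‖g_k‖², paired with a simple backtracking (Armijo) line search (the line-search constants and the steepest-descent restart safeguard here are illustrative choices, not from the excerpt):

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=200):
    """Nonlinear conjugate gradient with the Fletcher-Reeves beta.

    beta_FR = ||g_{k+1}||^2 / ||g_k||^2  (Euclidean norm, as in the text).
    Uses a backtracking Armijo line search; restarts with steepest descent
    whenever the new direction fails to be a descent direction.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search: halve the step until Armijo holds.
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves update parameter
        d = -g_new + beta * d
        if g_new @ d >= 0:                    # safeguard: restart if not descent
            d = -g_new
        x, g = x_new, g_new
    return x
```

On a strongly convex quadratic with an exact line search this coincides with linear CG, consistent with the equivalence of the seven update parameters mentioned above; on nonquadratic objectives the β choice changes the iterates.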

Abadie, J. and J. Carpentier. Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints. In Optimization, R. Fletcher, ed. Academic Press, New York (1969), pp. 37-47. [Pg.328]

Do at least one simplex optimization before doing a Fletcher-Powell optimization. [Pg.81]
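The advice above (a derivative-free simplex pass before a quasi-Newton refinement) can be sketched with SciPy's general-purpose minimizer. The Rosenbrock function here is a hypothetical stand-in for a molecular energy surface, and BFGS stands in for Fletcher-Powell as its modern quasi-Newton descendant:

```python
import numpy as np
from scipy.optimize import minimize, rosen

# Hypothetical stand-in for an energy surface: the Rosenbrock function,
# whose minimum lies at (1, 1).
x0 = np.array([-1.2, 1.0])

# Stage 1: derivative-free simplex (Nelder-Mead) to reach the right basin.
coarse = minimize(rosen, x0, method="Nelder-Mead")

# Stage 2: quasi-Newton refinement started from the simplex result
# (BFGS here; the DFP/Fletcher-Powell family's successor in SciPy).
refined = minimize(rosen, coarse.x, method="BFGS")
```

The two-stage pattern mirrors the recommendation: the simplex step is robust far from the minimum, and the quasi-Newton step converges quickly once started from a reasonable geometry.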


See other pages where Fletcher optimization is mentioned: [Pg.70]    [Pg.70]    [Pg.71]    [Pg.363]    [Pg.363]    [Pg.741]    [Pg.321]    [Pg.346]    [Pg.4]    [Pg.23]    [Pg.27]    [Pg.374]    [Pg.184]    [Pg.60]    [Pg.64]    [Pg.133]    [Pg.145]    [Pg.4]    [Pg.447]    [Pg.81]    [Pg.82]   
See also in source #XX -- [Pg.9]







Davidon-Fletcher-Powell update for optimization

Fletcher

Fletcher nonlinear optimization

© 2024 chempedia.info