
Convergent search

Figure 7.3 Convergent search that concentrates future searches on currently most productive regions...
The advantage of a conjugate gradient minimizer is that it uses the minimization history to calculate the search direction, and converges faster than the steepest descent technique. It also contains a scaling factor, b, for determining step size. This makes the step sizes optimal when compared to the steepest descent technique. [Pg.59]
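The contrast can be made concrete with a small numerical sketch. The quadratic test function, the fixed scaling factor, and the Fletcher-Reeves update used below are illustrative assumptions, not taken from the source:

```python
import numpy as np

def grad(x, A, b_vec):
    """Gradient of the quadratic f(x) = 0.5 x^T A x - b^T x."""
    return A @ x - b_vec

# Assumed 2-D quadratic test problem (not from the source).
A = np.array([[3.0, 0.0], [0.0, 30.0]])
b_vec = np.array([1.0, 1.0])
x_sd = np.array([5.0, 5.0])
x_cg = np.array([5.0, 5.0])

# Steepest descent with a fixed scaling factor b for the step size.
b = 0.03
for _ in range(50):
    x_sd = x_sd - b * grad(x_sd, A, b_vec)

# Conjugate gradient: the new search direction reuses the previous one
# (the minimization history) through the Fletcher-Reeves factor beta.
g = grad(x_cg, A, b_vec)
d = -g
for _ in range(50):
    alpha = (g @ g) / (d @ A @ d)      # optimal step size on a quadratic
    x_cg = x_cg + alpha * d
    g_new = grad(x_cg, A, b_vec)
    if np.linalg.norm(g_new) < 1e-12:
        break
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves update
    d = -g_new + beta * d
    g = g_new

print("steepest descent:", x_sd)
print("conjugate gradient:", x_cg)
```

On this example the conjugate gradient iteration reaches the minimizer of the quadratic in two steps, while the fixed-step steepest descent is still approaching it after fifty.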

Finally, to ensure convergence of this algorithm from poor starting points, a step size α is chosen along the search direction so that the point at the next iteration, z_{k+1} = z_k + α d_k, is closer to the solution of the... [Pg.486]
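As a generic illustration of this idea (not the algorithm of the source), a backtracking line search shortens a trial step until the new iterate z + αd gives sufficient decrease; the test function and the constants below are assumptions:

```python
import numpy as np

def backtracking_step(f, grad_f, z, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the trial step alpha until z + alpha*d satisfies the Armijo
    sufficient-decrease condition; constants are illustrative only."""
    alpha = alpha0
    f0, g0 = f(z), grad_f(z)
    while f(z + alpha * d) > f0 + c * alpha * g0 @ d:
        alpha *= rho
    return z + alpha * d

# Usage on an assumed test function.
f = lambda z: (z[0] - 1.0) ** 2 + 10.0 * (z[1] + 2.0) ** 2
grad_f = lambda z: np.array([2.0 * (z[0] - 1.0), 20.0 * (z[1] + 2.0)])
z = np.array([4.0, 3.0])
d = -grad_f(z)              # steepest-descent direction as the search direction
print(backtracking_step(f, grad_f, z, d))
```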

We will also use the results of the frequency job in the IRC calculation we'll do next. This job will enable us to verify that this transition structure connects the two minima that we think it does, and we use the keyword IRC to request it. By default, the calculation takes 6 steps in each direction, where each step corresponds to a geometry optimization. However, the calculation will stop searching in a given direction once its convergence criteria are met, and an IRC calculation does not necessarily step all the way down to the minimum. [Pg.176]
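A minimal sketch of preparing such an input file follows; only the IRC keyword itself comes from the text above, while the checkpoint name, method, basis set, and option settings are illustrative assumptions:

```python
# Sketch of writing an IRC input file. The %chk name, method, basis set, and
# IRC options shown are assumptions; only the IRC keyword itself is taken
# from the text above.
irc_input = """%chk=ts.chk
# HF/6-31G(d) IRC(MaxPoints=6,RCFC)

IRC following from the optimized transition structure, reading the force
constants computed in the preceding frequency job

0 1
"""
# The molecule specification (charge/multiplicity line above plus the atomic
# coordinates) would be completed from the transition-structure geometry.

with open("ts_irc.gjf", "w") as fh:
    fh.write(irc_input)
```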

There are different variants of the conjugate gradient method, each of which corresponds to a different choice of the update parameter C. Some of these different methods and their convergence properties are discussed in Appendix D. The time has been discretized into N time steps (t_i = i × δt, where i = 0, 1, ..., N - 1) and the parameter space that is being searched in order to maximize the value of the objective functional is composed of the values of the electric field strength in each of the time intervals. [Pg.53]
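The parameterization can be sketched as follows; the objective functional and its gradient are placeholders (assumptions) standing in for the quantities that a real optimal-control calculation would supply, and a plain ascent update is used rather than any particular conjugate gradient variant:

```python
import numpy as np

# The control field is discretized into N values eps[i] at times t_i = i * dt;
# these values form the parameter space that is searched.
N, dt = 200, 0.05
t = np.arange(N) * dt
eps = np.zeros(N)                      # initial guess for the field values

def grad_J(eps):
    """Placeholder gradient of the objective functional J with respect to the
    field values; a real calculation would obtain this from propagation of
    the quantum system (the target field sin(t) is an assumption)."""
    return -2.0 * (eps - np.sin(t))

for _ in range(200):                   # simple ascent on the field values
    g = grad_J(eps)
    if np.linalg.norm(g) < 1e-8:
        break
    eps = eps + 0.1 * g                # illustrative fixed step size

print("max deviation from target:", np.abs(eps - np.sin(t)).max())
```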

Let || || denote the Euclidean norm and define y_k = g_{k+1} - g_k. Table I provides a chronological list of some choices for the CG update parameter. If the objective function is a strongly convex quadratic, then in theory, with an exact line search, all seven choices for the update parameter in Table I are equivalent. For a nonquadratic objective functional J (the ordinary situation in optimal control calculations), each choice for the update parameter leads to a different performance. A detailed discussion of the various CG methods is beyond the scope of this chapter. The reader is referred to Ref. [194] for a survey of CG methods. Here we only mention briefly that despite the strong convergence theory that has been developed for the Fletcher-Reeves [195],... [Pg.83]
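For orientation, three of the classic choices for the update parameter can be written in terms of y_k = g_{k+1} - g_k. These are the standard textbook forms, shown here as a sketch rather than a reproduction of Table I:

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    """beta_FR = ||g_{k+1}||^2 / ||g_k||^2"""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    """beta_PR = g_{k+1}.y_k / ||g_k||^2, with y_k = g_{k+1} - g_k"""
    y = g_new - g_old
    return (g_new @ y) / (g_old @ g_old)

def beta_hestenes_stiefel(g_new, g_old, d_old):
    """beta_HS = g_{k+1}.y_k / d_k.y_k, with y_k = g_{k+1} - g_k"""
    y = g_new - g_old
    return (g_new @ y) / (d_old @ y)
```

For an exact line search on a strongly convex quadratic these formulas coincide; they differ only once the objective is nonquadratic or the line search is inexact.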

In spite of the good results obtained, we continue our search for simple auxiliary conditions directed at ensuring that the approximated matrix is positive and that its trace has the correct value. This search is mainly focused on improving the quality of the 2-RDM obtained in terms of the 1-RDM, which at the moment is the less precise procedure [46]. When this latter aim is fulfilled, we expect that the iterative solution of the 1-order CSchE will also be successful, although in this CSchE the information carried by the Hamiltonian only influences the result in an average way, which will probably retard the convergence. [Pg.73]
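As a generic illustration only (not the procedure discussed in the source), one simple way to impose the two auxiliary conditions mentioned, positivity and a fixed trace, on an approximate matrix is:

```python
import numpy as np

def purify(M, target_trace):
    """Generic illustration: make an approximate density-like matrix
    Hermitian and positive semidefinite, then rescale it to the required
    trace."""
    M = 0.5 * (M + M.conj().T)               # enforce Hermiticity
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)                # discard negative eigenvalues
    M = (V * w) @ V.conj().T                 # rebuild from the clipped spectrum
    tr = np.trace(M).real
    if tr <= 0.0:
        raise ValueError("matrix has no positive weight left to rescale")
    return M * (target_trace / tr)           # impose the correct trace
```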

Scales (1986) recommends the Polak-Ribiere version because it has slightly better convergence properties. Scales also gives an algorithm that can be used for both methods, which differ only in the formula for updating the search vector. [Pg.77]
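A sketch of this point is given below: the same loop serves both variants, and only the line computing the update parameter changes. The test problem, the line search, and the non-negativity restart on the Polak-Ribiere parameter are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import line_search      # standard Wolfe line search

def nonlinear_cg(f, grad_f, x0, variant="PR", max_iter=500, tol=1e-8):
    """Single driver for both variants; only the beta formula differs."""
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g
    for _ in range(max_iter):
        alpha = line_search(f, grad_f, x, d)[0]
        if alpha is None:                    # line search failed: fall back
            alpha = 1e-4
        x = x + alpha * d
        g_new = grad_f(x)
        if np.linalg.norm(g_new) < tol:
            break
        if variant == "FR":                                   # Fletcher-Reeves
            beta = (g_new @ g_new) / (g @ g)
        else:                                                 # Polak-Ribiere
            beta = max((g_new @ (g_new - g)) / (g @ g), 0.0)  # restart if negative
        d = -g_new + beta * d
        g = g_new
    return x

# Usage on an assumed smooth, nonquadratic test function.
f = lambda z: (z[0] - 2.0)**2 + 5.0*(z[1] + 1.0)**2 + 0.1*z[0]**4
f_grad = lambda z: np.array([2.0*(z[0] - 2.0) + 0.4*z[0]**3,
                             10.0*(z[1] + 1.0)])
print(nonlinear_cg(f, f_grad, [5.0, 5.0], variant="FR"))
```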

Direct search methods use only function evaluations. They search for the minimum of an objective function without calculating derivatives analytically or numerically. Direct methods are based upon heuristic rules which make no a priori assumptions about the objective function. They tend to have much poorer convergence rates than gradient methods when applied to smooth functions. Several authors claim that direct search methods are not as efficient and robust as the indirect or gradient search methods (Bard, 1974; Edgar and Himmelblau, 1988; Scales, 1986). However, in many instances direct search methods have proved to be robust and reliable, particularly for systems that exhibit local minima or have complex nonlinear constraints (Wang and Luus, 1978). [Pg.78]
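A minimal direct-search sketch (a simple compass/coordinate search, chosen here for illustration rather than any method named in the text) shows how a minimum can be approached using function evaluations alone:

```python
import numpy as np

def compass_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_evals=5000):
    """Probe +/- each coordinate direction using only function evaluations;
    shrink the step whenever no probe improves the current point."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = f(trial)
                evals += 1
                if f_trial < fx:          # accept the first improving probe
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink                # no improvement: refine the mesh
    return x, fx

# Usage on an assumed test function with a narrow curved valley.
rosen = lambda z: (1 - z[0])**2 + 100*(z[1] - z[0]**2)**2
x_min, f_min = compass_search(rosen, [-1.2, 1.0])
print(x_min, f_min)
```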

