Big Chemical Encyclopedia


Optimization steepest descent

The most popular optimization techniques are Newton-Raphson optimization, steepest ascent optimization, steepest descent optimization, simplex optimization, genetic algorithm optimization, and simulated annealing. Variable reduction and variable selection are also among the optimization techniques. [Pg.62]

The SCF wavefunctions from which the electron density is fitted were calculated by means of the GAUSSIAN-90 system of programs [18]. The program QMOLSIM [3] used in the computation of the MQSM allows optimization of the mutual orientation of the two systems studied in order to maximize their similarity by the common steepest-descent, Newton and quasi-Newton algorithms [19]. The DIIS procedure [20] has also been implemented for the steepest-descent optimizations in order to improve the performance of this method. The MQSM used in the optimization procedure are obtained from fitted densities, which speeds up the process. The exact MQSM were then computed at the molecular orientation found by this optimization procedure. [Pg.42]

The modelling of the multiple scattering requires input of all atomic positions, so the trial-and-error approach must be followed: one guesses reasonable models for the surface structure and tests them one by one until satisfactory agreement with experiment is obtained. For simple structures, and in cases where structural information is already known from other sources, this process is usually quite quick: only a few basic models may have to be checked, e.g. adsorption of an atomic layer in hollow, bridge or top sites at positions consistent with reasonable bond lengths. It is then relatively easy to refine the atomic positions within the best-fit model, resulting in a complete structural determination. The refinement is normally performed by some form of automated steepest-descent optimization, which allows many atomic positions to be adjusted simultaneously [21]. Computer codes are also available to accomplish this part of the analysis [25]. The trial-and-error search with refinement may take minutes to hours on current workstations or personal computers. [Pg.1770]

Both the simplex and steepest descent optimizations began at 70 °C and 180 s. From there, the simplex method moved toward conditions at higher temperature and lower residence time. Upon reaching the minimum residence time constraint, the simplex contracted, ultimately becoming a one-dimensional search until it reached the optimum of 99 °C and 30 s near the corner of the design space after 30 experiments. The SNOBFIT algorithm initially performed a randomized, space-filling... [Pg.86]

Δq asymptotically becomes −g/α, i.e., the steepest descent formula with a step length 1/α. The augmented Hessian method is closely related to eigenvector (mode) following, discussed in section B3.5.5.2. The main difference between rational function and trust radius optimizations is that, in the latter, the level shift is applied only if the calculated step exceeds a threshold, while in the former it is imposed smoothly and is automatically reduced to zero as convergence is approached. [Pg.2339]
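
The limiting behaviour described here can be sketched numerically: a level-shifted step Δq = −(H + αI)⁻¹g approaches the short steepest-descent step −g/α as the shift grows. The small Python snippet below is illustrative only; the model Hessian, gradient, and shift values are arbitrary assumptions, and the fixed shift α merely stands in for the level shift produced by the rational function or trust radius machinery.

import numpy as np

# Illustrative sketch: as the level shift a grows, the shifted step
# dq = -(H + a*I)^(-1) g tends to the steepest-descent step -g/a.
H = np.array([[4.0, 1.0], [1.0, 2.0]])   # assumed model Hessian
g = np.array([1.0, -0.5])                # assumed gradient

for a in (1.0, 10.0, 1000.0):
    dq = -np.linalg.solve(H + a * np.eye(2), g)
    print(f"shift a = {a:7.1f}   shifted step = {dq}   -g/a = {-g / a}")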

The advantage of a conjugate gradient minimizer is that it uses the minimization history to calculate the search direction, and converges faster than the steepest descent technique. It also contains a scaling factor, b, for determining the step size. This makes the step sizes optimal when compared to the steepest descent technique. [Pg.59]
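
A minimal sketch of such a minimizer is given below. It is not the HyperChem implementation; it assumes the Polak-Ribiere form of the scaling factor (called b above, beta below) and a simple backtracking line search, and uses an arbitrary elongated quadratic as the test function.

import numpy as np

def line_search(f, x, d, g, alpha=1.0, shrink=0.5, c=1e-4):
    """Backtracking (Armijo) line search for a step length along direction d."""
    for _ in range(60):
        if f(x + alpha * d) <= f(x) + c * alpha * np.dot(g, d):
            break
        alpha *= shrink
    return alpha

def conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=200):
    """Polak-Ribiere nonlinear conjugate gradient minimizer (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # the first step is plain steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, x, d, g)
        x = x + alpha * d
        g_new = grad(x)
        # scaling factor: mixes the previous search direction into the new one
        beta = max(0.0, float(np.dot(g_new, g_new - g) / np.dot(g, g)))
        d = -g_new + beta * d
        g = g_new
    return x

# Example: an elongated quadratic valley where plain steepest descent zig-zags
f = lambda v: v[0] ** 2 + 10.0 * v[1] ** 2
grad_f = lambda v: np.array([2.0 * v[0], 20.0 * v[1]])
print(conjugate_gradient(f, grad_f, [3.0, 1.0]))    # approaches [0, 0]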

Example: Compare the steps of a conjugate gradient minimization with the steepest descent method. A molecular system can reach a potential minimum after the second step if the first step proceeds from A to B. If the first step is too large, placing the system at D, the second step still places the system near the minimum because the optimizer remembers the penultimate step. [Pg.59]

Steepest descent is the simplest method of optimization. The direction of steepest descent, g, is just the negative of the gradient vector... [Pg.303]
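
As a generic illustration (not taken from the cited source), a steepest descent minimizer simply moves the coordinates a fixed fraction along the negative gradient until the gradient is essentially zero; the objective function and step length below are arbitrary assumptions.

import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-6, max_iter=1000):
    """Minimize by repeatedly stepping along the negative of the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)                  # gradient of the objective at x
        if np.linalg.norm(g) < tol:  # converged: gradient is essentially zero
            break
        x = x - step * g             # move along the steepest descent direction, -g
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 4 y^2, with gradient (2(x - 1), 8y)
grad_f = lambda v: np.array([2.0 * (v[0] - 1.0), 8.0 * v[1]])
print(steepest_descent(grad_f, [5.0, 3.0]))   # approaches [1, 0]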

HyperChem supplies three types of optimizers or algorithms: steepest descent, conjugate gradient (Fletcher-Reeves and Polak-Ribiere), and block diagonal (Newton-Raphson). [Pg.58]

The steepest descent by steps method may provide a reasonably good way to begin an optimization when the starting point is far from the minimum. However, it converges slowly near the minimum, and it is principally recommended only to initiate an optimization when the starting point is particularly bad. [Pg.304]

The steepest descent method is quite old and utilizes the intuitive concept of moving in the direction where the objective function changes the most. However, it is clearly not as efficient as the other three. Conjugate gradient utilizes only first-derivative information, as does steepest descent, but generates improved search directions. Newton's method requires second-derivative information but is very efficient, while quasi-Newton retains most of the benefits of Newton's method but utilizes only first-derivative information. All of these techniques are also used with constrained optimization. [Pg.744]
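
One way to see this ranking in practice is to run the same test problem through library implementations of the methods. The sketch below is an assumed example using SciPy's Rosenbrock test function: conjugate gradient and BFGS (quasi-Newton) are given only the gradient, while Newton-CG is also given the Hessian.

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])   # standard Rosenbrock starting point

# First-derivative-only methods: conjugate gradient and quasi-Newton (BFGS)
cg = minimize(rosen, x0, jac=rosen_der, method='CG')
bfgs = minimize(rosen, x0, jac=rosen_der, method='BFGS')

# Newton-type method: also uses second-derivative (Hessian) information
newton = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess, method='Newton-CG')

for name, res in (('CG', cg), ('BFGS', bfgs), ('Newton-CG', newton)):
    print(f"{name:10s} success={res.success}  function evaluations={res.nfev}  x={res.x}")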

The gradient can be used to optimize the weight vector according to the method of steepest descent... [Pg.8]
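
A minimal sketch of such an update is shown below, assuming a simple least-squares error E(w) = ½||Xw − y||²; the data, learning rate, and iteration count are illustrative assumptions, not part of the cited source.

import numpy as np

# Steepest-descent update of a weight vector w for the linear model y ~ X @ w,
# minimizing E(w) = 0.5 * ||X w - y||^2 (an assumed error function).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                # synthetic data
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
eta = 0.01                                  # learning rate (step length)
for _ in range(2000):
    grad = X.T @ (X @ w - y)                # gradient of E with respect to w
    w -= eta * grad                         # step along the steepest descent direction
print(w)                                    # approaches [1.5, -2.0, 0.5]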

... function is sought by repeatedly determining the direction of steepest descent (the maximum change in the function for any change in the coefficients a_i), and taking a step to establish a new vertex. A numerical example is found in Table 1.26. An example of how the simplex method is used in optimization work is given in Ref. 143. [Pg.159]

The method of steepest descent uses only first-order derivatives to determine the search direction. Alternatively, Newton's method for single-variable optimization can be adapted to carry out multivariable optimization, taking advantage of both first- and second-order derivatives to obtain better search directions. However, second-order derivatives must be evaluated, either analytically or numerically, and multimodal functions can make the method unstable. Therefore, while this method is potentially very powerful, it also has some practical difficulties. [Pg.40]
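
A minimal sketch of the multivariable Newton step is given below: at each iteration the linear system H Δx = −g is solved using an assumed analytic gradient and Hessian. As the paragraph notes, this behaves well only when the Hessian stays positive definite; the test function and starting point are illustrative assumptions.

import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Multivariable Newton's method: solve H dx = -g at each iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        dx = np.linalg.solve(hess(x), -g)   # step from first- and second-derivative information
        x = x + dx
    return x

# Example: f(x, y) = x^4 + x*y + (1 + y)^2, a smooth two-variable test function
grad_f = lambda v: np.array([4.0 * v[0] ** 3 + v[1], v[0] + 2.0 * (1.0 + v[1])])
hess_f = lambda v: np.array([[12.0 * v[0] ** 2, 1.0], [1.0, 2.0]])
print(newton_minimize(grad_f, hess_f, [0.75, -1.25]))   # converges in a few iterations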

As explained above, the QM/MM-FE method requires the calculation of the MEP. The MEP for a potential energy surface is the steepest descent path that connects a first-order saddle point (transition state) with two minima (reactant and product). Several methods have recently been adapted by our lab to calculate MEPs in enzymes. These methods include coordinate driving (CD) [13,19], nudged elastic band (NEB) [20-25], a second-order parallel path optimizer method [25,26], and a procedure that combines these last two methods in order to improve computational efficiency [27]... [Pg.58]
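
As a rough illustration of the NEB idea only (not the implementation cited above), the sketch below relaxes a chain of images on an assumed two-dimensional model surface. Each interior image feels the component of the true force perpendicular to the path plus a spring force along the local tangent, so the converged band approximates the MEP and its highest image approximates the transition state.

import numpy as np

# Assumed model surface with minima near (-1, 1) and (1, 1) and a saddle at (0, 0)
def V(p):
    x, y = p
    return (x**2 - 1.0)**2 + 2.0 * (y - x**2)**2

def grad_V(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0) - 8.0 * x * (y - x**2),
                     4.0 * (y - x**2)])

n_images, k_spring, step = 11, 1.0, 0.02
path = np.linspace([-1.0, 1.0], [1.0, 1.0], n_images)   # straight-line initial guess

for _ in range(2000):                                   # simple fixed-step relaxation
    new_path = path.copy()
    for i in range(1, n_images - 1):                    # endpoint images (minima) stay fixed
        tau = path[i + 1] - path[i - 1]
        tau /= np.linalg.norm(tau)                      # local tangent estimate
        g = grad_V(path[i])
        f_perp = -(g - np.dot(g, tau) * tau)            # true force, perpendicular part only
        f_spring = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                               - np.linalg.norm(path[i] - path[i - 1])) * tau
        new_path[i] = path[i] + step * (f_perp + f_spring)
    path = new_path

energies = [V(p) for p in path]
print("highest image (approximate transition state):", path[int(np.argmax(energies))])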

An extension of the linearization technique discussed above may be used as a basis for design optimization. Such an application to natural gas pipeline systems was reported by Flanigan (F4) using the so-called constrained derivatives (W4) and the method of steepest descent. We offer a more concise derivation of this method following a development by Bryson and Ho (B14). [Pg.174]


