Big Chemical Encyclopedia

Optimization Methods without Derivatives

A broad class of optimization strategies does not require derivative information. These methods are easy to implement and require little prior knowledge of the optimization problem. In particular, they are well suited for quick, exploratory optimization studies that probe the scope of optimization for new problems, before effort is invested in more sophisticated modeling and solution strategies. Most of these methods derive from heuristics that naturally spawn numerous variations, and as a result a very broad literature describes them. Here we discuss only a few important trends in this area. [Pg.65]

Banga et al. [in State of the Art in Global Optimization, C. Floudas and P. Pardalos (eds.), Kluwer, Dordrecht, p. 563 (1996)]. All these methods require only objective function values for unconstrained minimization, and numerous studies apply them to a wide range of process problems. Moreover, many of these methods include heuristics that prevent premature termination (e.g., directional flexibility in the complex search, as well as random restarts and direction generation). Figure 3-58 illustrates the performance of a pattern search method and a random search method on an unconstrained problem. [Pg.65]
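To make the two methods of Fig. 3-58 concrete, here is a minimal sketch of a compass-style pattern search and a shrinking-box random search in plain Python. The quadratic test function, step sizes, and shrink factors are illustrative choices of ours, not taken from the text.

```python
import random

def quad(x):
    # Illustrative smooth test problem; minimum f = 0 at (1, -2).
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=10000):
    # Compass/pattern search: poll +/- step along each coordinate axis,
    # move to any improving point, and shrink the step when no poll helps.
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

def random_search(f, x0, radius=0.5, shrink=0.999, max_iter=20000, seed=0):
    # Random search: sample a trial point in a box around the incumbent,
    # keep it only if it improves, and slowly shrink the sampling box.
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        trial = [xi + rng.uniform(-radius, radius) for xi in x]
        ft = f(trial)
        if ft < fx:
            x, fx = trial, ft
        radius *= shrink
    return x, fx
```

Both routines use only objective function values, as the text notes; the pattern search exploits local structure deterministically, while the random search trades efficiency for robustness on noisy or discontinuous objectives.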

Derivative-free Optimization (DFO) In the past decade, the availability of parallel computers and faster computing hardware, together with the need to incorporate complex simulation models within optimization studies, has led a number of optimization researchers to reconsider classical direct search approaches. In particular, Dennis and Torczon [SIAM J. Optim. 1 448 (1991)] developed a multidimensional search algorithm that extends the simplex approach of Nelder and Mead. [Pg.65]
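For reference, below is a compact sketch of the classic Nelder-Mead simplex method that Dennis and Torczon's multidimensional search generalizes (this is not their algorithm). The reflection/expansion/contraction coefficients are the standard textbook values, the single inside-contraction step is a common simplification, and the test problem is our own.

```python
def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=5000):
    # Nelder-Mead simplex search with reflection (1), expansion (2),
    # inside contraction (-0.5), and shrink (0.5) steps.
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                    # initial simplex: x0 plus a
        v = list(x0)                      # perturbation along each axis
        v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = sorted(range(n + 1), key=lambda i: fvals[i])
        simplex = [simplex[i] for i in order]   # best ... worst
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:
            break
        centroid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        def point(coef):                  # move away from the worst vertex
            return [centroid[j] + coef * (centroid[j] - simplex[-1][j])
                    for j in range(n)]
        xr = point(1.0); fr = f(xr)       # reflection
        if fr < fvals[0]:
            xe = point(2.0); fe = f(xe)   # expansion
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        else:
            xc = point(-0.5); fc = f(xc)  # inside contraction
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                         # shrink toward the best vertex
                for i in range(1, n + 1):
                    simplex[i] = [(simplex[0][j] + simplex[i][j]) / 2.0
                                  for j in range(n)]
                    fvals[i] = f(simplex[i])
    best = min(range(n + 1), key=lambda i: fvals[i])
    return simplex[best], fvals[best]
```

Like the other direct search methods above, this uses only function values; Torczon's multidirectional variant reflects the whole simplex at once, which parallelizes naturally.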

Fig. 3-58 Examples of optimization methods without derivatives. (a) Pattern search method. (b) Random search method. Plot markers distinguish the first, second, and third phases of each search. [Pg.65]

This basic concept leads to a wide variety of global algorithms, with the following features that can exploit different problem classes. Bounding strategies relate to the calculation of upper and lower bounds. For the former, any feasible point or, preferably, a locally optimal point in the subregion can be used. For the lower bound, convex relaxations of the objective and constraint functions are derived. [Pg.66]
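The interplay of upper and lower bounds can be sketched for a one-dimensional problem. Note that this toy uses a Lipschitz-constant lower bound as a crude stand-in for the convex relaxations the text describes, and the test function and Lipschitz constant 5.0 are our own illustrative choices.

```python
import math

def branch_and_bound(f, lo, hi, lipschitz, tol=1e-4):
    # Spatial branch and bound on [lo, hi]. Upper bound: best point
    # evaluated so far (any feasible point works, as noted above).
    # Lower bound on [a, b]: f(mid) - lipschitz*(b - a)/2.
    best_x, best_f = lo, f(lo)
    regions = [(lo, hi)]
    while regions:
        a, b = regions.pop()
        mid = 0.5 * (a + b)
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm      # improved incumbent
        if fm - lipschitz * (b - a) / 2.0 >= best_f - tol:
            continue                      # fathom: cannot beat incumbent
        regions.append((a, mid))          # otherwise split and recurse
        regions.append((mid, b))
    return best_x, best_f

def multimodal(t):
    # Illustrative multimodal objective; global minimum near t = -0.50.
    return math.sin(3.0 * t) + 0.2 * t * t
```

Tighter lower bounds (from convex relaxations) fathom subregions earlier, which is why their construction dominates the design of global algorithms.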


Numerical optimizations are available for methods lacking analytic gradients (first derivatives of the energy), but they are much, much slower. Similarly, frequencies may be computed numerically for methods without analytic second derivatives. [Pg.114]
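The cost trade-off is easy to see in code: a central-difference gradient needs two extra function evaluations per variable, versus essentially none for an analytic gradient. The "energy" function below is a hypothetical stand-in, not any particular electronic-structure method.

```python
def central_difference(f, x, h=1e-5):
    # Central difference approximation to the gradient, O(h^2) accurate;
    # each component costs two function evaluations, which is why purely
    # numerical derivatives are much slower than analytic ones.
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

def energy(x):
    # Hypothetical smooth energy surface; the analytic gradient is
    # [2*x[0], 4*x[1]**3] for comparison.
    return x[0] ** 2 + x[1] ** 4
```

Numerical second derivatives compound the cost further, since each Hessian column requires its own set of perturbed gradient (or energy) evaluations.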

In this study, appropriate methods were investigated for the structural optimization of sulfonamide derivatives. The results showed that semiempirical methods were unable to reproduce the experimental bond lengths, bond angles, and torsion angles, and that ab initio MO and DFT methods were indispensable for accurately predicting the molecular structures of sulfonamide derivatives. Combining ab initio MO and DFT methods with low-level basis sets such as 3-21G and 4-31G, or with basis sets lacking f-type polarization functions, did not reproduce the experimental data, suggesting that... [Pg.342]

Indirect or variational approaches are based on Pontryagin's maximum principle [8], in which the first-order optimality conditions are derived by applying the calculus of variations. For problems without inequality constraints, the optimality conditions can be written as a set of DAEs and solved as a two-point boundary value problem. If there are inequality path constraints, additional optimality conditions are required, and determining the entry and exit points of active constraints along the integration horizon leads to a combinatorial problem, which is generally hard to solve. There are several developments and implementations of indirect methods, including [9] and [10]. [Pg.546]
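A minimal indirect-method example, on a toy problem of our own (not from reference [8]): minimize the integral of u^2 over t in [0, 1] subject to dx/dt = u, x(0) = 0, x(1) = 1. The maximum principle gives u = -lambda/2 with a constant costate lambda, so the two-point boundary value problem reduces to shooting on lambda.

```python
def shoot(lam, n=1000):
    # Forward-integrate dx/dt = u with the optimality condition
    # dH/du = 0 for H = u**2 + lam*u, i.e. u = -lam/2; the costate
    # equation dlam/dt = -dH/dx = 0 makes lam constant here.
    x, dt, u = 0.0, 1.0 / n, -lam / 2.0
    for _ in range(n):
        x += u * dt                  # explicit Euler step
    return x                         # terminal state x(1)

def solve_costate(target=1.0, lo=-10.0, hi=10.0, tol=1e-10):
    # Bisection on the unknown costate so that x(1) hits the target:
    # the simplest possible treatment of the two-point BVP.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) > target:
            lo = mid                 # x(1) decreases as lam increases
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this problem the analytic answer is lambda = -2 and u = 1; with path constraints active, the single shooting parameter would be replaced by the much harder entry/exit-point determination the text describes.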

The derivation of these estimates, expressing the computational stability of iterative methods with optimal sets of Chebyshev parameters, is omitted in the present book. In the sequel we involve only this collection of parameters, which allows a simpler writing of the ensuing formulas without concern for symbols... [Pg.674]
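To give the flavor of what those parameters do, the sketch below runs Richardson iteration with the optimal Chebyshev parameter set on a diagonal SPD system. The matrix and spectral bounds are our own illustrative choices, and the parameter-ordering question the stability estimates address (which affects round-off only, not exact arithmetic) is ignored.

```python
import math

def chebyshev_richardson(a_diag, b, m, M, n):
    # Richardson iteration x <- x - tau_k (A x - b) for a diagonal SPD
    # matrix A whose spectrum lies in [m, M], using the optimal set of
    # Chebyshev parameters
    #   tau_k = 2 / (M + m - (M - m) cos((2k - 1) pi / (2n))),
    # the reciprocals of the Chebyshev polynomial roots mapped to [m, M].
    x = [0.0] * len(b)
    for k in range(1, n + 1):
        tau = 2.0 / (M + m - (M - m) * math.cos((2 * k - 1) * math.pi / (2 * n)))
        x = [xi - tau * (ai * xi - bi) for xi, ai, bi in zip(x, a_diag, b)]
    return x
```

After n sweeps the error is damped by roughly 1/T_n((M + m)/(M - m)), far faster than the fixed-parameter Richardson rate, which is what makes the ordering (and its round-off behavior) worth the care the text alludes to.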



© 2024 chempedia.info