
Direct search optimization

The direct search for a global optimum may not uncover some of the Pareto-optimal solutions close to the overall optimum, which might be good trade-off solutions of interest to the decision maker. [Pg.257]

There is a variety of general-purpose unconstrained optimization methods that can be used to estimate unknown parameters. These methods are broadly classified into two categories: direct search methods and gradient methods (Edgar and Himmelblau, 1988; Gill et al., 1981; Kowalik and Osborne, 1968; Sargent, 1980; Reklaitis, 1983; Scales, 1985). [Pg.67]

One of the most reliable direct search methods is the LJ optimization procedure (Luus and Jaakola, 1973). This procedure uses random search points and systematic contraction of the search region. The method is easy to program and handles the problem of multiple optima with high reliability (Wang and Luus, 1977, 1978). An important advantage of the method is its ability to handle multiple nonlinear constraints. [Pg.79]
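
As a rough illustration of the random-points-plus-region-contraction idea, the following Python sketch implements an LJ-style search on a hypothetical two-parameter least-squares objective. The contraction factor, iteration counts, and test data are illustrative assumptions, not values taken from Luus and Jaakola.

```python
import numpy as np

def lj_search(objective, x0, r0, n_outer=50, n_inner=100, gamma=0.95, seed=0):
    """LJ-style random search: sample trial points inside a region centred
    on the current best point and contract the region after every pass."""
    rng = np.random.default_rng(seed)
    x_best = np.asarray(x0, dtype=float)
    r = np.asarray(r0, dtype=float)        # half-width of the search region
    f_best = objective(x_best)
    for _ in range(n_outer):               # outer loop: contract the region
        for _ in range(n_inner):           # inner loop: random trial points
            x_trial = x_best + r * rng.uniform(-1.0, 1.0, size=x_best.shape)
            f_trial = objective(x_trial)
            if f_trial < f_best:
                x_best, f_best = x_trial, f_trial
        r = gamma * r                      # systematic contraction
    return x_best, f_best

# Hypothetical two-parameter least-squares objective for illustration only
t = np.linspace(0.0, 5.0, 20)
y = 2.0 * np.exp(-0.7 * t)                 # "data" generated from known parameters
ssr = lambda p: float(np.sum((y - p[0] * np.exp(-p[1] * t)) ** 2))
p_best, ssr_best = lj_search(ssr, x0=[1.0, 1.0], r0=[2.0, 2.0])
print(p_best, ssr_best)
```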

If we have very little information about the parameters, direct search methods, like the LJ optimization technique presented in Chapter 5, provide an excellent way to generate very good initial estimates for the Gauss-Newton method. Actually, for algebraic equation models, direct search methods can be used to determine the optimum parameter estimates quite efficiently. However, if estimates of the uncertainty in the parameters are required, use of the Gauss-Newton method is strongly recommended, even if it is only for a couple of iterations. [Pg.139]

In this section we first present an efficient step-size policy for differential equation systems and then two approaches to increase the region of convergence of the Gauss-Newton method: one through the use of the Information Index and the other through a two-step procedure that involves direct search optimization. [Pg.150]

A simple procedure to overcome the problem of the small region of convergence is to use a two-step procedure whereby direct search optimization is used initially to bring the parameters into the vicinity of the optimum, followed by the Gauss-Newton method to obtain the best parameter values and estimates of the uncertainty in the parameters (Kalogerakis and Luus, 1982). [Pg.155]
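
A hedged sketch of such a two-step workflow, using SciPy, is shown below: a derivative-free search (Nelder-Mead here, standing in for the LJ procedure) moves a poor initial guess into the vicinity of the optimum, and a derivative-based least-squares solver (Levenberg-Marquardt, playing the role of Gauss-Newton) then refines the estimates. The kinetic model, synthetic data, and starting point are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize, least_squares

# Hypothetical first-order kinetic model y = k1 * exp(-k2 * t) with synthetic data
t = np.linspace(0.0, 5.0, 15)
y_obs = 3.0 * np.exp(-1.2 * t) + 0.01 * np.random.default_rng(1).normal(size=t.size)

def residuals(p):
    return y_obs - p[0] * np.exp(-p[1] * t)

def ssr(p):
    return float(np.sum(residuals(p) ** 2))

# Step 1: direct search from a poor initial guess to reach the
# vicinity of the optimum (derivative-free stage).
rough = minimize(ssr, x0=[10.0, 10.0], method="Nelder-Mead")

# Step 2: derivative-based refinement (Levenberg-Marquardt) for the
# final parameter estimates.
refined = least_squares(residuals, x0=rough.x, method="lm")
print(rough.x, refined.x)
```

The direct-search stage is deliberately crude; its only job is to land inside the region of convergence of the derivative-based refinement step.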

For example, let us consider the estimation of the two kinetic parameters in the Bodenstein-Linder model for the homogeneous gas phase reaction of NO with O2 (first presented in Section 6.5.1). In Figure 8.4 we see that the use of direct search (LJ optimization) can increase the overall size of the region of convergence by at least two orders of magnitude. [Pg.155]

Luus, R., and T.H.I. Jaakola, "Optimization by Direct Search and Systematic Reduction of the Search Region", AIChE J., 19, 760 (1973). [Pg.398]

All the algebraic and geometric methods for optimization presented so far work when either there is no experimental error or it is smaller than the usual absolute differences obtained when the objective functions for two neighboring points are subtracted. When this is not the case, the direct search and gradient methods can cause one to go in circles, and the geometric method may cause the region containing the maximum to be eliminated from further consideration. [Pg.406]

We start with continuous variable optimization and consider in the next section the solution of NLP problems with differentiable objective and constraint functions. If only local solutions are required for the NLP problem, then very efficient large-scale methods can be considered. This is followed by methods that are not based on local optimality criteria: we consider direct search optimization methods that do not require derivatives, as well as deterministic global optimization methods. Following this, we consider the solution of mixed integer problems and outline the main characteristics of algorithms for their solution. Finally, we conclude with a discussion of optimization modeling software and its implementation on engineering models. [Pg.60]

Derivative-free Optimization (DFO) In the past decade, the availability of parallel computers and faster computing hardware and the need to incorporate complex simulation models within optimization studies have led a number of optimization researchers to reconsider classical direct search approaches. In particular, Dennis and Torczon [SIAM J. Optim. 1 448 (1991)] developed a multidimensional search algorithm that extends the simplex approach of Nelder... [Pg.65]
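
The excerpt stops before giving the algorithm itself. As a loose illustration of the pattern-search family it refers to, the sketch below implements a basic compass (coordinate) search rather than the Dennis-Torczon multidirectional method; the step-size schedule, tolerance, and test function are illustrative assumptions.

```python
import numpy as np

def compass_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Basic compass (coordinate pattern) search: poll the 2n axis
    directions; accept any improving point, otherwise shrink the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # +/- each coordinate
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= shrink                              # unsuccessful poll
        it += 1
    return x, fx

# Illustrative use on a simple quadratic
x_min, f_min = compass_search(lambda x: (x[0] - 1.0)**2 + 4*(x[1] + 2.0)**2,
                              x0=[5.0, 5.0])
print(x_min, f_min)
```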

This condensation helps one understand why the yield of pyrroles from ketoximes and acetylene is reduced in some cases and consequently allows a more directed search for ways to overcome this obstacle. Optimization of this side reaction would make possible a one-pot preparation of valuable dipyrroles with cyclopropyl or vinyl substituents, for example 98a,c. [Pg.259]

The optimization can be carried out by several methods of linear and nonlinear regression. The mathematical methods must be chosen with criteria that fit the calculation of the applied objective functions. The most widely applied methods of nonlinear regression can be separated into two categories: methods that do or do not use partial derivatives of the objective function with respect to the model parameters. The most widely employed non-derivative methods are zero order, such as the methods of direct search and the Simplex (Himmelblau, 1972). The most widely used derivative methods are first order, such as the method of indirect search, the Gauss-Seidel or Newton methods, the gradient method, and the Marquardt method. [Pg.212]

Statistical optimization methods other than the Simplex algorithm have only occasionally been used in chromatography. Rafel [513] compared the Simplex method with an extended Hooke-Jeeves direct search method [514] and the Box-Wilson steepest ascent path [515] after an initial 2³ full factorial design for the parameters methanol-water composition, temperature and flowrate in RPLC. Although they concluded that the Hooke-Jeeves method was superior for this particular case, the comparison is neither representative nor conclusive. [Pg.187]
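
For orientation, a minimal Python sketch of the Hooke-Jeeves exploratory-move/pattern-move logic follows; the quadratic test function and step-size settings are placeholders, not the chromatographic objective or settings used in the cited study.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Minimal Hooke-Jeeves direct search: exploratory moves along each
    coordinate around a base point, followed by a pattern move."""
    def explore(base, fb, h):
        x, fx = base.copy(), fb
        for i in range(x.size):
            for delta in (+h, -h):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    base = np.asarray(x0, dtype=float)
    fb = f(base)
    while step > tol:
        x_new, f_new = explore(base, fb, step)
        if f_new < fb:
            # Pattern move: extrapolate along the successful direction,
            # then explore again around the extrapolated point.
            pattern = x_new + (x_new - base)
            xp, fp = explore(pattern, f(pattern), step)
            if fp < f_new:
                base, fb = xp, fp
            else:
                base, fb = x_new, f_new
        else:
            step *= shrink      # no improvement: reduce the step size
    return base, fb

# Illustrative use on a simple quadratic response surface
x_opt, f_opt = hooke_jeeves(lambda x: (x[0] - 3.0)**2 + (x[1] - 1.0)**2, [0.0, 0.0])
print(x_opt, f_opt)
```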

This relationship is used in an application to a simple binary system to balance the trade-offs between inefficiency (fuel costs) and capital investment. The Second Law optimization yields results identical to those obtained from a traditional direct-search technique. [Pg.289]

Non-derivative Methods.—Multivariate Grid Search. The oldest of the direct search methods is the multivariate grid search. This has a long history in quantum chemistry as it has been the preferred method in optimizing the energy with respect to nuclear positions and with respect to orbital exponents. The algorithm for the method is very simple. In this and subsequent algorithms we use x to indicate the variables and a to indicate a chosen point. [Pg.39]
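
A minimal Python sketch of such a grid search, assuming a simple box of variable ranges and a hypothetical two-variable energy surface (neither taken from the original text):

```python
import numpy as np
from itertools import product

def grid_search(f, lower, upper, points_per_axis=11):
    """Exhaustive multivariate grid search: evaluate f on a regular grid
    over the box [lower, upper] and return the best grid point."""
    axes = [np.linspace(lo, hi, points_per_axis)
            for lo, hi in zip(lower, upper)]
    x_best, f_best = None, np.inf
    for point in product(*axes):            # every combination of grid values
        fx = f(np.asarray(point))
        if fx < f_best:
            x_best, f_best = np.asarray(point), fx
    return x_best, f_best

# Hypothetical two-variable "energy" surface for illustration
energy = lambda x: (x[0] - 1.3)**2 + 0.5 * (x[1] - 2.1)**2
x_opt, e_opt = grid_search(energy, lower=[0.0, 0.0], upper=[3.0, 3.0])
print(x_opt, e_opt)
```

In practice the grid is usually refined around the best point found, trading extra function evaluations for accuracy.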

Various more-or-less efficient optimization strategies have been developed [46, 47] and can be classified into direct search methods and gradient methods. The direct search methods, like those of Powell [48], of Rosenbrock and Storey [49], and of Nelder and Mead (Simplex) [50], start from initial guesses and vary the parameter values individually or in combination, thereby searching for the direction to the minimum SSR. [Pg.316]

Search problems can be divided into two groups, depending on whether or not random experimental error is associated with each measurement. There are, indeed, significant problems that have no experimental error, as when the function in question is given as an exact mathematical expression, but one too complicated to be optimized directly by calculus or by known methods of mathematical programming. Design problems are often of this latter nature. We shall discuss mainly the no-error case, since its principles are simple and can be used even in the presence of experimental error. [Pg.278]

Watson, 1968; Rudd, 1968; Masso and Rudd, 1969). Algorithmic methods for selecting the optimal configuration from a given superstructure also began to be developed through the use of direct search methods for continuous variables (Umeda et al., 1972; Ichikawa and Fan, 1973) as well as branch and bound search methods (Lee et al., 1970). [Pg.173]

