Optimization search methods

GA is a parallel optimization search method that differs from traditional optimization methods in both its mechanics and its field of application. Goldberg [114] summarized the differences between GA and traditional optimization methods as follows: GA operates on a coding of the solution set, not the solution set itself; GA searches from a population, not a single solution; GA uses only payoff information (the fitness function), not derivatives or other auxiliary knowledge; and GA uses probabilistic transition rules, not deterministic ones. [Pg.30]
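A minimal sketch in Python shows how these four differences appear in code (the test problem, the five-bit coding, and all operator parameters are our own illustrative choices, not Goldberg's):

import random

# Minimal genetic algorithm sketch: maximizes f(x) = -(x - 3)^2 on [0, 31]
# using a 5-bit coding. Illustrative only.

def decode(bits):               # GA operates on a coding of the solution
    return int("".join(map(str, bits)), 2)

def fitness(bits):              # only payoff (fitness) information is used
    x = decode(bits)
    return -(x - 3) ** 2

def select(pop):                # probabilistic (tournament) selection
    a, b = random.sample(pop, 2)
    return a if fitness(a) > fitness(b) else b

def crossover(p1, p2):          # single-point crossover
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.05):    # probabilistic bit-flip mutation
    return [1 - b if random.random() < rate else b for b in bits]

# GA searches from a population of points, not a single point.
pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(20)]
for _ in range(50):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]

best = max(pop, key=fitness)
print(decode(best), fitness(best))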

Minimum-time joint trajectory planning is a constrained non-linear optimization problem with a single objective function. The optimization procedure used in this work is a non-linear optimization search method with goal programming, based on the modified Hooke and Jeeves direct search method [13]. [Pg.503]
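The basic (unmodified) Hooke and Jeeves pattern search underlying such procedures can be sketched as follows; this is a generic textbook version in Python, not the goal-programming variant of Ref. [13], and the step sizes are illustrative.

import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Basic Hooke-Jeeves direct search: exploratory moves along each
    coordinate, followed by a pattern move through the improved point."""
    def explore(base, h):
        x = base.copy()
        for i in range(len(x)):
            for d in (+h, -h):
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = np.asarray(x0, dtype=float)
    h = step
    for _ in range(max_iter):
        x_new = explore(x, h)
        if f(x_new) < f(x):                 # pattern move: extrapolate
            x_pat = x_new + (x_new - x)
            x_try = explore(x_pat, h)
            x = x_try if f(x_try) < f(x_new) else x_new
        else:                               # no improvement: shrink step
            h *= shrink
            if h < tol:
                break
    return x

# Usage: minimize the Rosenbrock function from (-1.2, 1).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(hooke_jeeves(rosen, [-1.2, 1.0]))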

With many variables and constraints, linear and nonlinear programming may be applicable, as well as various numerical gradient search methods. The maximum principle and dynamic programming are laborious and have had only limited application in this area. The various mathematical techniques are explained and illustrated, for instance, by Edgar and Himmelblau (Optimization of Chemical Processes, McGraw-Hill, 1988). [Pg.705]
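As one concrete instance of a numerical gradient search, the following sketch performs steepest descent with forward-difference gradients; it is a generic illustration with invented step sizes, not an example drawn from the cited text.

import numpy as np

def num_grad(f, x, h=1e-6):
    """Forward-difference approximation of the gradient."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def steepest_descent(f, x0, alpha=0.01, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = num_grad(f, x)
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g          # fixed-step move down the gradient
    return x

print(steepest_descent(lambda x: (x[0] - 1)**2 + 2 * (x[1] + 3)**2, [0.0, 0.0]))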

Nonlinear Programming The most general case for optimization occurs when both the objective function and constraints are nonlinear, a case referred to as nonlinear programming. While the ideas behind the search methods used for unconstrained multivariable problems are still applicable, the presence of constraints complicates the solution procedure. [Pg.745]
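In practice such problems are typically passed to a nonlinear programming solver. The fragment below, for example, uses SciPy's minimize with the SLSQP method; the objective and the single nonlinear inequality constraint are invented for illustration.

import numpy as np
from scipy.optimize import minimize

# Nonlinear objective with a nonlinear inequality constraint:
# minimize (x0 - 2)^2 + (x1 - 1)^2  subject to  x0^2 + x1^2 <= 2.
objective = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
constraint = {"type": "ineq", "fun": lambda x: 2 - x[0]**2 - x[1]**2}

result = minimize(objective, x0=np.array([0.0, 0.0]),
                  method="SLSQP", constraints=[constraint])
print(result.x, result.fun)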

Finding the minimum of the hybrid energy function is very complex. As in the protein folding problem, the number of degrees of freedom is far too large to allow a complete systematic search in all variables. Systematic search methods need to reduce the problem to a few degrees of freedom (see, e.g., Ref. 30). Conformations of the molecule that satisfy the experimental bounds are therefore usually calculated with metric matrix distance geometry methods followed by optimization, or by optimization methods alone. [Pg.257]

To overcome the limitations of the database search methods, conformational search methods were developed [95,96,109]. There are many such methods, exploiting different protein representations, objective function terms, and optimization or enumeration algorithms. The search algorithms include the minimum perturbation method [97], molecular dynamics simulations [92,110,111], genetic algorithms [112], Monte Carlo and simulated annealing [113,114], multiple copy simultaneous search [115-117], self-consistent field optimization [118], and an enumeration based on graph theory [119]. [Pg.286]
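Of the stochastic algorithms in this list, simulated annealing is among the simplest to sketch. The following is a generic Metropolis-type annealer for a one-dimensional continuous variable, not any of the cited protein-specific implementations; the cooling schedule and step size are arbitrary.

import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=5000):
    """Generic simulated annealing: accept uphill moves with
    probability exp(-delta/T), lowering T geometrically."""
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(n_iter):
        x_new = x + random.uniform(-step, step)
        f_new = f(x_new)
        delta = f_new - fx
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = x_new, f_new
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# A one-dimensional multimodal test function.
print(simulated_annealing(lambda x: x**2 + 10 * math.sin(3 * x), 4.0))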

More or less automatic ways of finding an optimum are described in Appendix 6. The simplest of these by far is the random search method. It can be used for any number of optimization variables. It is extremely inefficient from the viewpoint of the computer but is joyously simple to implement. The following program fragment illustrates the method. [Pg.194]
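That fragment is not reproduced in this excerpt; a minimal Python equivalent might look like the following (function and parameter names are ours).

import random

def random_search(f, bounds, n_trials=10_000):
    """Pure random search: sample points uniformly within the bounds
    and keep the best one seen. Inefficient, but trivially simple."""
    best_x, best_f = None, float("inf")
    for _ in range(n_trials):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Works for any number of optimization variables:
print(random_search(lambda x: sum((xi - 1)**2 for xi in x),
                    bounds=[(-5, 5)] * 3))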

There is a variety of general purpose unconstrained optimization methods that can be used to estimate unknown parameters. These methods are broadly classified into two categories: direct search methods and gradient methods (Edgar and Himmelblau, 1988; Gill et al., 1981; Kowalik and Osborne, 1968; Sargent, 1980; Reklaitis, 1983; Scales, 1985). [Pg.67]

One of the most reliable direct search methods is the LJ optimization procedure (Luus and Jaakola, 1973). This procedure uses random search points and systematic contraction of the search region. The method is easy to program and handles the problem of multiple optima with high reliability (Wang and Luus, 1977, 1978). An important advantage of the method is its ability to handle multiple nonlinear constraints. [Pg.79]
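The procedure itself is short enough to sketch. The bare-bones Python version below samples random points around the incumbent best and contracts the search region each pass; the pass counts and contraction factor are our own illustrative choices, and constraints could be handled by simply rejecting infeasible trial points.

import random

def luus_jaakola(f, x0, region, n_passes=50, n_points=100, contraction=0.95):
    """Bare-bones Luus-Jaakola: sample randomly around the incumbent,
    keep the best point, then shrink the search region."""
    best = list(x0)
    fbest = f(best)
    r = list(region)                        # half-width of region per variable
    for _ in range(n_passes):
        for _ in range(n_points):
            trial = [b + random.uniform(-ri, ri) for b, ri in zip(best, r)]
            ft = f(trial)
            if ft < fbest:
                best, fbest = trial, ft
        r = [ri * contraction for ri in r]  # systematic contraction
    return best, fbest

print(luus_jaakola(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                   x0=[0.0, 0.0], region=[5.0, 5.0]))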

If we have very little information about the parameters, direct search methods, like the LJ optimization technique presented in Chapter 5, provide an excellent way to generate very good initial estimates for the Gauss-Newton method. Indeed, for algebraic equation models, direct search methods can be used to determine the optimum parameter estimates quite efficiently. However, if estimates of the uncertainty in the parameters are required, use of the Gauss-Newton method is strongly recommended, even if only for a couple of iterations. [Pg.139]
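For an algebraic model y = f(x; k), a bare Gauss-Newton iteration on the least-squares objective looks like the following sketch (forward-difference sensitivities; the exponential model and synthetic data are invented for illustration).

import numpy as np

def gauss_newton(model, k0, xdata, ydata, n_iter=10, h=1e-6):
    """Bare Gauss-Newton for least squares: linearize the residuals,
    solve the normal equations, update the parameters."""
    k = np.asarray(k0, dtype=float)
    for _ in range(n_iter):
        resid = ydata - model(xdata, k)
        # Finite-difference sensitivity (Jacobian) matrix, d(model)/d(k)
        J = np.empty((len(xdata), len(k)))
        for j in range(len(k)):
            kp = k.copy()
            kp[j] += h
            J[:, j] = (model(xdata, kp) - model(xdata, k)) / h
        dk, *_ = np.linalg.lstsq(J, resid, rcond=None)
        k = k + dk
    return k

# Illustrative model: y = k0 * exp(-k1 * x), with synthetic data.
model = lambda x, k: k[0] * np.exp(-k[1] * x)
x = np.linspace(0, 2, 20)
y = model(x, [2.0, 1.5])
print(gauss_newton(model, k0=[1.0, 1.0], xdata=x, ydata=y))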

The techniques most widely used for optimization may be divided into two general categories: one in which experimentation continues as the optimization study proceeds, and another in which the experimentation is completed before the optimization takes place. The first type is represented by evolutionary operations and the simplex method, and the second by the more classic mathematical and search methods. (Each of these is discussed in Sec. V.)

In contrast with the mathematical optimization methods, search methods do not require continuity or differentiability of the function—only that it can be evaluated at the points of interest. [Pg.613]
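The Nelder-Mead simplex method illustrates the point: it uses nothing beyond function values, so it tolerates objectives that are not differentiable everywhere. A short example using SciPy's implementation follows; the objective is invented.

from scipy.optimize import minimize

# abs() is non-differentiable at the optimum, which is no obstacle
# to a simplex search that uses only function evaluations.
objective = lambda x: abs(x[0] - 3) + abs(x[1] + 1)

result = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)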

Although the Lagrangian method was able to handle several responses or dependent variables, it was generally limited to two independent variables. A search method of optimization was also applied to a pharmaceutical system, as reported by Schwartz et al. [17]; it takes five independent variables into account. [Pg.615]

From the data obtained in the required number of experiments, one can generate a mathematical model to which the appropriate optimization technique (e.g., graphical, mathematical, or a search method) is applied. [Pg.625]

The problem of multivariable optimization is illustrated in Figure 3.4. Search methods used for multivariable optimization can be classified as deterministic and stochastic. [Pg.38]

One fundamental practical difficulty with both the direct and indirect search methods is that, depending on the shape of the solution space, the search can locate local optima, rather than the global optimum. Often, the only way to ensure that the global optimum has been reached is to start the optimization from different initial points and repeat the process. [Pg.40]
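That restart strategy is straightforward to automate. The sketch below wraps a local optimizer (SciPy's BFGS, as one choice) in a multistart loop over random initial points; it is a common heuristic, not a guarantee that the global optimum is found.

import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=20, seed=0):
    """Run a local search from several random starting points and
    keep the best local optimum found."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
        res = minimize(f, x0, method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
    return best

# A multimodal objective with several local minima.
f = lambda x: np.sin(3 * x[0]) + (x[0] - 0.5)**2
print(multistart(f, bounds=[(-3, 3)]).x)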

Stochastic search methods. In all of the optimization methods discussed so far, the algorithm searches the objective...

Banga et al. [in State of the Art in Global Optimization, C. Floudas and P. Pardalos (eds.), Kluwer, Dordrecht, p. 563 (1996)]. All these methods require only objective function values for unconstrained minimization. Associated with these methods are numerous studies on a wide range of process problems. Moreover, many of these methods include heuristics that prevent premature termination (e.g., directional flexibility in the complex search as well as random restarts and direction generation). Fig. 3-58 illustrates the performance of a pattern search method and of a random search method on an unconstrained problem. [Pg.65]

FIG. 3-58 Examples of optimization methods without derivatives: (a) pattern search method; (b) random search method. Markers distinguish the first, second, and third phases of the search. [Pg.65]

In this chapter we described and illustrated only a few unidimensional search methods. Refer to Luenberger (1984), Bazaraa et al. (1993), or Nash and Sofer (1996) for many others. Naturally, you can ask which unidimensional search method is best to use, most robust, most efficient, and so on. Unfortunately, the performance of the various algorithms is problem-dependent even when they are used alone, and when they are used as subroutines in optimization codes it also depends on how well they mesh with the particular code. Most codes simply take one or a few steps in the search direction, or in more than one direction, with no requirement for accuracy—only that f(x) be reduced by a sufficient amount. [Pg.176]
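As one representative unidimensional method, the following is a standard golden-section search, sketched here for illustration: it maintains a bracket around a minimum of a unimodal function and shrinks the bracket by a constant ratio at each step.

import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for a minimum of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # ~0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):            # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                      # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

print(golden_section(lambda x: (x - 2)**2 + 1, 0.0, 5.0))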

