
One-dimensional search

Techniques for unconstrained and constrained optimization problems generally involve repeated use of a one-dimensional search as described in Chapters 6 and 8. [Pg.153]

Apply a sequential one-dimensional search technique to reduce the interval of uncertainty for the maximum of the function f(x) = 6.64 + 1.2x - x^2 from [0,1] to less than 2 percent of its original size. Show all the iterations. [Pg.177]
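For illustration (this is not the book's worked solution), here is a minimal golden-section search in Python, one of the region-elimination techniques discussed later in this entry, applied to the exercise under the assumption that the garbled function reads f(x) = 6.64 + 1.2x - x^2:

```python
import math

def golden_section_max(f, a, b, shrink=0.02):
    """Shrink the bracket [a, b] around the maximum of a unimodal f
    until it is `shrink` times its original width."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618 reduction per iteration
    width0 = b - a
    x1, x2 = b - inv_phi * (b - a), a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > shrink * width0:
        if f1 < f2:              # maximum lies in [x1, b]; discard the left end
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
        else:                    # maximum lies in [a, x2]; discard the right end
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
    return a, b

a, b = golden_section_max(lambda x: 6.64 + 1.2 * x - x ** 2, 0.0, 1.0)
print(f"final bracket: [{a:.4f}, {b:.4f}]")   # closes in on the maximum at x = 0.6
```

Each iteration multiplies the bracket width by about 0.618, so nine iterations bring [0,1] below 2 percent of its original size.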

Another simple optimization technique is to select n fixed search directions (usually the coordinate axes) for an objective function of n variables. Then f(x) is minimized in each search direction sequentially using a one-dimensional search. This method is effective for a quadratic function of the form... [Pg.185]

Line search. The oldest and simplest method of calculating α to obtain Δx is via a unidimensional line search. In a given direction that reduces f(x), take a step, or a sequence of steps yielding an overall step, that reduces f(x) to some acceptable degree. This operation can be carried out by any of the one-dimensional search... [Pg.204]
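The excerpt leaves "some acceptable degree" open; one common choice is the Armijo sufficient-decrease test. A minimal backtracking sketch (the constants rho and c are conventional defaults, not values from this text):

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step alpha along descent direction d until f(x + alpha*d)
    satisfies the Armijo sufficient-decrease condition."""
    alpha = alpha0
    fx = f(x)
    slope = np.dot(grad_f(x), d)   # negative for a descent direction
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho               # the overall step is a sequence of reductions
    return alpha
```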

One-dimensional search: multistage evaporator (11.3), reflux ratio of a distillation column (12.4), fixed-bed filter (13.3) ... [Pg.416]

In this example we illustrate the application of a one-dimensional search technique from Chapter 5 to a problem posed by Martin and coworkers (1981) of obtaining the optimal reflux ratio in a distillation column. [Pg.454]

Solution. Based on the data in Table E12.4A we minimized f with respect to R using a quadratic interpolation one-dimensional search (see Chapter 5). The value of Rm from Equation (a) was 11.338. The initial bracket was 12 < R < 20, and R = 16, 18, and 20 were selected as the initial three points. The convergence tolerance on the optimum required that f not change by more than 0.01 from one iteration to the next. [Pg.457]
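A minimal sketch (not the book's code) of the quadratic-interpolation idea used here: fit a parabola through three points, move to its vertex, and stop when f changes by less than the tolerance:

```python
def quadratic_interp_min(f, xa, xb, xc, tol=0.01, max_iter=50):
    """Successive quadratic interpolation for a one-dimensional minimum."""
    (x1, f1), (x2, f2), (x3, f3) = sorted((x, f(x)) for x in (xa, xb, xc))
    f_prev = f2
    for _ in range(max_iter):
        # Vertex of the parabola through the three current points
        num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
        den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
        if den == 0.0:
            break                          # collinear points: no parabola
        x_new = x2 - 0.5 * num / den
        f_new = f(x_new)
        if abs(f_new - f_prev) < tol:      # the excerpt's stopping rule
            return x_new
        f_prev = f_new
        # Simple update: keep the three lowest points, reordered by abscissa
        best = sorted([(f1, x1), (f2, x2), (f3, x3), (f_new, x_new)])[:3]
        (x1, f1), (x2, f2), (x3, f3) = sorted((x, fx) for fx, x in best)
    return x2
```

A production code would also maintain a bracket around the minimum; this sketch shows only the interpolation step and the change-in-f stopping rule quoted above.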

In all, 10 one-dimensional searches were carried out, and 54 objective function calls and 111 gradient calls (numerical differences were used) were made by the code. [Pg.492]

Unconstrained u(k): A is varied using a one-dimensional search (external to the MPC program) to find a good response that satisfies the input constraints in step 2. [Pg.571]

Sequential quadratic programming. A sequential quadratic programming (SQP) technique involves the resolution of a sequence of explicit quadratic programming (QP) subproblems. The solution of each subproblem produces the search direction d that has to be taken to reach the next iterate z_{k+1} from the current iterate z_k. A one-dimensional search is then accomplished in the direction d_k to obtain the optimal step size. [Pg.104]
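As a sketch of this pattern (not the text's own implementation), SciPy's SLSQP solver, a member of the SQP family, carries out exactly this loop of QP subproblem plus line search behind a single call; the toy problem below is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (assumed, not from the text): a quadratic objective
# with one linear inequality constraint.
objective = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.5) ** 2
constraints = [{"type": "ineq", "fun": lambda z: z[0] - 2.0 * z[1] + 2.0}]

result = minimize(objective, x0=np.array([2.0, 0.0]),
                  method="SLSQP", constraints=constraints)
print(result.x)   # each iterate z_k moves along the QP direction d_k
                  # with a step size chosen by a one-dimensional search
```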

First, the search direction, d, in the space of the independent variables is determined; the elements of the dependent variables are then evaluated as functions of the independent variables using the process constraints. [Pg.105]

The optimum-seeking methods that have been found particularly useful are the modified Fibonacci search (search by golden section) for one-dimensional searches and the Hooke-Jeeves search for multidimensional searches. Beveridge and Schechter (8) give a complete description of these searches. [Pg.100]

In Heidemann and Khalil (14) additional detail is given about the numerical procedures that were found effective in solving these equations. The overall solution strategy, as described above, requires nested one-dimensional searches: the critical volume is found by solving (23), but at each volume (16) must be solved for the temperature. The multiplier (T/100) in each term of the matrix Q and the multiplier [(v - b)/2b]^2 in the cubic form were introduced to improve the behavior of the numerical methods. [Pg.383]

For three or more stages the procedure is a direct extension of that presented here, and it still remains a one-dimensional search. This procedure was first developed by Konoki (1956a) and later, independently, by Horn (1961a). [Pg.432]

If we start at a minimum, this must be a saddle point. This observation is the basis for the GE algorithm: transition states are determined by carrying out one-dimensional searches along GEs, starting at a minimum. Since n GEs pass through each stationary point, there are 2n directions along which we may carry out such line searches. [Pg.318]

The only parameter that has been fixed in the above three sequential stages is the HRAT. We can subsequently update the HRAT by performing a one-dimensional search using the golden section search method, which is shown as the outside loop in Figure 8.20. [Pg.323]

The positions, < >, were determined as follows. Since the < > are known approximately, the linear equations (6) were solved assuming no layer line splitting. Then, for each reflection, a limited, one-dimensional search was made for a position which fit the measured intensities better. The angular widths, a, were also allowed to vary, since splitting leads to an apparent increase in the measured angular widths of the layer lines. [Pg.143]

The selection of a method for one-dimensional search is based on the tradeoff between the number of function evaluations and computer time. We can find the optimum by evaluating the objective function at many values of x, using a small grid spacing (Δx) over the allowable range of x values, but this method is generally inefficient. There are three classes of techniques that can be used efficiently for one-dimensional search: indirect, region elimination, and interpolation. [Pg.34]

A one-dimensional search optimization technique, such as the Fibonacci search, is employed to minimize Equation 8-113. A computer program (PROG81) was developed to estimate the equivalent number of ideal tanks N for the given effluent tracer response versus time data. Additionally, the program calculates the mean residence time, variance, dimensionless variance, dispersion number, and the Peclet number. [Pg.722]
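PROG81 itself is not reproduced here; for reference, a minimal Fibonacci search sketch (the interval-update ratios follow the standard Fibonacci scheme, and n is an assumed evaluation budget):

```python
def fibonacci_search_min(f, a, b, n=20):
    """Shrink [a, b] around the minimum of a unimodal f using
    Fibonacci-number ratios; the final width shrinks in proportion to 1/F_n."""
    fib = [1, 1]
    while len(fib) <= n:
        fib.append(fib[-1] + fib[-2])
    x1 = a + (fib[n - 2] / fib[n]) * (b - a)
    x2 = a + (fib[n - 1] / fib[n]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):
        if f1 > f2:              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (fib[n - k - 1] / fib[n - k]) * (b - a)
            f2 = f(x2)
        else:                    # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (fib[n - k - 2] / fib[n - k]) * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)
```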

Univariate Search. A variant on the multivariate grid search is the univariate search, sometimes called cyclic search, which again has had a long history in the context of nuclear position and orbital exponent variation. This method is based on the idea that the individual variables refer to co-ordinate axes e_1 = [1, 0, 0, ..., 0]^T, etc., in the n-space, and we can thus perform successive one-dimensional searches along each of the axes. The algorithm is ... [Pg.39]
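The algorithm statement is truncated in this excerpt; a minimal sketch of the cyclic pattern it describes, with the one-dimensional minimizer left pluggable:

```python
import numpy as np

def cyclic_search(f, x0, line_min, n_cycles=10):
    """Univariate (cyclic) search: minimize f along each co-ordinate
    axis e_i in turn, using a one-dimensional minimizer line_min.

    line_min(g) must return the scalar t that minimizes g(t).
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(n_cycles):
        for i in range(n):
            e_i = np.zeros(n)
            e_i[i] = 1.0                             # i-th co-ordinate axis
            t = line_min(lambda t: f(x + t * e_i))   # 1-D search along e_i
            x = x + t * e_i
    return x
```

Here line_min can be any of the one-dimensional routines sketched in this entry, or a library call such as lambda g: scipy.optimize.minimize_scalar(g).x.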

In fact it is generally not even necessary to know A explicitly in order to construct a set of conjugate directions, nor is it generally necessary to know the gradients explicitly either. That the gradients are not necessary is easy to see, because the condition of equation (23) is simply the condition that the function is minimized along the line p_i from the point a_i, and that minimum may easily be found by an ordinary one-dimensional search. That equation (23) is equivalent to a linear minimum follows because the linear minimum condition is just that λ shall be chosen so that... [Pg.42]
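One way to realize this in practice is Powell's basic procedure (named here explicitly; the passage argues the idea more generally): build conjugate directions purely from successive line minimizations, with no gradients and no explicit A:

```python
import numpy as np

def powell_search(f, x0, line_min, n_cycles=10):
    """Gradient-free conjugate-direction search (Powell's basic procedure)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    dirs = list(np.eye(n))                  # start from the co-ordinate axes
    for _ in range(n_cycles):
        x_start = x.copy()
        for d in dirs:
            t = line_min(lambda t: f(x + t * d))   # line minimum along d
            x = x + t * d
        new_dir = x - x_start               # net move over the whole cycle
        norm = np.linalg.norm(new_dir)
        if norm == 0.0:
            break                           # no progress: stop
        new_dir /= norm
        dirs = dirs[1:] + [new_dir]         # replace the oldest direction
        t = line_min(lambda t: f(x + t * new_dir))
        x = x + t * new_dir
    return x
```

For an exact quadratic with exact line searches, the directions accumulated this way become mutually conjugate, which is the property the passage derives from equation (23).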

In the one-dimensional search methods there are two principal variants: some methods employ only first derivatives of the given function (the gradient methods), whereas others (Newton's method and its variants) require explicit knowledge of the second derivatives. The methods in this last category have so far found very limited use in quantum chemistry, so we shall refer to them only briefly at the end of this section and concentrate on the gradient methods. The oldest of these is the method of steepest descent. [Pg.43]
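A minimal steepest-descent loop (a sketch; the stopping tolerance is an assumption, and line_min follows the same pluggable convention as the earlier sketches):

```python
import numpy as np

def steepest_descent(f, grad_f, x0, line_min, tol=1e-6, max_iter=500):
    """Steepest descent: repeated one-dimensional searches along -grad f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:          # gradient small: converged
            break
        d = -g                                # steepest-descent direction
        t = line_min(lambda t: f(x + t * d))  # one-dimensional search
        x = x + t * d
    return x
```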

In the classic Newton method, the Newton direction is used to update each previous iterate by the formula x_{k+1} = x_k + p_k, until convergence. The reader may recognize the one-dimensional version of Newton's method for solving a nonlinear equation f(x) = 0: x_{k+1} = x_k - f(x_k)/f'(x_k). The analogous iteration process for minimizing f(x) is x_{k+1} = x_k - f'(x_k)/f''(x_k). Note that the one-dimensional search vector, -f'(x_k)/f''(x_k), is replaced by the Newton direction -H_k^{-1} g_k in the multivariate case. This direction is defined for nonsingular H_k. When x_0 is sufficiently close to a solution x*, quadratic convergence can be proven for Newton's method [3-6]. That is, a constant β exists such that... [Pg.36]
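A minimal sketch of the one-dimensional Newton iteration just quoted, applied (as an assumed example) to maximizing the quadratic from the exercise earlier in this entry by minimizing its negative:

```python
def newton_1d_min(f_prime, f_double_prime, x0, tol=1e-8, max_iter=50):
    """One-dimensional Newton iteration: x_{k+1} = x_k - f'(x_k)/f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f_prime(x) / f_double_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Minimizing -f for f(x) = 6.64 + 1.2x - x^2: (-f)'(x) = 2x - 1.2, (-f)'' = 2
x_star = newton_1d_min(lambda x: 2.0 * x - 1.2, lambda x: 2.0, 0.0)
print(x_star)   # 0.6 after one step, since -f is exactly quadratic
```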

Finding a PODS is a very easy task. Because the system has only two DOFs, the V = E manifolds are simply one-dimensional lines in configuration space, V(q1, q2) = E. Momentum is zero at those points. Finding the self-retracing p.o. amounts to a very easy one-dimensional search. Once a p.o. is found, a linear stability determination is enough to establish the PODS character of a particular p.o. These properties have been used many times in the literature, in a classical or semiclassical, even quantum, context [6, 39, 43-45]. The reader is referred to the rich literature for many actual examples. The series of articles by Gaspard and Rice is particularly detailed [46]. [Pg.232]

The methods just described do not work when there is more than one independent variable. There is certainly a need for techniques that can be extended to problems with many operating variables, most industrial systems being quite complicated. We shall now consider methods that reduce an optimization problem involving many variables to a series of one-dimensional searches. For simplicity we shall discuss optimization of an unknown function y of only two independent variables, x1 and x2, indicating later how to extend the techniques to more general problems where possible. [Pg.286]

