
Univariate search

Univariate Search. A variant on the multivariate grid search is the univariate search, sometimes called the cyclic search, which again has had a long history in the context of nuclear position and orbital exponent variation. This method is based on the idea that the individual variables refer to coordinate axes e1 = [1, 0, 0, ..., 0]^T, etc., in the n-dimensional space, so that we can perform successive one-dimensional searches along each of these axes in turn. The algorithm is ... [Pg.39]
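The displayed algorithm is not reproduced in this excerpt, but a minimal Python sketch of the cyclic procedure it describes might look as follows; the function names, the use of scipy's minimize_scalar for the one-dimensional step, and the convergence test are illustrative assumptions rather than details taken from the source.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def univariate_search(f, x0, max_cycles=50, tol=1e-8):
    """Cyclic (univariate) search: minimize f along each coordinate axis in turn."""
    x = np.asarray(x0, dtype=float)
    f_old = f(x)
    for _ in range(max_cycles):
        for i in range(x.size):
            e_i = np.zeros_like(x)
            e_i[i] = 1.0
            # One-dimensional minimization of f(x + t*e_i) over the step length t.
            t_min = minimize_scalar(lambda t: f(x + t * e_i)).x
            x = x + t_min * e_i
        f_new = f(x)
        if abs(f_old - f_new) < tol:   # stop when a full cycle no longer improves f
            break
        f_old = f_new
    return x

# Example: a quadratic with weakly coupled variables.
f = lambda x: (x[0] - 1.0)**2 + 2.0*(x[1] + 0.5)**2 + 0.5*x[0]*x[1]
print(univariate_search(f, [0.0, 0.0]))
```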

It is perhaps fairly obvious that if the variables of the problem are strongly dependent, then a minimum need not emerge from this process, and that ... [Pg.39]

The crucial step in the univariate search procedure is undoubtedly minimizing along the line ei. In situations where the gradient of the function along the line is not readily available, direct search procedures along the given line (i.e. one-dimensional direct search procedures) must be employed to find the minimum. Many such procedures are available (see, e.g. Cooper and Steinberg6 pp. 136-151), but one of the more efficient procedures seems to be quadratic interpolation. This may briefly be described as follows. [Pg.40]

If we wish to find the minimum of f(x) in the direction r from the point a, we construct F(λ) = f(a + λr) and evaluate it at three trial step lengths λ1, λ2 and λ3; a quadratic fitted through these three points then yields an estimate λ* of the minimizing step. [Pg.40]

This point may be accepted if F(λ*) ≤ F(λ2), or λ* can be used together with λ2 as initial points for a re-estimate of the minimum. Some care must be exercised in the use of this method to avoid rounding error in the cases where λ1 ≈ λ2 ≈ λ3 and/or F(λ1) ≈ F(λ2) ≈ F(λ3); the difficulties encountered in such situations are discussed in section 7.3 of ref. 8. [Pg.40]
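As a hedged illustration of this quadratic-interpolation line search, the sketch below fits a parabola through three trial step lengths (λ1, λ2, λ3) and takes its vertex as the estimate λ*; the re-estimation rule (keep the three points with the lowest F values) is one plausible reading of the procedure, not necessarily the exact scheme of ref. 8.

```python
import numpy as np

def quadratic_interpolation_min(F, a, r, l1, l2, l3, tol=1e-8, max_iter=50):
    """Estimate the step length minimizing F(a + l*r) by repeated quadratic interpolation."""
    a = np.asarray(a, dtype=float)
    r = np.asarray(r, dtype=float)
    phi = lambda l: F(a + l * r)
    for _ in range(max_iter):
        F1, F2, F3 = phi(l1), phi(l2), phi(l3)
        # Vertex of the parabola through (l1, F1), (l2, F2), (l3, F3).
        num = (l2**2 - l3**2) * F1 + (l3**2 - l1**2) * F2 + (l1**2 - l2**2) * F3
        den = (l2 - l3) * F1 + (l3 - l1) * F2 + (l1 - l2) * F3
        if abs(den) < 1e-30:            # nearly equal F values: the fit is unreliable
            break
        l_star = 0.5 * num / den
        if abs(l_star - l2) < tol:      # accept l_star once it stops moving
            return l_star
        # Re-estimate: keep the three points with the lowest function values.
        candidates = sorted((l1, l2, l3, l_star), key=phi)[:3]
        l1, l2, l3 = sorted(candidates)
    return l2
```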


A more sophisticated version of the sequential univariate search, the Fletcher-Powell method, is actually a derivative method in which elements of the gradient vector g and the Hessian matrix H are estimated numerically. [Pg.236]

More elaborate techniques have been published in the literature to obtain optimal or near-optimal values of the stepping parameter. Essentially, one performs a univariate search to determine the minimum of the objective function along the direction (Δk) chosen by the Gauss-Newton method. [Pg.52]
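For illustration only, a bounded univariate search over the stepping parameter mu along a Gauss-Newton direction could be sketched as below; S, k and dk stand for the objective function, the current parameter estimate and the computed direction, none of which are defined in the excerpt above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_step(S, k, dk, mu_max=1.0):
    """Univariate (bounded) search for the stepping parameter mu along direction dk."""
    res = minimize_scalar(lambda mu: S(np.asarray(k) + mu * np.asarray(dk)),
                          bounds=(0.0, mu_max), method="bounded")
    return res.x
```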

Figure 13.23 Optimization of reactor conversion and recycle impurity concentration using a univariate search.
Execution of a univariate search on two different quadratic functions. [Pg.185]

Experience has shown that conjugate directions are much more effective as search directions than arbitrarily chosen search directions, such as in univariate search, or... [Pg.186]


The experimental strategy used here is to perform a series of small experiments instead of a single comprehensive experiment. A univariate search was made in which only one variable was changed at a time. The information obtained in the earlier experiments performed during the univariate ... [Pg.196]

In the first step of the univariate search a series of experiments was performed in which base values were used for the initial hydrogen partial pressure, reaction time, and reaction temperature, and only the amount of catalyst used was varied. The amount of catalyst that yielded the best performance (i.e., maximum selectivity) and best satisfied practical constraints was selected. In the next step a series of experiments was performed in which the selected amount of catalyst was used, base values were used for temperature and time, and only the initial hydrogen partial pressure was varied. An initial hydrogen partial pressure was selected, as was done for the amount of catalyst in the first step. [Pg.197]

The dependence of selectivity (Se) on the reaction time and temperature was modeled using empirical expressions for desulfurization and hydrogen consumption kinetics. The same values selected for initial hydrogen partial pressure and amount of catalyst in the first two steps of the univariate search were used in determining these kinetic expressions. [Pg.197]

A univariate search does not necessarily lead to an optimum in a multivariable space; that is, convergence to an optimum is not guaranteed. For this reason a series of experiments was made to map the region close to the identified optimum conditions, to test whether a local optimum, at least, had been located. [Pg.197]

The complexity of the response surface is what makes the optimization of chromatographic selectivity stand out as a particular optimization problem, rather than as an example to which known optimization strategies from other fields can be readily applied. This is illustrated by the application of univariate optimization. In univariate optimization (or univariate search) methods the parameters of interest are optimized sequentially. An optimum is located by varying a given parameter while keeping all other parameters constant. In this way, an optimum value is obtained for that particular parameter. From then on, the optimum value is assigned to that parameter and another parameter is varied in order to establish its optimum value. [Pg.173]

In the actual pattern search method the exploratory moves are made in a way very similar to the univariate search method; however, instead of minimizing along the line, we proceed as follows ... [Pg.41]

It is perhaps useful to think of the pattern search method as an attempt to combine the certainty of the multivariate grid method with the ease of the univariate search method, in the sense that it seeks to avoid the enormous numbers of function evaluations inherent in the grid method, without getting involved in the (possibly fruitless and misleading) process of optimizing the variables separately. [Pg.41]

Figure 1.15. Search methods: (a) univariate search; (b) steepest descent.
If x1 and x2 are varied one at a time, then the method is known as a univariate search and is the same as carrying out successive line searches. If the step length is determined so as to find the minimum with respect to the variable searched, then the calculation steps toward the optimum, as shown in Figure 1.15a. This method is simple to implement, but can be very slow to converge. Other direct methods include pattern searches such as the factorial designs used in statistical design of experiments (see, for example, Montgomery, 2001), the EVOP method (Box, 1957) and the sequential simplex method (Spendley et al., 1962). [Pg.32]

Through a univariate search technique (Wilde, 1964) and simple iterations, the maximum of equation (7.10) can be obtained. Therefore, the optimization problem can always be reduced to an optimal control problem of the form ... [Pg.467]

As noted in the introduction, energy-only methods are generally much less efficient than gradient-based techniques. The simplex method [9] (not identical with the similarly named method used in linear programming) was used quite widely before the introduction of analytical energy gradients. The intuitively most obvious method is a sequential optimization of the variables (sequential univariate search). As the optimization of one variable affects the minimum of the others, the whole cycle has to be repeated after all variables have been optimized. A one-dimensional minimization is usually carried out by finding the ... [Pg.2333]

Note that the two equality constraints, Eqs. (18.10) and (18.11), permit two of the decision variables to be eliminated, leaving just one decision variable, say V. Consequently, the objective function, Eq. (18.9), can be minimized easily using a univariable search. If the problem statement is altered to place a lower bound on the beer concentration of 4%, Eq. (18.10) becomes an inequality constraint ... [Pg.625]
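Because Eqs. (18.9)-(18.11) are not reproduced here, the sketch below only illustrates the pattern: the two equality constraints are solved to express two of the decision variables in terms of the remaining one, V, and the reduced objective is then minimized by a bounded univariable search. The names objective and eliminate, and the bounds, are placeholders rather than part of the source problem.

```python
from scipy.optimize import minimize_scalar

def minimize_over_V(objective, eliminate, V_lo, V_hi):
    """objective(x1, x2, V) plays the role of Eq. (18.9);
    eliminate(V) -> (x1, x2) plays the role of solving Eqs. (18.10)-(18.11)."""
    reduced = lambda V: objective(*eliminate(V), V)   # objective as a function of V alone
    res = minimize_scalar(reduced, bounds=(V_lo, V_hi), method="bounded")
    return res.x, res.fun
```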

The sequential univariate search or axial iteration method changes one... [Pg.259]

Fig. 1. Steps in finding the minimum on a quadratic surface using the sequential univariate search.
In each step of the univariate search method, p - 1 parameters of the previous estimate vector remain fixed, while a minimum of the objective function is searched for by varying only the remaining parameter over a predetermined range. Starting from the initial parameter vector b0, in p steps all parameters are optimized in a fixed order, after which this process is repeated until each additional decrease of the objective function becomes negligible. [Pg.287]

Fig. 9.1 illustrates this for the case of two parameters. Through a univariate search, the... [Pg.287]

