
Optimization: univariate search

More elaborate techniques have been published in the literature to obtain optimal or near-optimal values of the stepping parameter. Essentially, one performs a univariate search to determine the minimum of the objective function along the direction Δk chosen by the Gauss-Newton method. [Pg.52]
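For illustration, a minimal Python sketch of such a stepping-parameter search is given below: the step along the Gauss-Newton direction is halved until the objective function decreases. The names ssq (sum-of-squares objective), k (current parameter vector), and dk (Gauss-Newton direction) are assumptions for this sketch, not identifiers from the cited text.

    def step_by_halving(ssq, k, dk, mu=1.0, max_halvings=20):
        """Crude univariate search for the stepping parameter mu:
        start from the full Gauss-Newton step and halve mu until the
        objective decreases.  k and dk are numpy arrays."""
        s0 = ssq(k)
        for _ in range(max_halvings):
            if ssq(k + mu * dk) < s0:
                return mu      # acceptable step found
            mu *= 0.5          # halve the step and try again
        return mu              # fall back to the smallest step tried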

Figure 13.23 Optimization of reactor conversion and recycle impurity concentration using a univariate search.
The complexity of the response surface is what makes the optimization of chromatographic selectivity stand out as a particular optimization problem, rather than as an example to which known optimization strategies from other fields can be readily applied. This is illustrated by the application of univariate optimization. In univariate optimization (or univariate search) methods the parameters of interest are optimized sequentially. An optimum is located by varying a given parameter while keeping all other parameters constant. In this way, an optimum value is obtained for that particular parameter. From this moment on, the optimum value is assigned to the parameter, and another parameter is varied in order to establish its optimum value. [Pg.173]
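A minimal Python sketch of this one-parameter-at-a-time procedure for two parameters follows. The names response, grid1, and grid2 are hypothetical stand-ins for a chromatographic response function and the predetermined ranges of the two parameters; they are not taken from the source.

    def univariate_grid_optimize(response, grid1, grid2):
        """One-at-a-time optimization of two parameters: scan parameter 1
        with parameter 2 held constant, lock in its best value, then scan
        parameter 2.  For interacting parameters the result need not be
        the true (multivariate) optimum."""
        p2 = grid2[0]                                   # arbitrary starting value
        p1 = max(grid1, key=lambda v: response(v, p2))  # vary parameter 1 only
        p2 = max(grid2, key=lambda v: response(p1, v))  # then vary parameter 2 only
        return p1, p2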

It is perhaps useful to think of the pattern search method as an attempt to combine the certainty of the multivariate grid method with the ease of the univariate search method, in the sense that it seeks to avoid the enormous numbers of function evaluations inherent in the grid method, without getting involved in the (possibly fruitless and misleading) process of optimizing the variables separately. [Pg.41]

Through a univariate search technique (Wilde, 1964) and simple iterations, the maximum of equation (7.10) can be obtained. Therefore, the optimization problem can always be reduced to an optimal control problem of the form ... [Pg.467]

As noted in the introduction, energy-only methods are generally much less efficient than gradient-based techniques. The simplex method [9] (not identical with the similarly named method used in linear programming) was used quite widely before the introduction of analytical energy gradients. The most intuitively obvious method is a sequential optimization of the variables (sequential univariate search). As the optimization of one variable affects the minimum of the others, the whole cycle has to be repeated after all variables have been optimized. A one-dimensional minimization is usually carried out by finding the... [Pg.2333]
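The excerpt breaks off here, but one standard way to carry out the one-dimensional minimization step is golden-section search, sketched below under the assumption that the function is unimodal on the bracketing interval [a, b]; this is offered as a common choice, not necessarily the one the source goes on to describe.

    import math

    def golden_section(f, a, b, tol=1e-6):
        """Minimize a unimodal one-dimensional function f on [a, b]
        by golden-section search."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0      # 1/phi ~ 0.618
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        while abs(b - a) > tol:
            if f(c) < f(d):                        # minimum lies in [a, d]
                b, d = d, c
                c = b - invphi * (b - a)
            else:                                  # minimum lies in [c, b]
                a, c = c, d
                d = a + invphi * (b - a)
        return 0.5 * (a + b)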

In each step of the univariate search method, p - 1 parameters of the previous estimate vector remain fixed, while a minimum of the objective function is sought by varying only the remaining parameter over a predetermined range. Starting from the initial parameter vector b0, all p parameters are optimized in a fixed order in p steps, after which this process is repeated until the additional decrease of the objective function becomes negligible. [Pg.287]
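A Python sketch of this cyclic procedure follows, with each one-dimensional minimization done by a dense grid scan over the predetermined range. The interfaces obj (objective function), b0 (initial parameter vector), and bounds (per-parameter ranges) are assumptions for the sketch.

    import numpy as np

    def univariate_search(obj, b0, bounds, tol=1e-8, max_cycles=100):
        """Cyclic univariate search: fix p - 1 parameters, minimize over
        the remaining one by a grid scan, cycle through all p parameters
        in a fixed order, and repeat until the decrease per cycle is
        negligible."""
        b = np.asarray(b0, dtype=float)
        f_old = obj(b)
        for _ in range(max_cycles):
            for i, (lo, hi) in enumerate(bounds):
                grid = np.linspace(lo, hi, 201)
                trial = b.copy()
                values = []
                for v in grid:
                    trial[i] = v
                    values.append(obj(trial))
                b[i] = grid[int(np.argmin(values))]  # lock in the best value
            f_new = obj(b)
            if f_old - f_new < tol:                  # negligible decrease: stop
                break
            f_old = f_new
        return b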

The advantage of the GA variable selection approach over the univariate approach discussed earlier is that it is a true search for an optimal multivariate regression solution. One disadvantage of the GA method is that one must enter several parameters before it... [Pg.315]

Optimizing a univariate function is rarely seen in pharmacokinetics; multivariate optimization is more the norm. For example, in pharmacokinetics one often wishes to identify many different rate constants and volume terms. A multivariate problem can be solved directly using direct search (Khorasheh, Ahmadi, and Gerayeli, 1999) or random search algorithms (Schrack and Borowski, 1972), both of which are basically brute-force algorithms that repeatedly evaluate the function at selected values under the... [Pg.97]
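For illustration, a bare-bones random search in the spirit of such brute-force algorithms might look as follows; the box bounds and the evaluation budget n_eval are assumptions, and the cited algorithms refine the sampling scheme considerably.

    import numpy as np

    def random_search(obj, bounds, n_eval=10_000, seed=0):
        """Brute-force random search: evaluate the objective at points
        drawn uniformly inside the box 'bounds' and keep the best."""
        rng = np.random.default_rng(seed)
        lo = np.array([lo_i for lo_i, _ in bounds])
        hi = np.array([hi_i for _, hi_i in bounds])
        best_x, best_f = None, np.inf
        for _ in range(n_eval):
            x = lo + rng.random(lo.size) * (hi - lo)  # uniform draw in the box
            f = obj(x)
            if f < best_f:
                best_x, best_f = x, f
        return best_x, best_f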

The line search procedure in step 3 of Algorithm [A1] is an approximate univariate minimization problem. It is typically performed via quadratic or cubic polynomial interpolation of the one-dimensional function φ(λ) = f(x_k + λp_k). For the polynomial interpolant of φ, ensure that the minimum of φ is located within the feasible region [x_k, x_k + λp_k]. Typically, the initial trial value for λ is one. Thus, the line search can be considered as a backtracking algorithm for λ in the interval (0, 1]. [Pg.1147]
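A sketch of a safeguarded backtracking line search with quadratic interpolation is given below. The Armijo constant alpha and the bracketing safeguards are conventional choices for such schemes, not details taken from Algorithm [A1]; x and p are numpy arrays, and g returns the gradient of f.

    def backtrack_quadratic(f, g, x, p, alpha=1e-4, lam_min=1e-10):
        """Backtracking line search for phi(lam) = f(x + lam*p):
        start from the full step lam = 1 and shrink lam within (0, 1]
        until a sufficient-decrease (Armijo) condition holds."""
        phi0 = f(x)
        dphi0 = g(x) @ p                  # directional derivative (< 0 for descent)
        lam = 1.0                         # initial trial value
        while lam > lam_min:
            phi = f(x + lam * p)
            if phi <= phi0 + alpha * lam * dphi0:        # sufficient decrease
                return lam
            # minimizer of the quadratic through phi0, dphi0 and phi(lam)
            lam_q = -dphi0 * lam**2 / (2.0 * (phi - phi0 - dphi0 * lam))
            lam = min(max(lam_q, 0.1 * lam), 0.5 * lam)  # safeguarded shrink
        return lam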

