
Line search methods

The gradient at the minimum point obtained from the line search will be perpendicular to the previous direction. Thus, when the line search method is used to locate the minimum along the gradient, then the next direction in the steepest descents algorithm will be orthogonal to the previous direction (i.e. g_k · s_(k-1) = 0)... [Pg.281]
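This orthogonality is easy to verify numerically. The following sketch (Python, not from the source) runs a few steepest-descent iterations on a hypothetical quadratic test function, using SciPy's one-dimensional minimizer as the line search; the printed dot product of each new gradient with the direction just searched is essentially zero.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical quadratic test function f(x) = 0.5 x^T A x.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])

    def f(x):
        return 0.5 * x @ A @ x

    def grad(x):
        return A @ x

    x = np.array([4.0, -3.0])
    for k in range(5):
        s = -grad(x)                                    # steepest-descent direction
        # exact line search: minimise f(x + alpha*s) over the scalar alpha
        alpha = minimize_scalar(lambda a: f(x + a * s)).x
        x = x + alpha * s
        # gradient at the line-search minimum is orthogonal to the direction searched
        print(k, grad(x) @ s)                           # ~0 up to the 1-D tolerance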

Difficulty 3 can be ameliorated by using suitable finite difference approximations as substitutes for derivatives. To overcome difficulty 4, two classes of methods exist to modify the pure Newton's method so that it is guaranteed to converge to a local minimum from an arbitrary starting point. The first of these, called trust region methods, minimizes the quadratic approximation, Equation (6.10), within an elliptical region whose size is adjusted so that the objective improves at each iteration; see Section 6.3.2. The second class, line search methods, modifies the pure Newton's method in two ways: (1) instead of taking a step size of one, a line search is used, and (2) if the Hessian matrix H(x_k) is not positive-definite, it is replaced by a positive-definite matrix that is close to H(x_k). This is motivated by the easily verified fact that, if H(x_k) is positive-definite, the Newton direction... [Pg.202]
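A minimal sketch (Python; the shift of the Hessian and the backtracking constants are my assumptions, not the source's algorithm) of a single line-search-modified Newton step combining both modifications:

    import numpy as np

    def damped_newton_step(f, grad, hess, x, tau=1e-4):
        """One modified Newton step: make the Hessian positive-definite if needed,
        then line-search along the resulting direction (hedged sketch)."""
        g, H = grad(x), hess(x)
        lam_min = np.linalg.eigvalsh(H).min()
        if lam_min <= 0.0:
            # simple stand-in for "a positive-definite matrix close to H"
            H = H + (abs(lam_min) + tau) * np.eye(len(x))
        p = -np.linalg.solve(H, g)                      # (modified) Newton direction
        alpha, f0 = 1.0, f(x)
        # backtracking line search instead of the pure step size of one
        while alpha > 1e-12 and f(x + alpha * p) > f0 + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        return x + alpha * p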

To minimize the image function we use a second-order method since in each iteration the Hessian is needed anyway to identify the mode to be inverted (the image mode). Line search methods cannot be used since it is impossible to calculate the image function itself when carrying out the line search. However, the trust region RSO minimization requires only gradient and Hessian information and may therefore be used. In the diagonal representation the step Eq. (5.8) becomes... [Pg.321]
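For concreteness, a hedged sketch (Python; the function name and the crude way the shift is reduced are my assumptions, not the source's Eq. (5.8)) of forming a level-shifted step in the diagonal (Hessian eigenvector) representation, of the kind used in trust-region restricted-step minimizations:

    import numpy as np

    def shifted_step(H, g, radius):
        """Step components -g_i / (eps_i - mu) in the Hessian eigenbasis; the
        shift mu is pushed below the lowest eigenvalue and reduced until the
        step fits inside the trust radius (a real code would bisect on mu)."""
        eps, V = np.linalg.eigh(H)          # eigenvalues eps_i, eigenvectors as columns
        g_diag = V.T @ g                    # gradient in the diagonal representation
        mu = min(0.0, eps.min()) - 1e-8
        for _ in range(200):
            step = V @ (-g_diag / (eps - mu))
            if np.linalg.norm(step) <= radius:
                return step
            mu -= abs(mu) + 1.0             # more negative shift -> shorter step
        return step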

The step based on the local approximation may have to be reduced by several orders of magnitude to satisfy the conditions for a new value to be accepted. This may be accomplished using a line search method to select a scaling factor α for the step to satisfy some criteria. If the line search is to be accomplished... [Pg.340]
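A hedged backtracking sketch (Python; the Armijo-type acceptance criterion and the constants are assumptions) of selecting such a scaling factor α:

    import numpy as np

    def select_scaling_factor(f, x, step, g, c=1e-4, rho=0.5, max_halvings=40):
        """Shrink alpha until the scaled step satisfies a sufficient-decrease
        (Armijo) criterion; alpha may end up orders of magnitude below 1."""
        alpha, f0, slope = 1.0, f(x), g @ step      # slope: directional derivative
        for _ in range(max_halvings):
            if f(x + alpha * step) <= f0 + c * alpha * slope:
                return alpha                        # accepted scaling factor
            alpha *= rho
        return alpha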

Clearly Equation (5.6) can only be used from the second step onwards, and so the first step in the conjugate gradients method is the same as in steepest descents (i.e. in the direction of the gradient). The line search method should ideally be used to locate the one-dimensional minimum in each direction, to ensure that each gradient is orthogonal to all previous gradients and that each direction is conjugate to all previous directions. However, an arbitrary step method is also possible. [Pg.266]
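A hedged sketch (Python) of this scheme; the coefficient shown is the Fletcher-Reeves choice, which may or may not be the exact form of the source's Equation (5.6):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def conjugate_gradients(f, grad, x, n_steps=20):
        """Conjugate-gradients minimisation with a line search in each direction."""
        g = grad(x)
        d = -g                                          # first step = steepest descent
        for _ in range(n_steps):
            alpha = minimize_scalar(lambda a: f(x + a * d)).x   # 1-D minimum
            x = x + alpha * d
            g_new = grad(x)
            beta = (g_new @ g_new) / (g @ g)            # Fletcher-Reeves coefficient
            d = -g_new + beta * d                       # direction conjugate to previous ones
            g = g_new
        return x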

Other line search methods that involve only function evaluations, that is, no derivative calculations, are the dichotomous search, the Fibonacci search (Kiefer 1957), and the quadratic fit line search. The Fibonacci search is the most efficient derivative-free line search technique in the sense that it requires the fewest function evaluations to attain a prescribed degree of accuracy. The quadratic fit method... [Pg.2548]
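As one illustration, a hedged sketch (Python) of the dichotomous search, which brackets the one-dimensional minimum using only pairs of function evaluations on either side of the interval midpoint:

    def dichotomous_search(phi, a, b, delta=1e-6, tol=1e-4):
        """Derivative-free line search: shrink the bracket [a, b] using two
        evaluations just left and right of its midpoint."""
        while (b - a) > tol:
            mid = 0.5 * (a + b)
            x1, x2 = mid - delta, mid + delta
            if phi(x1) < phi(x2):
                b = x2                 # minimum lies in [a, x2]
            else:
                a = x1                 # minimum lies in [x1, b]
        return 0.5 * (a + b)

    # usage on a hypothetical one-dimensional cross-section of the objective
    alpha_star = dichotomous_search(lambda a: (a - 0.3) ** 2, 0.0, 1.0)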

Nonlinear optimization is one of the crucial topics in the numerical treatment of chemical engineering problems. Numerical optimization deals with the problems of solving systems of nonlinear equations or minimizing nonlinear functionals (possibly subject to side conditions). In this article we present a new method for unconstrained minimization which is suitable for large-scale as well as badly conditioned problems. The method is based on a true multi-dimensional modeling of the objective function in each iteration step. The scheme allows more given or known information to be incorporated into the search than is possible with common line search methods. [Pg.183]

How does one construct an iteration scheme? A simple answer is: by modeling. To explain this we take a look at the well-known line search methods. These methods model the objective function f at the current iterate by a one-dimensional substitute, allowing only arguments along a given line. This leads in every step to the model... [Pg.184]
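In code, the one-dimensional substitute is just the objective restricted to a line; a minimal hedged illustration (Python, names assumed):

    def one_dimensional_model(f, x_k, p_k):
        """Line search model: f restricted to the line x_k + alpha * p_k,
        i.e. a function of the single scalar argument alpha."""
        return lambda alpha: f(x_k + alpha * p_k)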

Moreover, in an implementation of a line search method one has to specify the descent direction in the iteration (3). The choice... [Pg.185]

How is it possible to overcome the discussed shortcomings of line search methods and to embed more information about the function into the search for the local minimum? One answer is trust-region methods (or restricted step methods). They search in a restricted neighborhood of the current iterate and try to minimize a quadratic model of f. For example, in the double-dogleg implementation it is a restricted step search in a two-dimensional subspace spanned by the current gradient and the Newton step (and further reduced to a search along a non-smooth curve). For information on trust region methods see e.g. Dennis and Schnabel [2], pp. 129ff. [Pg.186]
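A hedged single-dogleg sketch (Python; the double-dogleg bias of the bend point described in the text is omitted) showing how the step is confined to the plane of the gradient and the Newton step and clipped to the trust radius:

    import numpy as np

    def dogleg_step(g, H, radius):
        """Trust-region dogleg: take the Newton step if it fits, otherwise walk
        along the steepest-descent (Cauchy) segment and then toward the Newton
        point, stopping where the path crosses the trust-region boundary."""
        p_newton = -np.linalg.solve(H, g)
        if np.linalg.norm(p_newton) <= radius:
            return p_newton
        p_cauchy = -(g @ g) / (g @ H @ g) * g           # model minimiser along -g
        if np.linalg.norm(p_cauchy) >= radius:
            return -radius * g / np.linalg.norm(g)      # boundary point along -g
        d = p_newton - p_cauchy
        # solve ||p_cauchy + t*d|| = radius for t in (0, 1]
        a, b, c = d @ d, 2.0 * (p_cauchy @ d), p_cauchy @ p_cauchy - radius**2
        t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return p_cauchy + t * d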

By far the most popular technique for optimization in the global region, at least in connection with updated Hessians, is the line search method. The idea behind these schemes is that although the Newton or quasi-Newton step may not be satisfactory and therefore must be discarded, it still contains useful information. In particular, we may use the step to provide a direction for a one-dimensional minimization of the function. We then carry out... [Pg.115]

The trust-region method is usually implemented with the exact Hessian. An approximate Hessian may also be used but, unless the initial Hessian has been calculated accurately and the topology of the surface has not changed much, an updated Hessian does not contain enough information about the function to make the trust region reliable in all directions. The trust-region method provides us with the possibility to carry out an unbiased search in all directions at each step. In general, an updated Hessian does not contain the information necessary for such a search. Updated Hessians are therefore more useful in connection with line search methods, as discussed above. [Pg.120]
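As a concrete example of an updated Hessian of the kind that pairs naturally with line search methods, a hedged sketch (Python) of the standard BFGS update formula:

    import numpy as np

    def bfgs_update(B, s, y):
        """BFGS update of an approximate Hessian B from a step s = x_new - x_old
        and the corresponding gradient change y = g_new - g_old."""
        Bs = B @ s
        return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)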

The shift parameter can be used to ensure that the optimization proceeds downhill even if the Hessian has negative eigenvalues. In addition, it can be chosen such that the step size is less than or equal to a predefined threshold. Popular methods using a shift parameter are the rational function optimization (RFO) [48] and Trust Radius (TR) methods [49, 50]. Finer control of the step size and direction can be achieved using an approximate line search method, which attempts to fit a polynomial function to the energies and gradients of the best previous points [51]. [Pg.36]
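A hedged sketch (Python; the standard cubic-interpolation formula, not necessarily the polynomial fit of Ref. [51]) of such an approximate line search step, using the energies and directional gradients at two previous points along the search direction:

    import numpy as np

    def cubic_line_minimum(a0, f0, g0, a1, f1, g1):
        """Minimiser of the cubic fitted to energies (f) and directional
        gradients (g) at two trial step lengths a0 and a1."""
        d1 = g0 + g1 - 3.0 * (f0 - f1) / (a0 - a1)
        d2 = np.sign(a1 - a0) * np.sqrt(d1 * d1 - g0 * g1)
        return a1 - (a1 - a0) * (g1 + d2 - d1) / (g1 - g0 + 2.0 * d2)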

To begin with, programs which do not involve any constraints (h and g) are discussed. For unconstrained optimization, the algorithms can be classified into two groups: the line search methods and the trust region methods. Both of them require an initial feasible solution (x_0) for the problem. They basically determine a direction (p_k) and a distance or step length (α_k) to move toward an improved solution. [Pg.261]
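The shared skeleton of both groups fits in a few lines; a hedged generic sketch (Python; the direction and step-length rules are supplied as placeholders):

    import numpy as np

    def descent_iteration(f, grad, x0, direction_rule, step_rule, n_iter=50):
        """Generic improvement loop: pick a direction p_k, a step length alpha_k,
        and move to x_{k+1} = x_k + alpha_k * p_k."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            p = direction_rule(grad, x)          # e.g. -grad(x) or a quasi-Newton direction
            alpha = step_rule(f, grad, x, p)     # e.g. a backtracking line search
            x = x + alpha * p
        return x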

The idea behind this method is to employ the dual problem (A.8) to find an approximate solution of the primal problem (A.1). The advantage of the dual is that the definition of the dual function is an unconstrained problem and, at the same time, the constraint for the dual problem itself is much simpler, in particular linear (μ ≥ 0). The dual function φ (program (A.7)) is found for given values of the vectors λ and μ using an unconstrained optimization method. Notice that these two vectors can be updated using a line search method at each iteration, bearing in mind that... [Pg.263]
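A hedged sketch (Python; the sign conventions, names, and the use of SciPy's general-purpose minimizer are my assumptions) of one such dual iteration: the dual function is evaluated by an unconstrained inner minimization, and the multipliers are then moved along the constraint residuals with a step length that would normally come from a line search on the dual function:

    import numpy as np
    from scipy.optimize import minimize

    def dual_ascent_step(lagrangian, h, g, x_guess, lam, mu, alpha):
        """One dual iteration: phi(lam, mu) = min_x L(x, lam, mu) is evaluated by
        an unconstrained minimisation; the multipliers are then updated along the
        (sub)gradient of phi, keeping the inequality multipliers mu >= 0."""
        x_star = minimize(lambda x: lagrangian(x, lam, mu), x_guess).x
        lam_new = lam + alpha * h(x_star)                 # equality-constraint residuals
        mu_new = np.maximum(0.0, mu + alpha * g(x_star))  # inequality residuals, projected
        return x_star, lam_new, mu_new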

