Minimization, line search

An accurate line search will require several function evaluations along each search direction. Often the minimization along the line is only carried out fairly crudely, or a... [Pg.317]

Another class of methods of unidimensional minimization locates a point x near x*, the value of the independent variable corresponding to the minimum of f(x), by extrapolation and interpolation using polynomial approximations as models of f(x). Both quadratic and cubic approximations have been proposed, using function values only and using both function and derivative values. In functions where f'(x) is continuous, these methods are much more efficient than other methods and are now widely used to do line searches within multivariable optimizers. [Pg.166]

In minimizing a function f(x) of several variables, the general procedure is to (a) calculate a search direction and (b) reduce the value of f(x) by taking one or more steps in that search direction. Chapter 6 describes in detail how to select search directions. Here we explain how to take steps in the search direction as a function of a single variable, the step length α. The process of choosing α is called a unidimensional search or line search. [Pg.173]

In an exact line search, we choose αk as the value of α that minimizes gk(α), so... [Pg.193]

In doing the line search we can minimize a quadratic approximation in a given search direction. This means that to compute the value of α in the relation x^(k+1) = x^k + αs^k we must minimize... [Pg.195]
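
As an illustration of this idea, here is a minimal Python sketch of a single quadratic-interpolation step: the one-dimensional function g(α) = f(x + αs) is modeled by a quadratic built from g(0), the directional derivative g'(0), and one trial value g(α1), and the minimizer of that model is taken as the estimate of α. The function names (f, grad) and the fallback when the model has no minimum are illustrative assumptions, not part of the excerpt.

```python
import numpy as np

def quadratic_interp_step(f, grad, x, s, alpha1=1.0):
    """One quadratic-interpolation estimate of the step length alpha along
    the search direction s, where g(alpha) = f(x + alpha*s)."""
    g0 = f(x)
    dg0 = np.dot(grad(x), s)            # directional derivative g'(0)
    g1 = f(x + alpha1 * s)              # one trial function value
    # fit q(alpha) = g0 + dg0*alpha + c*alpha**2 through the three data
    c = (g1 - g0 - dg0 * alpha1) / alpha1**2
    if c <= 0.0:                        # model has no minimum along s
        return alpha1
    return -dg0 / (2.0 * c)             # minimizer of the quadratic model

# example: one step along the steepest-descent direction of a quadratic
f = lambda x: x[0]**2 + 4.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
x0 = np.array([1.0, 1.0])
s = -grad(x0)
alpha = quadratic_interp_step(f, grad, x0, s)
print(alpha, f(x0 + alpha * s))
```

Because the example objective is itself quadratic, one interpolation step recovers the exact minimizer along the line; for general functions it only supplies a trial value of α.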

Difficulty 3 can be ameliorated by using finite difference approximations as substitutes for derivatives. To overcome difficulty 4, two classes of methods exist to modify the pure Newton's method so that it is guaranteed to converge to a local minimum from an arbitrary starting point. The first of these, called trust region methods, minimizes the quadratic approximation, Equation (6.10), within an elliptical region whose size is adjusted so that the objective improves at each iteration; see Section 6.3.2. The second class, line search methods, modifies the pure Newton's method in two ways: (1) instead of taking a step size of one, a line search is used, and (2) if the Hessian matrix H(x^k) is not positive-definite, it is replaced by a positive-definite matrix that is close to H(x^k). This is motivated by the easily verified fact that, if H(x^k) is positive-definite, the Newton direction... [Pg.202]
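
The excerpt does not say how the positive-definite replacement for H(x^k) is constructed; one common choice is to add a multiple of the identity until a Cholesky factorization succeeds. The sketch below illustrates that idea; the parameter beta and the rule for increasing tau are illustrative assumptions.

```python
import numpy as np

def modified_newton_direction(H, g, beta=1e-3):
    """Return a descent direction p solving (H + tau*I) p = -g, where tau >= 0
    is increased until H + tau*I is positive-definite (checked by Cholesky)."""
    n = len(g)
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = beta if tau == 0.0 else 10.0 * tau
    # solve L L^T p = -g by two triangular solves
    y = np.linalg.solve(L, -g)
    return np.linalg.solve(L.T, y)

# example: an indefinite Hessian is shifted until positive-definite
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([1.0, 1.0])
print(modified_newton_direction(H, g))
```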

If the BFGS algorithm is applied to a positive-definite quadratic function of n variables and the line search is exact, it will minimize the function in at most n iterations (Dennis and Schnabel, 1996, Chapter 9). This is also true for some other updating formulas. For nonquadratic functions, a good BFGS code usually requires more iterations than a comparable Newton implementation and may not be as accurate. Each BFGS iteration is generally faster, however, because second derivatives are not required and the system of linear equations (6.15) need not be solved. [Pg.208]
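
For reference, a minimal sketch of the standard BFGS update of the approximate Hessian, written with NumPy. The curvature-test threshold used to skip the update is an illustrative safeguard, not something specified in the excerpt.

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of an approximate Hessian H, given the step
    s = x_new - x_old and the gradient change y = g_new - g_old."""
    sy = float(s @ y)
    if sy <= 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
        return H                       # curvature condition violated: skip update
    Hs = H @ s
    return H - np.outer(Hs, Hs) / float(s @ Hs) + np.outer(y, y) / sy

# example
H = np.eye(2)
print(bfgs_update(H, np.array([1.0, 0.0]), np.array([0.5, 0.1])))
```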

This backtracking line search tries α = 1.0 first and accepts it if the sufficient decrease criterion (8.78) is met. This criterion is also used in unconstrained minimization, as discussed in Section 6.3.2. If α = 1.0 fails the test (8.78), a safe-... [Pg.304]
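
A minimal sketch of such a backtracking line search, assuming a sufficient-decrease (Armijo-type) test of the same general form as criterion (8.78); the constants c1 and rho and the failure cutoff are illustrative choices, not taken from the source.

```python
import numpy as np

def backtracking(f, f0, dg0, x, p, c1=1e-4, rho=0.5, alpha=1.0):
    """Backtracking line search: try alpha = 1 first, accept it if the
    sufficient-decrease test f(x + alpha*p) <= f(x) + c1*alpha*(grad.p)
    holds, otherwise shrink alpha by the factor rho and retry."""
    while f(x + alpha * p) > f0 + c1 * alpha * dg0:
        alpha *= rho
        if alpha < 1e-12:
            raise RuntimeError("no acceptable step; p may not be a descent direction")
    return alpha

# example with the Rosenbrock function at a point far from its minimum
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
x = np.array([-1.2, 1.0])
p = -grad(x)                                  # steepest-descent direction
print(backtracking(f, f(x), grad(x) @ p, x, p))
```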

Therefore, the shelf life is the root smaller than 28.90. A simple and practical tool to compute the roots of Equation (12) is perhaps to solve the following equivalent problem: find the value that minimizes the absolute value of f(·). This root is obtained by using the quasi-Newton line search (QNLS) algorithm [13]. The computer program requires an initial point and we recommend using the value... [Pg.603]

There are other Hessian updates, but for minimizations the BFGS update is the most successful. Hessian update techniques are usually combined with line search (vide infra) and the resulting minimization algorithms are called quasi-Newton methods. In saddle point optimizations we must allow the approximate Hessian to become indefinite, and the PSB update is therefore more appropriate. [Pg.309]

Quadratic convergence means that eventually the number of correct figures in xc doubles at each step, clearly a desirable property. Close to x*, Newton's method Eq. (3.9) shows quadratic convergence, while quasi-Newton methods Eq. (3.8) show superlinear convergence. The RF step Eq. (3.20) converges quadratically when the exact Hessian is used. Steepest descent with exact line search converges linearly for minimization. [Pg.310]

Global strategies for minimization are needed whenever the current estimate of the minimizer is so far from x* that the local model is not a good approximation to f(x) in the neighborhood of x. Three methods are considered in this section: the quadratic model with line search, trust region (restricted second-order) minimization, and rational function (augmented Hessian) minimization. [Pg.311]

In the global region the Newton or quasi-Newton step may not be satisfactory. It may, for example, increase rather than decrease the function to be minimized. Although the step must then be rejected, we may still use it to provide a direction for a one-dimensional minimization of the function. We then carry out a search along the Newton step until an acceptable reduction in the function is obtained, and the result of this line search becomes our next step. [Pg.311]

Line searches are often used in connection with Hessian update formulas and provide a relatively stable and efficient method for minimizations. However, line searches are not always successful. For example, if the Hessian is indefinite there is no natural way to choose the descent direction. We may then have to revert to steepest descent although this step makes no use of the information provided by the Hessian. It may also be... [Pg.312]

To minimize the image function we use a second-order method since in each iteration the Hessian is needed anyway to identify the mode to be inverted (the image mode). Line search methods cannot be used since it is impossible to calculate the image function itself when carrying out the line search. However, the trust region RSO minimization requires only gradient and Hessian information and may therefore be used. In the diagonal representation the step Eq. (5.8) becomes... [Pg.321]

For such applications of classical optimization theory, the data on energy and gradients are so computationally expensive that only the most efficient optimization methods can be considered, no matter how elaborate. The number of quantum chemical wave function calculations must absolutely be minimized for overall efficiency. The computational cost of an update algorithm is always negligible in this context. Data from successive iterative steps should be saved, then used to reduce the total number of steps. Any algorithm dependent on line searches in the parameter hyperspace should be avoided. [Pg.30]

The fundamental structure of local iterative techniques for solving unconstrained minimization problems is simple. A starting point is chosen, a direction of movement is prescribed according to some algorithm, and a line search... [Pg.20]

A line search consists of an approximate one-dimensional minimization of the objective function along the computed direction p. This produces an acceptable step λ and a new iterate xk + λp. Function and gradient evaluations of the objective function are required in each line search iteration. In contrast, the trust region strategy approximately minimizes a local quadratic model of the function using current Hessian information. An optimal step that lies within... [Pg.21]
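
The overall structure described here can be sketched as a short driver loop. The callables direction and line_search below are placeholders for whatever direction-finding rule and one-dimensional search a particular method supplies; the names and the stopping test are illustrative assumptions.

```python
import numpy as np

def minimize_local(f, grad, x0, direction, line_search, gtol=1e-6, max_iter=200):
    """Skeleton of a local minimizer: choose a direction, take an acceptable
    step along it by a one-dimensional (line) search, and repeat."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < gtol:
            break
        p = direction(x, g)                 # e.g. -g, or a (quasi-)Newton step
        lam = line_search(f, grad, x, p)    # approximate minimization along p
        x = x + lam * p
    return x
```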

In general, there has been no clear evidence for superiority of one approach over another. Indeed, both techniques are incorporated into minimization software with approximately equal success. Later we sketch the line search procedure, which we find easier to grasp and simpler to implement in practice. For further perspective and algorithmic details, the reader is referred to the recent review of Dennis and Schnabel and references cited therein.54... [Pg.22]

The line search is essentially an approximate one-dimensional minimization problem. It is usually performed by safeguarded polynomial interpolation.5,6,54-56 That is, in a typical line step iteration, cubic interpolation is performed in a region of λ that ensures that the minimum of f along p has been bracketed. The minimum of that polynomial then provides a new candidate for λ. If the search directions are properly scaled, the initial trial point λt = 1 produces a first reasonable trial move from xk. A simple illustration of such a first line search step is shown in Figure 9. The minimized one-dimensional function at the current point xk is defined by f(λ) = f(xk + λp). The vectors corresponding to different values of λ are set by x(λ) = xk + λp. ... [Pg.22]
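
A sketch of the cubic-interpolation step often used inside such a safeguarded search: given f and f' at two values of λ that bracket the minimum, the minimizer of the interpolating (Hermite) cubic gives the next trial λ. The specific safeguarding margins below are illustrative, not taken from the cited references.

```python
import numpy as np

def cubic_interp_minimizer(a, fa, dfa, b, fb, dfb):
    """Minimizer of the Hermite cubic that matches f and f' at lambda = a
    and lambda = b; used as the interpolation step of a line search."""
    d1 = dfa + dfb - 3.0 * (fa - fb) / (a - b)
    disc = d1 * d1 - dfa * dfb
    if disc < 0.0:
        return 0.5 * (a + b)                # no real interior minimizer: bisect
    d2 = np.sign(b - a) * np.sqrt(disc)
    lam = b - (b - a) * (dfb + d2 - d1) / (dfb - dfa + 2.0 * d2)
    # safeguard: keep the new trial strictly inside the bracketing interval
    lo, hi = min(a, b), max(a, b)
    margin = 0.1 * (hi - lo)
    return min(max(lam, lo + margin), hi - margin)

# example: for f(lam) = (lam - 2)^2 the interpolation recovers lam = 2 exactly
print(cubic_interp_minimizer(0.0, 4.0, -4.0, 3.0, 1.0, 2.0))
```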

The idea in these tests is to ensure sufficient decrease in the objective function for the new point relative to the step taken. These conditions balance the work in the line search procedure with the overall progress realized in the minimization method. The conditions often used in optimization algorithms are derived from the Armijo and Goldstein criterion.6 They require that the inequalities... [Pg.25]
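
Since the excerpt breaks off before stating the inequalities, the sketch below shows the commonly used Armijo-Goldstein form of the test: an upper bound enforcing sufficient decrease and a lower bound ruling out excessively short steps. The constant c and the exact inequalities in the source may differ.

```python
def goldstein_test(f0, dg0, f_trial, alpha, c=0.25):
    """Armijo-Goldstein test of a trial step length alpha.
    f0 and dg0 are f and its directional derivative at alpha = 0 (dg0 < 0);
    f_trial is f at the trial point x + alpha*p."""
    sufficient_decrease = f_trial <= f0 + c * alpha * dg0          # upper bound
    not_too_short = f_trial >= f0 + (1.0 - c) * alpha * dg0        # lower bound
    return sufficient_decrease and not_too_short
```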

Difficulties of this sort can be avoided by minimizing S(θ) of Eq. (6.3-7) over a trust region of θ in each iteration. A line search can be used to adjust the correction vector Δθ and the trust-region dimensions for the next iteration. Such a method is outlined below and used in GREGPLUS. [Pg.102]

The strategy described in Section 6.4 for constrained minimization of the sum of squares S(θ) is readily adapted to the multiresponse objective function S(θ) of Eq. (7.2-16). The quadratic programming subroutine GRQP and the line search subroutine GRS2 are used, with the following changes ... [Pg.152]

The iteration process (5.14) together with the linear line search described by formula (5.21) gives us a numerical scheme for the steepest descent method for misfit functional minimization. Thus, this algorithm for the steepest descent method can be summarized as follows ... [Pg.130]

Solution of this minimization problem gives the following best estimate for the step length using a linear line search ... [Pg.148]
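
For a linear least-squares misfit phi(m) = ||A m - d||^2, the exact (linear) line search along the gradient direction l = A^T (A m - d) gives the step k = ||l||^2 / ||A l||^2, a standard result obtained by minimizing the residual norm along the line. The sketch below implements steepest descent with that step; the symbols A, d, m and the stopping tolerance are illustrative and may differ from the book's notation and formula (5.21).

```python
import numpy as np

def steepest_descent_linear(A, d, m0, n_iter=200, tol=1e-10):
    """Steepest descent for the misfit phi(m) = ||A m - d||^2, with the step
    length from the exact (linear) line search: k = ||l||^2 / ||A l||^2,
    where l = A^T (A m - d) is the gradient direction."""
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = A @ m - d                          # residual
        l = A.T @ r                            # gradient (steepest-ascent) direction
        if np.linalg.norm(l) < tol:
            break
        k = (l @ l) / float(np.linalg.norm(A @ l) ** 2)
        m = m - k * l                          # step against the gradient
    return m

# small example: an overdetermined linear system
A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
d = np.array([1.0, 2.0, 0.5])
print(steepest_descent_linear(A, d, np.zeros(2)))
```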

