
Nonderivative methods

Minimization methods that incorporate only function values generally involve some systematic method to search the conformational space. In coordinate descent methods, the search directions are the standard basis vectors. A sweep through these n search vectors produces a sequential modification of one function variable at a time. Through repeated sweeping of the n-dimensional space, a local minimum might ultimately be found. Unfortunately, this strategy is inefficient and not reliable.3,4 [Pg.29]
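For concreteness, here is a minimal Python sketch of cyclic coordinate descent: the search cycles through the standard basis vectors, keeps any step that lowers the function, and halves the step size whenever a full sweep fails. The quadratic test function and the step-halving schedule are illustrative assumptions, not from the source.

```python
import numpy as np

def coordinate_descent(f, x0, step=0.5, shrink=0.5, max_sweeps=200, tol=1e-8):
    """Cyclic coordinate search: try a step along each standard basis
    vector in turn, keep any step that lowers f, and halve the step
    size whenever a full sweep yields no improvement."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_sweeps):
        improved = False
        for i in range(x.size):
            for s in (step, -step):            # +/- direction along e_i
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if not improved:
            step *= shrink                     # refine the search grid
            if step < tol:
                break
    return x

# Illustrative test function (not from the source): a coupled quadratic
f = lambda x: x[0]**2 + 10.0 * x[1]**2 + x[0] * x[1]
print(coordinate_descent(f, [3.0, -2.0]))      # converges near (0, 0)
```

Note how slowly the sweeps progress when the variables are coupled, which is the inefficiency the passage above describes.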

Nonderivative minimization methods are generally easy to implement and avoid derivative computations, but their realized convergence properties are rather poor. They may work well in special cases when the function is quite random in character or the variables are essentially uncorrelated. In general, the computational cost, dominated by the number of function evaluations, can be excessively high for functions of many variables and can far outweigh the benefit of avoiding derivative calculations. [Pg.29]

If obtaining the analytic derivatives is out of the question, viable alternatives remain. The gradient can be approximated by finite differences of function values, such as the forward difference

∂f/∂x_i ≈ [f(x + h e_i) − f(x)] / h,

where e_i is the ith unit vector and h is a small increment. [Pg.29]
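A minimal sketch of this forward-difference approximation, which costs one extra function evaluation per variable; the test function and the choice of h are illustrative assumptions.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: g_i ~ (f(x + h*e_i) - f(x)) / h,
    one extra function evaluation per variable."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        xh = x.copy()
        xh[i] += h          # perturb only the i-th variable
        g[i] = (f(xh) - f0) / h
    return g

f = lambda x: x[0]**2 + 10.0 * x[1]**2
print(fd_gradient(f, [1.0, 2.0]))   # exact gradient is [2, 40]
```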

Despite these drawbacks of nonderivative methods, their ease of application has made them the methods of choice for several potential energy applications.27-62 [Pg.29]


There are two basic types of unconstrained optimization algorithms: (1) those requiring function derivatives and (2) those that do not. The nonderivative methods are of interest in optimization applications because these methods can be readily adapted to the case in which experiments are carried out directly on the process. In such cases, an actual process measurement (such as yield) can be the objective function, and no mathematical model for the process is required. Methods that do not require derivatives are called direct methods and include the sequential simplex (Nelder-Mead) method and Powell's method. The sequential simplex method is quite satisfactory for optimization with two or three independent variables, is simple to understand, and is fairly easy to execute. Powell's method is more efficient than the simplex method and is based on the concept of conjugate search directions. [Pg.744]
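Both direct methods are available in standard libraries. The sketch below runs SciPy's Nelder-Mead and Powell implementations on a hypothetical "process yield" objective; the quadratic yield surface is an invented stand-in for an actual process measurement.

```python
from scipy.optimize import minimize

# Hypothetical stand-in for a measured process yield as a function of
# two operating variables (T, P); no model equations are assumed.
def neg_yield(x):
    T, P = x
    return -(50.0 - (T - 3.0)**2 - 2.0 * (P - 1.5)**2)  # minimize -yield

x0 = [1.0, 1.0]
res_simplex = minimize(neg_yield, x0, method='Nelder-Mead')
res_powell = minimize(neg_yield, x0, method='Powell')
print(res_simplex.x, res_simplex.nfev)   # optimum near (3.0, 1.5)
print(res_powell.x, res_powell.nfev)     # typically fewer evaluations
```

Comparing the nfev counts illustrates the efficiency claim: Powell's conjugate-direction scheme usually needs fewer function evaluations than the simplex search on smooth objectives.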

The optimization can be carried out by several methods of linear and nonlinear regression. The mathematical method must be chosen to suit the applied objective function. The most widely applied methods of nonlinear regression fall into two categories: those that use partial derivatives of the objective function with respect to the model parameters and those that do not. The most widely employed nonderivative methods are zero order, such as direct search and the Simplex method (Himmelblau, 1972). The most widely used derivative methods are first order, such as the method of indirect search, the Gauss-Seidel or Newton method, the gradient method, and the Marquardt method. [Pg.212]
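As an illustration of the Marquardt (Levenberg-Marquardt) method in this regression setting, the sketch below fits a hypothetical first-order kinetic model to synthetic data using SciPy's curve_fit, which applies the Levenberg-Marquardt algorithm when method='lm'. The model, the true parameter values, and the noise level are all invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical first-order kinetic model: c(t) = c0 * exp(-k * t)
def model(t, c0, k):
    return c0 * np.exp(-k * t)

# Synthetic "measurements": true parameters (2.0, 0.35) plus noise
t = np.linspace(0.0, 10.0, 20)
rng = np.random.default_rng(0)
c_obs = model(t, 2.0, 0.35) + 0.02 * rng.standard_normal(t.size)

# method='lm' selects the Marquardt (Levenberg-Marquardt) algorithm
params, cov = curve_fit(model, t, c_obs, p0=[1.0, 0.1], method='lm')
print(params)   # estimates of (c0, k), close to (2.0, 0.35)
```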

Nonderivative methods compare the value of the function with changes in the decision variable. Methods include polynomial approximation, Fibonacci search, and golden section. [Pg.1345]
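A minimal golden-section sketch for a unimodal single-variable function; the test function, interval, and tolerance are assumptions for illustration.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for the minimum of a unimodal f on [a, b]:
    shrink the bracket by the inverse golden ratio each iteration."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0     # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                        # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

print(golden_section(lambda x: (x - 2.0)**2 + 1.0, 0.0, 5.0))  # ~2.0
```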

Nonderivative methods include random search, grid search, simplex search, and conjugate directions (Powell's method). The nonderivative methods use various patterns to generate new test points for the decision variables and then compare the new objective function value against previous values. A subsequent test point is then generated, based either on the immediate comparison or on the previous history of test points. [Pg.1345]
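A minimal sketch of the simplest such pattern, pure random search within bounds, where each new test point is compared against the best value seen so far; the bounds, sample count, and test function are illustrative.

```python
import numpy as np

def random_search(f, lo, hi, n=5000, seed=0):
    """Pure random search: sample test points uniformly within bounds
    and keep the best objective value found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best_x, best_f = None, np.inf
    for _ in range(n):
        x = lo + rng.random(lo.size) * (hi - lo)  # new test point
        fx = f(x)
        if fx < best_f:                           # compare to history
            best_x, best_f = x, fx
    return best_x, best_f

f = lambda x: (x[0] - 1.0)**2 + (x[1] + 0.5)**2
print(random_search(f, [-5, -5], [5, 5]))         # near (1.0, -0.5)
```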

A couple of precautions are in order. Systems of nonlinear equations may have many solutions. Depending on the initial guess, x⁰, the Newton-Raphson method may converge to different solutions. In that case, it is wise to make the best initial guesses possible and use physical reasoning in interpreting the solution. Also, the Jacobian matrix may become singular as the solution is approached. If this occurs, solution by the Newton-Raphson technique may be impossible, and other, nonderivative methods should be used. [Pg.84]
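A minimal Newton-Raphson sketch for a system F(x) = 0 that guards against a numerically singular Jacobian; the 2x2 test system, the starting guess, and the determinant threshold are illustrative assumptions.

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-10, maxit=50):
    """Newton-Raphson for F(x) = 0: solve J(x) dx = -F(x) and update.
    Raises if the Jacobian is (numerically) singular, in which case a
    nonderivative method should be tried instead."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Jx = J(x)
        if abs(np.linalg.det(Jx)) < 1e-14:   # crude singularity check
            raise np.linalg.LinAlgError(f"Jacobian singular near x = {x}")
        dx = np.linalg.solve(Jx, -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Hypothetical 2x2 system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
print(newton_raphson(F, J, [2.0, 0.5]))   # one of several solutions
```

A different initial guess, e.g. [0.5, 2.0], converges to another root of the same system, which is the multiplicity caution raised above.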

After a brief discussion on the structure of descent methods, we highlight general concepts in the three categories of local methods for unconstrained nonlinear functions: nonderivative, first-derivative (or gradient), and second-derivative methods. [Pg.20]

In this volume, several key concepts used by practicing computational chemists are brought into focus. The first chapter by Tamar Schlick is dedicated to the mathematics of optimization. After some mathematical preliminaries, approaches to large-scale optimization are described. The basic descent structure of local methods is highlighted, and then nonderivative, gradient, and Newton methods are explained. [Pg.279]

For multivariable, nonlinear, continuous decision variable problems, the choice of nonderivative or derivative methods (to determine the search direction) depends on the availability of derivatives of the objective function with respect to the decision variables. [Pg.1345]

Nonderivative search methods for multivariable problems handle simple bounds by testing candidate points to ensure that they are feasible. [Pg.1347]

