
Unconstrained minimization methods

So far we have considered the more usual orthogonal-orbital-type wavefunctions, in which the constraints are those of orbital orthogonality. However, for wavefunctions in which orbital orthogonality is not required (or for more general wavefunctions) the above discussion need not apply, since in these cases it is possible to use an unconstrained minimization method directly on the functional... [Pg.53]

Residual minimization method (RMM-DIIS). Wood and Zunger [27] proposed to minimize the norm of the residual vector instead of the Rayleigh quotient. This is an unconstrained minimization condition. Each minimization step starts with the evaluation of the preconditioned residual vector K for the approximate eigenstate... [Pg.72]
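To make the residual-minimization idea concrete, the sketch below drives an approximate eigenvector of a symmetric matrix by repeatedly reducing the norm of the residual (H − ρI)x along an (optionally preconditioned) residual direction. This is a toy dense-matrix analogue, not the plane-wave RMM-DIIS implementation; the random test matrix and the simple least-squares step length are assumptions for illustration.

```python
import numpy as np

def residual_minimization_step(H, x, K=None):
    """One step in the spirit of RMM-DIIS: reduce ||(H - rho*I) x|| for an
    approximate eigenvector x. K is an optional preconditioner applied to
    the residual. Toy dense-matrix sketch, not a plane-wave implementation."""
    x = x / np.linalg.norm(x)
    rho = x @ H @ x                     # Rayleigh quotient of the current state
    r = H @ x - rho * x                 # residual vector
    d = K @ r if K is not None else r   # (preconditioned) residual = search direction
    # Minimize ||(H - rho*I)(x + t*d)|| over the step length t: least squares in t
    a = H @ d - rho * d
    t = -(r @ a) / (a @ a)
    return x + t * d, np.linalg.norm(r)

# Example: drive a random vector toward an eigenvector of a symmetric matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2
x = rng.standard_normal(50)
for _ in range(200):
    x, rnorm = residual_minimization_step(H, x)
print("final residual norm:", rnorm)
```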

Banga et al. [in State of the Art in Global Optimization, C. Floudas and P. Pardalos (eds.), Kluwer, Dordrecht, p. 563 (1996)]. All these methods require only objective function values for unconstrained minimization. Associated with these methods are numerous studies on a wide range of process problems. Moreover, many of these methods include heuristics that prevent premature termination (e.g., directional flexibility in the complex search as well as random restarts and direction generation). To illustrate these methods, Fig. 3-58 shows the performance of a pattern search method and of a random search method on an unconstrained problem. [Pg.65]
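A minimal compass (pattern) search, sketched below, conveys how these direct methods proceed on objective function values alone. The poll set, contraction factor, and the Rosenbrock test function are illustrative choices, not the specific algorithms behind Fig. 3-58.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10_000):
    """Derivative-free pattern (compass) search: poll the 2n coordinate
    directions; move on improvement, otherwise shrink the step.
    Illustrative sketch; production codes add richer poll sets and restarts."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # +/- unit directions
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5    # no poll point improved: contract the pattern
    return x, fx

rosenbrock = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
x_opt, f_opt = compass_search(rosenbrock, [-1.2, 1.0])
print(x_opt, f_opt)
```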

Sargent, R.W.H., and D.J. Sebastian, "Numerical Experience with Algorithms for Unconstrained Minimization", in F.A. Lootsma (Ed.), "Numerical Methods for Nonlinear Optimization", Academic Press, 1972, pp. 45-68. [Pg.53]

"Parameters in a Class of Quasi-Newton Methods for Unconstrained Minimization", J. Inst. Maths. Applics., 1978, 21, 285-291. [Pg.54]

Newton's method can be applied to the unconstrained minimization problem as well: to minimize g(x), take f(x) = g'(x) and use Newton's method to search for a zero of f(x). Each step of the method can be interpreted as constructing a quadratic approximation of g(x) and stepping directly to the minimum of this approximation. [Pg.2531]
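A one-dimensional sketch of this idea: apply Newton's root-finding iteration to f(x) = g'(x). The test function below is an arbitrary example chosen for illustration.

```python
def newton_minimize_1d(gprime, gsecond, x0, tol=1e-10, max_iter=50):
    """Minimize g by finding a zero of f = g' with Newton's method.
    Each iterate x - g'(x)/g''(x) is exactly the minimizer of the local
    quadratic model of g at x."""
    x = x0
    for _ in range(max_iter):
        step = gprime(x) / gsecond(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: g(x) = x^4 - 3x^2 + x, so g'(x) = 4x^3 - 6x + 1, g''(x) = 12x^2 - 6
xmin = newton_minimize_1d(lambda x: 4*x**3 - 6*x + 1,
                          lambda x: 12*x**2 - 6,
                          x0=1.0)
print(xmin)
```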

Fiacco, A. V., and McCormick, G. P. (1964), "The Sequential Unconstrained Minimization Technique for Nonlinear Programming, a Primal-Dual Method", Management Science, Vol. 10, pp. 360-366. [Pg.2565]

This chapter deals with the problem of finding the unconstrained minimum of a function P(x) that involves the variables x ∈ R^nv with nv ≥ 1. Section 3.4 showed that conjugate direction methods are useful in solving large-scale unconstrained minimization problems. The version that uses the Polak-Ribière and Fletcher-Reeves methods sequentially is often particularly effective. [Pg.153]
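The sketch below shows a nonlinear conjugate-gradient iteration with the Polak-Ribière beta (here with the common nonnegativity restart, an assumption on our part) and a simple Armijo backtracking line search; the hybrid Polak-Ribière/Fletcher-Reeves sequencing mentioned above is not reproduced.

```python
import numpy as np

def cg_minimize(f, grad, x0, max_iter=1000, tol=1e-6):
    """Nonlinear conjugate gradients with the Polak-Ribiere beta (PR+,
    restarted whenever beta would go negative) and an Armijo backtracking
    line search. Sketch only."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                  # safeguard: ensure a descent direction
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere (PR+)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
grad = lambda z: np.array([-2*(1 - z[0]) - 400*z[0]*(z[1] - z[0]**2),
                           200*(z[1] - z[0]**2)])
print(cg_minimize(f, grad, [-1.2, 1.0]))
```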

Once we select a merit function, it can be minimized using the unconstrained optimization methods (see Chapter 3). The gradient method is just one of the methods available. The function changes most rapidly in the direction of the gradient. With reference to the merit function (7.12), the gradient in x is given by... [Pg.244]
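As a concrete example of the gradient method on a merit function, the sketch below assumes a least-squares merit m(x) = ½‖F(x)‖² for solving F(x) = 0, whose gradient is Jᵀ F; whether (7.12) has exactly this form is an assumption, though it is the common choice.

```python
import numpy as np

def steepest_descent(F, J, x0, max_iter=2000, tol=1e-10):
    """Minimize the merit function m(x) = 0.5*||F(x)||^2 by moving against
    its gradient grad m = J(x)^T F(x), with Armijo backtracking.
    Assumes a least-squares merit of this form (a common choice)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        g = J(x).T @ r              # gradient of the merit function
        if np.linalg.norm(g) < tol:
            break
        m = 0.5 * r @ r
        t = 1.0
        while 0.5 * np.sum(F(x - t * g)**2) > m - 1e-4 * t * (g @ g):
            t *= 0.5                # backtrack until sufficient decrease
        x = x - t * g
    return x

# Example: solve F(x) = 0 for a small nonlinear system
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, -1.0]])
print(steepest_descent(F, J, [2.0, 2.0]))   # converges to (1, 1)
```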

Newton and Leibnitz. The foundations of the calculus of variations were laid by Bernoulli, Euler, Lagrange, and Weierstrass. The optimization of constrained problems, which involves the addition of unknown multipliers, became known by the name of its inventor, Lagrange. Cauchy made the first application of the steepest descent method to solve unconstrained minimization problems. In spite of these early contributions, very little progress was made until the middle of the 20th century, when high-speed digital computers made the implementation of optimization procedures possible and stimulated further research in new methods. [Pg.425]

Nonlinear optimization is one of the crucial topics in the numerical treatment of chemical engineering problems. Numerical optimization deals with the problems of solving systems of nonlinear equations or minimizing nonlinear functionals (with respect to side conditions). In this article we present a new method for unconstrained minimization which is suitable for large-scale as well as ill-conditioned problems. The method is based on a true multi-dimensional modeling of the objective function in each iteration step. The scheme allows more given or known information to be incorporated into the search than common line-search methods do. [Pg.183]

This article is structured as follows: we first introduce the problem of unconstrained minimization and an abstract algorithmic description of its solution strategy. Then we give a short review of the classical approaches and briefly discuss their properties, in particular their shortcomings. The third part introduces the subspace search method. We discuss the underlying mathematics and devise an algorithmic representation. Finally, we report on the numerical performance of the outlined algorithms and comment on the solution of systems of nonlinear equations. [Pg.183]

Dennis, J.E., and Schnabel, R.B., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, 1983. [Pg.189]

IMSL Lib. IMSL, Inc., Sugar Land, TX, http://www.vni.com/adt.dir/imslinfo.html. Many routines for constrained and unconstrained minimization (nonsmooth, no derivatives, quadratic and linear programming, least-squares, nonlinear, etc.), including a nonlinear CG method of Powell (modified PR version with restarts). [Pg.1153]

For the optimization of the energy (10.7.69), we may in principle apply any scheme developed for the unconstrained minimization of multivariate functions - for example, some globally convergent modification of the Newton method or some quasi-Newton scheme. Expanding the energy to second order by analogy with (10.1.21), we obtain... [Pg.473]
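For instance, a quasi-Newton (BFGS) minimization of a generic smooth multivariate function can be set up as below; the quadratic-plus-sine test function is a stand-in for an energy surface, not (10.7.69) itself.

```python
import numpy as np
from scipy.optimize import minimize

# Quasi-Newton (BFGS) applied to a generic smooth multivariate function,
# one of the schemes the text says could be used on the energy.
# The test function here is only an illustrative stand-in.
def energy(x):
    return (x[0] - 1)**2 + 2*(x[1] + 0.5)**2 + 0.1*np.sin(3*x[0])

res = minimize(energy, x0=np.zeros(2), method='BFGS')
print(res.x, res.fun, res.nit)
```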

The condition number of the Hessian matrix of the objective function is an important measure of difficulty in unconstrained optimization. By definition, the smallest a condition number can be is 1.0. A condition number of 10⁵ is moderately large, 10⁹ is large, and 10¹⁴ is extremely large. Recall that, if Newton's method is used to minimize a function f, the Newton search direction s is found by solving the linear equations... [Pg.287]
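The toy sketch below builds diagonal Hessians with the condition numbers quoted above and solves H s = −∇f for the Newton direction, making the measure concrete; the gradient vector is an arbitrary example.

```python
import numpy as np

# The Newton search direction s solves H s = -grad f; the conditioning of H
# governs how reliably s can be computed. Diagonal toy Hessians with
# prescribed condition numbers illustrate the measure.
g = np.array([1.0, 1.0])                    # an arbitrary example gradient
for kappa in (1e0, 1e5, 1e9, 1e14):
    H = np.diag([1.0, 1.0 / kappa])         # eigenvalues 1 and 1/kappa
    s = np.linalg.solve(H, -g)              # Newton direction
    print(f"cond(H) = {np.linalg.cond(H):.1e}   Newton step: {s}")
```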

Figure 3.13 Constrained minimization: the minimum of a function f(x) subject to the constraint g(x) = 0 occurs at M on the constraint subspace, here on the curve g(x) = 0, where ∇f(x) + λ∇g(x) = 0. P is the unconstrained minimum of f(x). This principle is the basis for the method of Lagrange multipliers.
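A worked instance of the stationarity condition ∇f(x) + λ∇g(x) = 0: minimizing f(x, y) = x² + y² subject to g(x, y) = x + y − 1 = 0 gives three equations in (x, y, λ), solved numerically below. The example problem is ours, chosen for clarity.

```python
import numpy as np
from scipy.optimize import fsolve

# Lagrange-multiplier conditions for: minimize f(x,y) = x^2 + y^2
# subject to g(x,y) = x + y - 1 = 0. Stationarity grad f + lambda*grad g = 0
# plus the constraint gives three equations in (x, y, lambda).
def kkt(v):
    x, y, lam = v
    return [2*x + lam,        # df/dx + lambda * dg/dx = 0
            2*y + lam,        # df/dy + lambda * dg/dy = 0
            x + y - 1.0]      # the constraint g = 0

print(fsolve(kkt, [0.0, 0.0, 0.0]))   # -> [0.5, 0.5, -1.0]
```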
The scheme we employ uses a Cartesian laboratory system of coordinates, which avoids the spurious small kinetic and Coriolis energy terms that arise when center-of-mass coordinates are used. However, the overall translational and rotational degrees of freedom are still present. The unconstrained coupled dynamics of all participating electrons and atomic nuclei is considered explicitly. The particles move under the influence of the instantaneous forces derived from the Coulombic potentials of the system Hamiltonian and the time-dependent system wave function. The time-dependent variational principle is used to derive the dynamical equations for a given form of time-dependent system wave function. The choice of wave function ansatz and of sets of atomic basis functions are the limiting approximations of the method. Wave function parameters, such as molecular orbital coefficients, z_i(t), average nuclear positions and momenta, R_k(t) and P_k(t), etc., carry the time dependence and serve as the dynamical variables of the method. Therefore, the parameterization of the system wave function is important, and we have found that wave functions expressed as generalized coherent states are particularly useful. A minimal implementation of the method [16,17] employs a wave function of the form ... [Pg.49]

Rather than minimize the energy function E(P) = ⟨P, H⟩ by varying over the set of k-matrices, there is a dual formulation in which the bottom eigenvalue λ₀(H + S) of the matrix H + S is maximized over the set of Pauli matrices S. The dual formulation can be derived using Lagrange's method, which requires converting the constrained energy problem to an unconstrained one. If... [Pg.72]
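A toy numerical version of this dual idea is sketched below: the bottom eigenvalue λ₀(H + S) is maximized over a small parametrized family of symmetric matrices S. The two generators S1, S2 and the derivative-free optimizer are assumptions made for illustration; the actual formulation varies S over the Pauli matrices described in the text.

```python
import numpy as np
from scipy.optimize import minimize

# Toy version of the dual idea: maximize lambda_0(H + S) over a small
# parametrized family S(c) = c1*S1 + c2*S2 of symmetric matrices.
# The generators are random stand-ins, not the Pauli matrices of RDM theory.
rng = np.random.default_rng(1)
H = rng.standard_normal((6, 6));  H = (H + H.T) / 2
S1 = rng.standard_normal((6, 6)); S1 = (S1 + S1.T) / 2
S2 = rng.standard_normal((6, 6)); S2 = (S2 + S2.T) / 2

def neg_bottom_eig(c):
    # eigvalsh returns eigenvalues in ascending order; [0] is lambda_0
    return -np.linalg.eigvalsh(H + c[0]*S1 + c[1]*S2)[0]

# lambda_0 is concave in c, so a derivative-free local search suffices here
res = minimize(neg_bottom_eig, x0=np.zeros(2), method='Nelder-Mead')
print("max of lambda_0(H + S):", -res.fun, "at c =", res.x)
```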

Another technique for handling the mass balances was introduced by Castillo and Grossmann (1). Rather than convert the objective function into an unconstrained form, they implemented the Variable Metric Projection method of Sargent and Murtagh (32) to minimize the Gibbs free energy. This is a quasi-Newton method which uses a rank-one update to the approximation of H⁻¹, with the search direction "projected" onto the intersection of hyperplanes defined by the linear mass balances. [Pg.129]
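The two ingredients named here, a symmetric rank-one (SR1) update of the inverse-Hessian approximation and projection of the search direction onto the null space of the linear mass balances, can be sketched as below. This illustrates the ingredients only, not Sargent and Murtagh's actual algorithm; the constraint matrix and test gradient are assumptions.

```python
import numpy as np

def sr1_projected_step(Hinv, x, g, g_old, x_old, P):
    """One quasi-Newton step: symmetric rank-one (SR1) update of the
    inverse-Hessian approximation Hinv, with the search direction projected
    by P onto the null space of the linear constraints. Sketch only."""
    s = x - x_old
    y = g - g_old
    u = s - Hinv @ y
    denom = u @ y
    # standard SR1 safeguard against a vanishing denominator
    if abs(denom) > 1e-12 * np.linalg.norm(u) * np.linalg.norm(y):
        Hinv = Hinv + np.outer(u, u) / denom   # SR1 rank-one correction
    d = -P @ (Hinv @ g)                        # project onto feasible subspace
    return Hinv, d

# Projector onto the null space of a constraint matrix A (rows = mass balances)
A = np.array([[1.0, 1.0, 1.0]])
P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)

# One illustrative step on a quadratic with Hessian diag(2, 4, 6)
g_fun = lambda x: np.array([2.0, 4.0, 6.0]) * x
x_old = np.array([1.0, 1.0, 1.0]); x = np.array([0.9, 0.8, 0.7])
Hinv1, d = sr1_projected_step(np.eye(3), x, g_fun(x), g_fun(x_old), x_old, P)
print("projected search direction:", d, " A @ d =", A @ d)   # A @ d ~ 0
```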

