Local optimization

The basic self-consistent field (SCF) procedure, i.e., repeated diagonalization of the Fock matrix [26], can be viewed, if sufficiently converged, as local optimization with a fixed, approximate Hessian, i.e., as simple relaxation. To show this, let us consider the closed-shell case and restrict ourselves to real orbitals. The SCF orbital coefficients are not the... [Pg.2339]
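To make the idea concrete, here is a minimal numerical sketch of simple relaxation, i.e., Newton-like steps taken with a fixed approximate Hessian in place of the true one. The quadratic test function and the choice of the diagonal as the fixed Hessian are assumptions for illustration only; this is not the Fock-matrix iteration itself.

```python
import numpy as np

# Simple relaxation: x_{k+1} = x_k - B0^{-1} grad f(x_k) with a FIXED
# approximate Hessian B0 (here: the diagonal of the true Hessian A).
# Illustrative quadratic f(x) = 0.5 x^T A x - b^T x, not the SCF equations.

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])           # true Hessian of the test function
b = np.array([1.0, 2.0])
B0 = np.diag(np.diag(A))             # fixed, approximate Hessian

def grad_f(x):
    return A @ x - b

x = np.zeros(2)
for k in range(100):
    x = x - np.linalg.solve(B0, grad_f(x))
    if np.linalg.norm(grad_f(x)) < 1e-12:   # converged to a stationary point
        break

print("relaxation solution:", x)
print("exact minimizer:   ", np.linalg.solve(A, b))
```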

We tested our new potential by applying a local optimization procedure to the potential of some proteins, starting with the native structure as given in the Brookhaven Protein Data Bank, and observing how far the coordinates moved through local optimization. For a good potential, one expects the optimizer to be close to the native structure. As in Ulrich et al. [34], we measure the distance between optimizer B and native structure A by the distance matrix error... [Pg.221]
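As a concrete illustration, the sketch below computes one common form of distance matrix error: the RMS difference over all inter-atomic distances. The exact normalization used by Ulrich et al. [34] is not shown in the excerpt, so the formula here should be read as an assumption.

```python
import numpy as np

def distance_matrix_error(a, b):
    """RMS difference of all inter-atomic distances between conformations a, b.

    Assumed definition: DME = sqrt( 2/(N(N-1)) * sum_{i<j} (d_ij^A - d_ij^B)^2 );
    the normalization in the cited work may differ.
    """
    da = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
    db = np.linalg.norm(b[:, None, :] - b[None, :, :], axis=-1)
    iu = np.triu_indices(len(a), k=1)        # each atom pair counted once
    return np.sqrt(np.mean((da[iu] - db[iu]) ** 2))

# Toy usage: native structure A vs. locally optimized structure B.
rng = np.random.default_rng(0)
native = rng.normal(size=(10, 3))
optimized = native + 0.05 * rng.normal(size=(10, 3))
print(distance_matrix_error(native, optimized))  # small value = structures agree
```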

The globally optimal laser field for this example is presented in Fig. 2. The field is relatively simple with structure at early times, followed by a large peak with a nearly Gaussian profile. Note that the control formalism enforces no specific structure on the field a priori. That is, the form of the field is totally unconstrained during the allotted time interval, so simple solutions are not guaranteed. Also shown in Fig. 2 is the locally optimal... [Pg.254]

Figure 2. Optimal laser fields for the control scenario in Fig. 1. The solid line is the globally optimal laser field. The dashed line is the locally optimal Gaussian field.
The back-propagation strategy is a steepest-gradient method, i.e., a local optimization technique. It therefore suffers from the major drawback of such methods, namely that it can become trapped in a local optimum. Many variants have been developed to overcome this drawback [20-24], but none of them really solves the problem. [Pg.677]
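A minimal sketch of the trapping behavior, together with the simplest (and still imperfect) workaround of restarting from several random points. The one-dimensional test function and step size are illustrative assumptions, not any of the back-propagation variants cited in [20-24].

```python
import numpy as np

# f has a shallow local minimum near x = -0.77 and the global minimum near x = 1.35.
def f(x):
    return x**4 - x**3 - 2.0 * x**2 + 0.5 * x

def df(x):
    return 4.0 * x**3 - 3.0 * x**2 - 4.0 * x + 0.5

def steepest_descent(x, lr=0.01, iters=2000):
    for _ in range(iters):
        x = x - lr * df(x)
    return x

print(steepest_descent(-2.0))   # trapped in the local minimum near -0.77
print(steepest_descent(0.5))    # reaches the global minimum near 1.35

# Multi-start: restart from random points and keep the best local optimum.
starts = np.random.default_rng(1).uniform(-3.0, 3.0, size=20)
best = min((steepest_descent(s) for s in starts), key=f)
print(best, f(best))
```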

The description given by Zimmer seems to indicate a grid search based on local optimization rather than dynamic programming. [Pg.172]

The optimal number of time points used in this example is 8. The objective function value of the MILP from the first step was 769.3 t, which is not the same as that of the exact model. This means that the solution found is only locally optimal. [Pg.141]

We start with continuous variable optimization and consider in the next section the solution of NLP problems with differentiable objective and constraint functions. If only local solutions are required for the NLP problem, then very efficient large-scale methods can be used. This is followed by methods that are not based on local optimality criteria: we consider direct search optimization methods that do not require derivatives, as well as deterministic global optimization methods. Following this, we consider the solution of mixed integer problems and outline the main characteristics of algorithms for their solution. Finally, we conclude with a discussion of optimization modeling software and its implementation on engineering models. [Pg.60]

These necessary conditions for local optimality can be strengthened to sufficient conditions by making the inequality in (3-87) strict (i.e., positive curvature in all directions). Equivalently, the sufficient (necessary) curvature conditions can be stated as follows: the Hessian ∇²f(x*) has all positive (nonnegative) eigenvalues and is therefore a positive definite (positive semidefinite) matrix. [Pg.61]
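A small numerical check of these curvature conditions, assuming the gradient at x* is already zero. The eigenvalue test is the standard one; the example Hessians are illustrative.

```python
import numpy as np

def classify_stationary_point(hessian, tol=1e-10):
    """Second-order test at a stationary point x* (gradient assumed zero).

    All eigenvalues > 0  -> positive definite     -> strict local minimum (sufficient).
    All eigenvalues >= 0 -> positive semidefinite -> necessary condition only.
    """
    eig = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if np.all(eig > tol):
        return "positive definite: strict local minimum"
    if np.all(eig > -tol):
        return "positive semidefinite: necessary condition holds"
    return "indefinite or negative curvature: not a local minimum"

print(classify_stationary_point([[2.0, 0.0], [0.0, 3.0]]))   # local minimum
print(classify_stationary_point([[2.0, 0.0], [0.0, 0.0]]))   # semidefinite case
print(classify_stationary_point([[2.0, 0.0], [0.0, -1.0]]))  # saddle point
```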

Convex Cases of NLP Problems Linear programs and quadratic programs are special cases of (3-85) that allow for more efficient solution, based on application of KKT conditions (3-88) through (3-91). Because these are convex problems, any locally optimal solution is a global solution. In particular, if the objective and constraint functions in (3-85) are linear, then the following linear program (LP)... [Pg.62]
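For instance, a small LP can be solved with scipy.optimize.linprog; the problem data below are made up for illustration. Because an LP is convex, the returned solution is globally, not merely locally, optimal.

```python
from scipy.optimize import linprog

# Minimize c^T x subject to A_ub x <= b_ub, x >= 0. For an LP the KKT
# conditions are necessary and sufficient, so any local solution is global.
c = [-1.0, -2.0]                      # maximize x1 + 2*x2 by minimizing the negative
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                 # global optimum: x = (3, 1), objective -5
```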

NLP methods rely on first and second derivatives. The KKT conditions require first derivatives to define stationary points, so accurate first derivatives are essential to determine locally optimal solutions for differentiable NLPs. Moreover, Newton-Raphson methods that are applied to the KKT conditions, as well as the task of checking second-order KKT conditions, necessarily require second-... [Pg.64]
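Since accurate first derivatives are essential, a routine safeguard is to compare an analytic gradient against central finite differences before handing it to an NLP solver. The objective function below is an illustrative assumption.

```python
import numpy as np

def check_gradient(f, grad, x, h=1e-6):
    """Compare an analytic gradient with central finite differences.

    Accurate first derivatives are what the KKT conditions consume; a check
    like this catches coding errors before they mislead an NLP solver.
    """
    x = np.asarray(x, dtype=float)
    fd = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        fd[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return np.max(np.abs(fd - grad(x)))

# Illustrative objective (not from the source).
f = lambda x: x[0]**2 + np.exp(x[0] * x[1])
grad = lambda x: np.array([2.0 * x[0] + x[1] * np.exp(x[0] * x[1]),
                           x[0] * np.exp(x[0] * x[1])])
print(check_gradient(f, grad, [0.7, -0.3]))   # ~1e-10: gradient is consistent
```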

This basic concept leads to a wide variety of global algorithms, with features that exploit different problem classes. Bounding strategies relate to the calculation of upper and lower bounds. For the former, any feasible point or, preferably, a locally optimal point in the subregion can be used. For the lower bound, convex relaxations of the objective and constraint functions are derived. [Pg.66]
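The sketch below is a toy one-dimensional branch-and-bound in this spirit: the incumbent (best point found so far) supplies the upper bound, and a simple Lipschitz bound stands in for the convex relaxations mentioned above. The test function and Lipschitz constant are assumptions for illustration.

```python
import heapq
import math

def f(x):
    return math.sin(3.0 * x) + 0.3 * x * x   # multimodal test function

L = 6.0   # Lipschitz constant: |f'(x)| = |3 cos(3x) + 0.6 x| <= 6 on [-5, 5]

def branch_and_bound(a, b, tol=1e-6):
    # Incumbent = best feasible point found so far (the upper bound).
    best_x, best_f = a, f(a)
    # Lower bound on [lo, hi]: f(mid) - L*(hi - lo)/2, valid for Lipschitz f.
    heap = [(f(0.5 * (a + b)) - L * (b - a) / 2, a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_f - tol:            # region cannot hold a better point: prune
            continue
        mid = 0.5 * (lo + hi)
        if f(mid) < best_f:              # update the incumbent (upper bound)
            best_x, best_f = mid, f(mid)
        for u, v in ((lo, mid), (mid, hi)):   # partition and bound the halves
            child_lb = f(0.5 * (u + v)) - L * (v - u) / 2
            if child_lb < best_f - tol:
                heapq.heappush(heap, (child_lb, u, v))
    return best_x, best_f

print(branch_and_bound(-5.0, 5.0))       # global minimum near x = -0.52
```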

By partitioning, one can try to split the problem into a number of smaller problems that may be easier to solve, and then recombine the locally optimal solutions into a global solution. [Pg.275]

However, local optimization in procurement can lead to goal conflicts with other areas in the value chain: long-term purchasing contracts with high volumes to reach minimum prices can reduce the flexibility in... [Pg.44]

An important extension to rigid-body fitting is the so-called directed tweak technique [105]. Directed tweak allows for an RMS fit that simultaneously considers molecular flexibility. Through the use of local coordinates for handling rotatable bonds, it is possible to formulate analytical derivatives of the objective function. With a gradient-based local optimizer, flexible RMS fits are obtained extremely quickly. However, no torsional preferences can be introduced, so directed tweak may result in energetically unfavorable conformations. [Pg.71]
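Directed tweak itself works in local torsional coordinates with analytic derivatives; the toy sketch below only illustrates the flavor of a flexible RMS fit, minimizing RMSD over a single assumed rotatable bond with a bounded scalar optimizer. The molecule, bond choice, and optimizer are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rotate_about_bond(coords, i, j, moving, theta):
    """Rotate the atoms in `moving` about the i-j bond axis by theta (Rodrigues)."""
    k = coords[j] - coords[i]
    k = k / np.linalg.norm(k)
    v = coords[moving] - coords[j]           # positions relative to a point on the axis
    out = coords.copy()
    out[moving] = (coords[j] + v * np.cos(theta)
                   + np.cross(k, v) * np.sin(theta)
                   + k * (v @ k)[:, None] * (1.0 - np.cos(theta)))
    return out

def rmsd(a, b):
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# Toy 4-atom chain; the bond between atoms 1 and 2 is "rotatable", atom 3 moves.
ref = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                [2.3, 1.2, 0.0], [3.8, 1.2, 0.9]])
probe = rotate_about_bond(ref, 1, 2, [3], 1.0)   # start from a twisted conformer

res = minimize_scalar(
    lambda t: rmsd(rotate_about_bond(probe, 1, 2, [3], t), ref),
    bounds=(-np.pi, np.pi), method="bounded")
print(res.x, res.fun)   # recovers theta ~ -1.0 rad with RMSD near zero
```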

Dealing with ZᵀBZ directly has several advantages if n - m is small. Here the matrix is dense, and the sufficient conditions for local optimality require that ZᵀBZ be positive definite. Hence, the quasi-Newton update formula can be applied directly to this matrix. Several variations of this basic algorithm... [Pg.204]
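A sketch of the standard BFGS quasi-Newton update applied directly to a small dense matrix, as one would for the reduced Hessian ZᵀBZ when n - m is small. The step and gradient-change vectors below are illustrative data, not output of any particular solver.

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of a dense Hessian approximation H.

    s = x_{k+1} - x_k (step), y = grad_{k+1} - grad_k (gradient change).
    Keeps H symmetric positive definite as long as s^T y > 0, matching the
    sufficient local-optimality requirement that the reduced Hessian be
    positive definite.
    """
    sy = s @ y
    if sy <= 1e-12:                  # skip the update if curvature condition fails
        return H
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / sy

# Illustrative use on a small dense matrix (a stand-in for Z^T B Z).
H = np.eye(2)
s = np.array([0.3, -0.1])
y = np.array([0.9, 0.2])             # pretend curvature information
H = bfgs_update(H, s, y)
print(H, np.linalg.eigvalsh(H))      # eigenvalues stay positive
```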

