Unconstrained problem function

The calculations begin with given values for the independent variables u and exit with the (constrained) derivatives of the objective function with respect to them. Use the routine described above for the unconstrained problem, where a succession of quadratic fits is used to move toward the optimal point. This approach is a form of the generalized reduced gradient (GRG) approach to optimization, one of the better ways to carry out optimization numerically. [Pg.486]
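
As a rough sketch of the successive-quadratic-fit idea (illustrative only, not the GRG routine referenced above), a one-dimensional version in Python:

```python
def quad_fit_min(f, x0, x1, x2, tol=1e-8, max_iter=100):
    """Successive quadratic fits: fit a parabola through three points,
    jump to its vertex, keep the three best points, and repeat."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        num = (x1 - x0)**2 * (f1 - f2) - (x1 - x2)**2 * (f1 - f0)
        den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
        if abs(den) < 1e-14:          # three collinear points: no parabola
            break
        x_new = x1 - 0.5 * num / den  # vertex of the fitted parabola
        if abs(x_new - x1) < tol:
            return x_new
        x0, x1, x2 = sorted(sorted([x0, x1, x2, x_new], key=f)[:3])
    return x1

print(quad_fit_min(lambda x: (x - 2.0)**2 + 1.0, 0.0, 1.0, 3.0))  # ~2.0
```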

Instead of a formal development of conditions that define a local optimum, we present a more intuitive kinematic illustration. Consider the contour plot of the objective function f(x), given in Fig. 3-54, as a smooth valley in the space of the variables x1 and x2. For the contour plot of this unconstrained problem min f(x), consider a ball rolling in this valley to the lowest point of f(x), denoted by x*. This point is at least a local minimum and is defined by a point with a zero gradient and at least nonnegative curvature in all (nonzero) directions p. We use the first-derivative (gradient) vector ∇f(x) and second-derivative (Hessian) matrix ∇²f(x) to state the necessary first- and second-order conditions for unconstrained optimality. [Pg.61]
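
In the notation above, these standard conditions read:

$$\nabla f(x^*) = 0 \qquad \text{(first order: zero gradient)},$$
$$p^{T} \nabla^{2} f(x^*)\, p \ge 0 \quad \text{for all } p \ne 0 \qquad \text{(second order: nonnegative curvature)}.$$

Both are necessary; replacing the inequality by a strict one (a positive definite Hessian) together with the zero gradient gives a sufficient condition for a strict local minimum.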

Banga et al. [in State of the Art in Global Optimization, C. Floudas and P. Pardalos (eds.), Kluwer, Dordrecht, p. 563 (1996)]. All these methods require only objective function values for unconstrained minimization. Associated with these methods are numerous studies on a wide range of process problems. Moreover, many of these methods include heuristics that prevent premature termination (e.g., directional flexibility in the complex search as well as random restarts and direction generation). Fig. 3-58 illustrates the performance of a pattern search method and a random search method on an unconstrained problem. [Pg.65]
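
A minimal compass (pattern) search sketch in the spirit of these derivative-free methods; the step-halving rule and tolerance are illustrative choices, not those of any particular reference:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Pattern (compass) search: poll +/- steps along each coordinate,
    accept any improvement, otherwise halve the step. Uses only f values."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx
```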

Transformation of a constrained problem to an unconstrained equivalent problem. The contours of the unconstrained penalty function are shown for different values of r. [Pg.287]
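
Written generically (the exact form behind the figure is not reproduced here), a quadratic exterior penalty for constraints h_j(x) = 0 and g_i(x) ≤ 0 is:

$$P(x, r) = f(x) + r\left[\sum_j h_j(x)^2 + \sum_i \max\big(0,\, g_i(x)\big)^2\right],$$

whose unconstrained minimizers approach the constrained optimum as r → ∞; interior (barrier) variants instead drive r → 0.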

Find the optimum of P with respect to x1 and x2 (an unconstrained problem), noting that x1* and x2* are functions of r. [Pg.331]

This weighted sum of absolute values in e(x) was also discussed in Section 8.4 as a way of measuring constraint violations in an exact penalty function. We proceed as we did in that section, eliminating the nonsmooth absolute value function by introducing positive and negative deviation variables dp_i and dn_i and converting this nonsmooth unconstrained problem into an equivalent smooth constrained problem, which is... [Pg.384]
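
In symbols, a standard statement of this reformulation replaces each absolute value by a pair of nonnegative deviation variables:

$$\min_x \sum_i w_i |e_i(x)| \;\;\Longleftrightarrow\;\; \min_{x,\,dp,\,dn} \sum_i w_i (dp_i + dn_i) \quad \text{s.t. } e_i(x) = dp_i - dn_i, \;\; dp_i,\, dn_i \ge 0,$$

since at the optimum at most one of dp_i, dn_i is nonzero, so dp_i + dn_i = |e_i(x)|.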

Because software to find local solutions of NLP problems has become so efficient and widely available, multistart methods, which attempt to find a global optimum by starting the search from many starting points, have also become more effective. As discussed briefly in Section 8.10, using different starting points is a common and easy way to explore the possibility of local optima. This section considers multistart methods for unconstrained problems without discrete variables that use randomly chosen starting points, as described in Rinnooy Kan and Timmer (1987, 1989) and more recently in Locatelli and Schoen (1999). We consider only unconstrained problems, but constraints can be incorporated by including them in a penalty function (see Section 8.4). [Pg.388]
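
A minimal multistart sketch using SciPy's local solver; the sampling box and number of starts are arbitrary illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=20, seed=0):
    """Run a local solver from random starting points and keep the best
    local solution. bounds (one (low, high) pair per variable) are used
    only to sample starts; the local searches themselves are unconstrained."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
    return best
```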

Step 2 Formulation of the unconstrained problem. Applying previous results, the (y — x) vector of the objective function is modified as follows ... [Pg.98]

We have considered the following energy-optimal control problem. The system (35) with unconstrained control function u(t) is to be steered from a CA... [Pg.502]

A key idea in developing necessary and sufficient optimality conditions for nonlinear constrained optimization problems is to transform them into unconstrained problems and apply the optimality conditions discussed in Section 3.1 for the determination of the stationary points of the unconstrained function. One such transformation involves the introduction of an auxiliary function, called the Lagrange function L(x, λ, μ), defined as... [Pg.51]

The transformed unconstrained problem then becomes to find the stationary points of the Lagrange function... [Pg.51]
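
Under one standard sign convention, with equality constraints h(x) = 0 and inequality constraints g(x) ≤ 0, the Lagrange function and its stationarity (KKT) conditions are:

$$L(x, \lambda, \mu) = f(x) + \lambda^{T} h(x) + \mu^{T} g(x),$$
$$\nabla_x L = 0, \quad h(x) = 0, \quad g(x) \le 0, \quad \mu \ge 0, \quad \mu_i\, g_i(x) = 0.$$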

The basic idea is to approximate this problem with an unconstrained problem by adding a penalty function to the objective function that prescribes a high cost for violation of the constraint set S. This new unconstrained auxiliary problem is of the form... [Pg.2560]

Some well-known stochastic methods for solving SOO problems are simulated annealing (SA), GAs, DE, and particle swarm optimization (PSO). These were initially proposed and developed for optimization problems with bounds only [that is, unconstrained problems without Equations (4.7) and (4.8)]. Subsequently, they were extended to constrained problems by incorporating a strategy for handling constraints. One relatively simple and popular strategy is the penalty function, which involves modifying the objective function (Equation 4.5) by the addition (in the case of minimization) of a term which depends on constraint violation. For example, see Equation (4.9)... [Pg.109]
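
A minimal sketch of this penalty strategy paired with a stochastic, bounds-only solver (here SciPy's differential evolution); the objective, constraint, and penalty weight are illustrative assumptions:

```python
from scipy.optimize import differential_evolution

def penalized(x, weight=1e3):
    """Objective plus a penalty on constraint violation, so that a
    bounds-only stochastic solver can treat the constrained problem."""
    f = (x[0] - 1.0)**2 + (x[1] - 2.0)**2   # illustrative objective
    g = x[0] + x[1] - 2.0                   # illustrative constraint g(x) <= 0
    return f + weight * max(0.0, g)**2      # add cost when g(x) > 0

result = differential_evolution(penalized, bounds=[(-5, 5), (-5, 5)], seed=1)
print(result.x, result.fun)
```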

When one optimal solution exists in the feasible region, the objective function is unimodal. When two optimal solutions exist, it is bimodal; if more than two, it is multimodal. LP problems are unimodal unless the constraints are inconsistent, such that no feasible region exists. The solutions in Figures 18.1 to 18.3 are unimodal. A two-dimensional, multimodal case is shown in Figure 18.4, taken from Reklaitis et al. (1983) and called the Himmelblau problem. This is an unconstrained problem with the objective function... [Pg.621]
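
The objective is truncated above; the standard form of Himmelblau's function is

$$f(x_1, x_2) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2,$$

which has four distinct local minima, all with f = 0, making it a classic multimodal test problem.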

Verify that by minimizing the unconstrained penalty function, the minimum of Powell's problem (12.20) and (12.21) is also achieved. [Pg.428]

It is known that the gradient direction in an unconstrained problem is the one along which the objective function increases fastest. When constraints are present, it is necessary to account for them and to ensure that the search direction remains feasible. The first method proposed by Rosen with this methodology used the gradient of the objective function as the direction for the projection onto the active constraints (Bazaraa et al., 2006). [Pg.459]
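
In a common statement of this projection step (not necessarily Rosen's exact formulation), with N the Jacobian of the active constraints, the search direction is the negative gradient projected onto the tangent space of those constraints:

$$P = I - N^{T}\big(N N^{T}\big)^{-1} N, \qquad d = -P\, \nabla f(x),$$

so that a move along d preserves, to first order, feasibility of the active constraints.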

However, for nonlinear problems (e.g., when g is a nonlinear function), the optimal solution may be unconstrained in the remaining variables, and such problems are the focus of this paper. The reason is that it is for the remaining unconstrained degrees of freedom (which we henceforth call u) that the selection of controlled variables is an issue. For simplicity, let us write the remaining unconstrained problem in reduced space in the form... [Pg.488]

Both experimental and theoretical methods exist for the prediction of protein structures. In both cases, additional restraints on the molecular system can be derived and used to formulate a nonconvex optimization problem. Here, the traditional unconstrained problem was recast as a constrained global optimization problem and was applied to protein structure prediction using NMR data. Both the formulation and solution approach of this method differ from traditional techniques, which generally rely on the optimization of penalty-type target function using SA/MD protocols. [Pg.359]

In constrained optimization, the task of finding the optimum point is divided into two parts. The constrained optimization problem is converted to an equivalent unconstrained problem, followed by an unconstrained search. The methods of converting constrained problems to unconstrained ones are beyond the scope of this writing, but a description of them can be found in Pierre and Lowe (1975). The objective of an unconstrained search is to find the parameter values that minimize (or maximize) the objective function. In the case of the relaxation modulus, the objective is to find the relaxation times and relaxation strengths that minimize the error between the approximating function and the experimental data. [Pg.129]
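
As a sketch of such an unconstrained fit (the two-mode Prony form, the synthetic data, and the starting guesses are illustrative assumptions, not the authors' procedure):

```python
import numpy as np
from scipy.optimize import least_squares

def prony(params, t):
    """Two-mode Prony series G(t) = sum_i g_i * exp(-t / tau_i); parameters
    enter through exp() so positivity needs no explicit constraints."""
    lg1, lg2, lt1, lt2 = params
    return (np.exp(lg1) * np.exp(-t / np.exp(lt1))
            + np.exp(lg2) * np.exp(-t / np.exp(lt2)))

t = np.logspace(-2, 2, 50)                      # synthetic "experimental" data
G_data = 3.0 * np.exp(-t / 0.1) + 1.0 * np.exp(-t / 10.0)

# Unconstrained least-squares search for strengths and relaxation times;
# distinct starting log-times break the symmetry between the two modes.
res = least_squares(lambda p: prony(p, t) - G_data, x0=[0.0, 0.0, -1.0, 1.0])
print(np.exp(res.x))                            # recovered [g1, g2, tau1, tau2]
```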

The idea behind this method is to employ the dual problem (A.8) to find an approximate solution of the primal problem (A.1). The advantage of the dual is that the definition of the dual function is an unconstrained problem and, at the same time, the constraint set of the dual problem is much simpler, in particular linear (μ ≥ 0). The dual function φ (program (A.7)) is found for given values of the vectors λ and μ using an unconstrained optimization method. Note that these two vectors can be updated using a line search method at each iteration, bearing in mind that... [Pg.263]
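
In the notation above, the dual function is defined by an unconstrained minimization of the Lagrangian, and the dual problem carries only the simple bound μ ≥ 0:

$$\phi(\lambda, \mu) = \min_x\, L(x, \lambda, \mu), \qquad \max_{\lambda,\; \mu \ge 0}\, \phi(\lambda, \mu).$$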

