Big Chemical Encyclopedia

Unconstrained optimization examples

A brief overview of this relatively vast subject is presented, and several of these methods are briefly discussed in the following sections. Over the years, many comparisons of the performance of these methods have been carried out and reported in the literature. For example, Box (1966) evaluated eight unconstrained optimization methods on a set of problems with up to twenty variables. [Pg.67]

Within the realm of physical reality, and most important in pharmaceutical systems, the unconstrained optimization problem is almost nonexistent. There are always restrictions that the formulator wishes to place or must place on a system, and in pharmaceuticals, many of these restrictions are in competition. For example, it is unreasonable to assume, as just described, that the hardest tablet possible would also have the lowest compression and ejection forces and the fastest disintegration time and dissolution profile. It is sometimes necessary to trade off properties, that is, to sacrifice one characteristic for another. Thus, the primary objective may not be to optimize absolutely (i.e., to find a maximum or minimum), but to realize an overall preselected or desired result for each characteristic or parameter. Drug products are often developed by reaching an effective compromise between competing characteristics to achieve the best formulation and process within a given set of restrictions. [Pg.608]

In problems in which there are n variables and m equality constraints, we could attempt to eliminate m variables by direct substitution. If all equality constraints can be removed, and there are no inequality constraints, the objective function can then be differentiated with respect to each of the remaining (n — m) variables and the derivatives set equal to zero. Alternatively, a computer code for unconstrained optimization can be employed to obtain x. If the objective function is convex (as in the preceding example) and the constraints form a convex region, then any stationary point is a global minimum. Unfortunately, very few problems in practice assume this simple form or even permit the elimination of all equality constraints. [Pg.266]
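The substitution idea can be sketched numerically; in this minimal example (the function, constraint, and SciPy call are assumptions for illustration, not taken from the excerpt's own example), n = 2 variables and m = 1 equality constraint leave an unconstrained problem in n − m = 1 variable:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical problem: minimize f(x, y) = x**2 + 2*y**2
# subject to the equality constraint x + y = 1  (n = 2, m = 1).
# Substituting y = 1 - x eliminates the constraint, leaving an
# unconstrained objective in the single remaining variable x.

def f_reduced(x):
    y = 1.0 - x                     # eliminate y via the constraint
    return x**2 + 2.0 * y**2

res = minimize_scalar(f_reduced)    # unconstrained 1-D minimization
x_opt = res.x
y_opt = 1.0 - x_opt
```

Because the objective is convex, setting d/dx [x² + 2(1 − x)²] = 6x − 4 = 0 gives the global minimum x = 2/3, y = 1/3, which the unconstrained search recovers.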

The use of quantum-chemistry computer codes for the determination of the equilibrium geometries of molecules is now almost routine, owing to the availability of analytical gradients at SCF, MC-SCF and CI levels of theory and to the robust methods available from the field of numerical analysis for the unconstrained optimization of multi-variable functions (see, for example, Ref. 21). In general, one assumes a quadratic Taylor series expansion of the energy about the current position... [Pg.161]

This is a typical minimization problem for a function of n variables that can be solved using the Mathcad built-in function Minimize, which implements gradient search algorithms to find a local minimum. The SSq function in this case is called the target function, and the unknown kinetic constants are the optimization parameters. When there are no additional limitations on the values of the optimization parameters or the sought function, we have a case of so-called unconstrained optimization. Likewise, if the unknown parameters or the target function itself are mathematically constrained by some equalities or inequalities, then one deals with constrained optimization. Such additional constraints are usually set on the basis of the physical nature of the problem (e.g. rate constants must be positive, the ratio of the forward reaction rate to that of the reverse one must equal the equilibrium constant, etc.). Constraints are sometimes also added in order to speed up the computations (for example, the value of the target function at the minimum found should not exceed some number TOL). [Pg.133]
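The same kind of unconstrained SSq minimization can be sketched outside Mathcad; in the fragment below, the first-order kinetic model, the synthetic data, and the SciPy gradient-based call are all assumptions chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic concentration data generated from C(t) = C0 * exp(-k t)
# with C0 = 2.0 and a "true" rate constant k = 0.5 (assumed values).
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
C_obs = 2.0 * np.exp(-0.5 * t)

def ssq(params):
    """Target function: sum of squared residuals over the data points."""
    k = params[0]                        # the optimization parameter
    C_calc = 2.0 * np.exp(-k * t)
    return np.sum((C_obs - C_calc)**2)

# Unconstrained minimization with a gradient-based (BFGS) search
res = minimize(ssq, x0=[0.1], method="BFGS")
k_fit = res.x[0]
```

If the physics demands a constraint such as k ≥ 0, the same call can be switched to a bounded method (e.g. `method="L-BFGS-B"` with `bounds=[(0, None)]`), turning this into the constrained case described above.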

The globally optimal laser field for this example is presented in Fig. 2. The field is relatively simple with structure at early times, followed by a large peak with a nearly Gaussian profile. Note that the control formalism enforces no specific structure on the field a priori. That is, the form of the field is totally unconstrained during the allotted time interval, so simple solutions are not guaranteed. Also shown in Fig. 2 is the locally optimal... [Pg.254]

Although the examples thus far have involved linear constraints, the chief nonlinearity of an optimization problem often appears in the constraints. The feasible region then has curved boundaries. A problem with nonlinear constraints may have local optima, even if the objective function has only one unconstrained optimum. Consider a problem with a quadratic objective function and the feasible region shown in Figure 4.8. The problem has local optima at the two points a and b because no point of the feasible region in the immediate vicinity of either point yields a smaller value of f. [Pg.120]

In nonlinear programming problems, optimal solutions need not occur at vertices and can occur at points with positive degrees of freedom. It is possible to have no active constraints at a solution, for example in unconstrained problems. We consider nonlinear problems with constraints in Chapter 8. [Pg.229]

Clearly f will go to zero when E2 = E1, independently of the magnitude of x1. Note, however, that the gradient will also go to zero if E1 is different from E2 but the two surfaces are parallel (i.e., x1, the gradient difference vector, has zero length). In this case the method would fail. This situation will occur for a Renner-Teller-like degeneracy, for example. Of course, in this case, the geometry can be found by normal unconstrained geometry optimization. [Pg.112]

There are two types of unconstrained multivariable optimization techniques: those requiring function derivatives and those that do not. An example of a technique that does not require function derivatives is the sequential simplex search. This technique is well suited to systems where no mathematical model currently exists because it uses process data directly. [Pg.136]
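A minimal sketch of a derivative-free simplex search, assuming SciPy's Nelder-Mead implementation as a stand-in and an arbitrary two-variable test function; note that only function evaluations are used, which is why such methods can run directly on process data:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed two-variable test function with minimum at (1.0, -0.5).
def f(x):
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2

# Nelder-Mead sequential simplex: no gradients are ever requested,
# only function values at the simplex vertices.
res = minimize(f, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
x_min = res.x
```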

Although the open-loop optimal feedback law does not result in unwieldy computational requirements, the closed-loop optimal feedback law is considerably more complicated. For several practical problems the open-loop optimal feedback law produces results that are close to those produced by the closed-loop optimal feedback law. However, there are cases for which the open-loop optimal feedback law may be far inferior to the closed-loop optimal feedback law. Rawlings et al. (1994) present a related example on a generic staged system. Lee and Yu (1997) show that open-loop optimal feedback is, in general, inferior to closed-loop optimal feedback for nonlinear processes and linear processes with uncertain coefficients. They also develop a number of explicit closed-loop optimal feedback laws for a number of unconstrained MPC cases. [Pg.141]

Synthetic rebalancing cannot always be done, however. This is likely to be the case for illiquid assets and those with legal covenants limiting transfer. For example, an investor may own a partnership or hold a concentrated stock position in a trust whose position cannot be swapped away. In these situations, MV optimization must be amended to include these assets with their weights restricted to the prescribed levels. The returns, risk, and correlation forecasts for the restricted assets must then be incorporated explicitly in the analysis to take account of their interaction with other assets. The resulting constrained optimum portfolios will comprise a second-best efficient frontier but may not be too far off the unconstrained version. [Pg.763]

Some well-known stochastic methods for solving SOO problems are simulated annealing (SA), GAs, DE and particle swarm optimization (PSO). These were initially proposed and developed for optimization problems with bounds only [that is, unconstrained problems without Equations (4.7) and (4.8)]. Subsequently, they were extended to constrained problems by incorporating a strategy for handling constraints. One relatively simple and popular strategy is the penalty function, which involves modifying the objective function (Equation 4.5) by the addition (in the case of minimization) of a term which depends on constraint violation. For example, see Equation (4.9)... [Pg.109]
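A minimal sketch of the penalty-function strategy, assuming an arbitrary one-variable objective and inequality constraint (a local simplex search stands in here for the stochastic methods named above; the mechanics of the penalty term are the same either way):

```python
import numpy as np
from scipy.optimize import minimize

# Assumed problem:  minimize f(x) = (x - 2)^2  subject to  g(x) = x - 1 <= 0.
# The unconstrained minimum (x = 2) violates the constraint, so the
# constrained optimum lies on the boundary x = 1.

def f(x):
    return (x[0] - 2.0)**2

def g(x):
    return x[0] - 1.0               # feasible when g(x) <= 0

def penalized(x, w=1e4):
    """Modified objective: add a term proportional to constraint violation."""
    violation = max(0.0, g(x))      # zero whenever the constraint holds
    return f(x) + w * violation**2  # quadratic exterior penalty

res = minimize(penalized, x0=[0.0], method="Nelder-Mead")
x_pen = res.x[0]
```

With the (assumed) penalty weight w = 1e4 the minimizer of the modified objective sits just above x = 1; driving w larger pushes it toward the true constrained optimum on the boundary.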

These goals may conflict. For example, a rapidly convergent method for a large unconstrained nonlinear problem may require too much computer memory. On the other hand, a robust method may also be the slowest. Tradeoffs between convergence rate and storage requirements, and between robustness and speed, and so on, are central issues in numerical optimization. [Pg.431]

Note now an important aspect of this analysis: the mixing of the critical CWs in this and the previous example is accomplished via monoelectronic CT interaction matrix elements, while previously we have stated that CWs within a subsystem interact in a monoelectronic sense while different subsystems can interact only in a bielectronic sense. Thus, it appears that there is a contradiction. The reason why "normal" and sigma-pi hybridization are really identical problems at the level of unconstrained VB theory, while they appear to be different at the SD MO or MOVB-IBM level, lies in the fact that the latter two theories involve constrained CW products. By contrast, pure VB theory places no constraints on the CWs. Hence, all problems are conceptually identical at the level of VB (or MOVB) theory, as they all revolve about the optimal interaction of a set of CWs, in the absence of constraints, to ultimately generate what we subsequently call subsystems. [Pg.493]

A new method for selecting controlled variables (c) as linear combinations of measurements (y) is proposed based on the idea of self-optimizing control. The objective is to find controlled variables such that a constant setpoint policy leads to near-optimal operation in the presence of low-frequency disturbances (d). We propose to combine as many measurements as there are unconstrained degrees of freedom (inputs, u) and major disturbances, such that c_opt(d) = 0. To illustrate the ideas, a gas-lift allocation example is included. The example shows that the method proposed here gives controlled variables with good self-optimizing properties. [Pg.353]

This leaves one unconstrained degree of freedom (which we may select, for example, as u = m1; which variable we select to represent u is not important, as any of the three variables m1, m2 or m3 will do). We now want to evaluate the loss imposed by keeping alternative controlled variables c constant at their nominal optimal values, c_opt(d*). The available measurements are a subset of Uo, namely... [Pg.497]
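The measurement-combination idea behind these two excerpts can be sketched numerically: choosing H in the left null space of the optimal measurement sensitivity F = ∂y_opt/∂d makes c = Hy insensitive to the modeled disturbances, so c_opt(d) stays at zero. The 3-measurement, 1-disturbance sensitivity matrix below is an assumption for illustration, not taken from the gas-lift example:

```python
import numpy as np

# Assumed optimal sensitivity of 3 measurements to 1 disturbance:
# F[i] = d(y_opt,i)/d(d).
F = np.array([[1.0],
              [2.0],
              [-1.0]])

# Rows of H must span the left null space of F, i.e. satisfy H @ F = 0.
# The trailing right-singular vectors of F.T provide an orthonormal basis.
u, s, vt = np.linalg.svd(F.T)
rank = np.sum(s > 1e-12)
H = vt[rank:, :]                 # (n_y - rank) candidate combinations c = H y

# Any disturbance-induced shift in y_opt is then invisible in c:
delta_y_opt = F @ np.array([0.3])     # optimal response to some disturbance
delta_c_opt = H @ delta_y_opt         # ~ 0 by construction
```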

Figure 1 diagrams an unconstrained one-dimensional example of the approach. The mathematical proof that the αBB global optimization algorithm... [Pg.278]

For the optimization of the energy (10.7.69), we may in principle apply any scheme developed for the unconstrained minimization of multivariate functions - for example, some globally convergent modification of the Newton method or some quasi-Newton scheme. Expanding the energy to second order by analogy with (10.1.21), we obtain... [Pg.473]
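The second-order expansion and the resulting Newton step can be sketched as follows; the quadratic test energy and its derivatives are assumptions, chosen so that a single step Δx = −H⁻¹g reaches the minimum exactly:

```python
import numpy as np

# Assumed quadratic "energy" E(x) = 0.5*x1^2 + 2*x2^2 + x1*x2,
# i.e. E = 0.5 * x^T H x with a constant positive-definite Hessian.
def energy(x):
    return 0.5 * x[0]**2 + 2.0 * x[1]**2 + x[0] * x[1]

def gradient(x):
    return np.array([x[0] + x[1],
                     x[0] + 4.0 * x[1]])

def hessian(x):
    return np.array([[1.0, 1.0],
                     [1.0, 4.0]])

# One Newton step: stationary point of the local quadratic model.
x = np.array([3.0, -2.0])
step = -np.linalg.solve(hessian(x), gradient(x))
x_new = x + step
```

Because the model here is exact (the energy really is quadratic), the single step lands on the minimizer; for a real energy surface the step is iterated, and quasi-Newton schemes replace the exact Hessian with an updated approximation.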
