
Optimization unconstrained

Examples of this chapter can be found in the Vol3 Chapter3 directory in the WileyVol3.zip file available at www.chem.polimi.it/homes/gbuzzi. [Pg.79]

This chapter deals with the problem of finding the minimum of a function F(x) involving ny variables, x ∈ R^ny, with ny ≥ 1. For maximization problems, all that needs to be done is to invert the sign of the objective function. [Pg.79]

Only unconstrained optimization is considered in this chapter. [Pg.79]

we describe the main iterative methods used to solve multidimensional optimization problems. By the term iteration we mean that, given a point, x_i, where i is the progressive iteration number, the next point, x_{i+1}, is obtained as x_{i+1} = x_i + u_i p_i. [Pg.79]

Therefore, an iterative optimization method requires that both the direction, p_i, and the value of the step, u_i, along p_i be selected. [Pg.79]
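A minimal sketch (not from the book) of this direction-plus-step framework, taking steepest descent for p_i and a simple backtracking rule for u_i:

    import numpy as np

    def iterate(f, grad, x0, iters=100):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            p = -grad(x)                     # search direction p_i
            u = 1.0                          # trial step value u_i
            while f(x + u * p) >= f(x):      # backtrack until f decreases
                u *= 0.5
                if u < 1e-12:
                    return x                 # no descent possible: stop
            x = x + u * p                    # x_{i+1} = x_i + u_i * p_i
        return x

    # Invented example: minimize F(x) = x1^2 + 10*x2^2
    f = lambda x: x[0]**2 + 10.0 * x[1]**2
    grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
    print(iterate(f, grad, [1.0, 1.0]))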


Exactly the same types of step as for an unconstrained optimization can then be taken, using the modified as opposed to the regular gradient. [Pg.2347]

Fletcher R 1981 Practical Methods of Optimization vol 1—Unconstrained Optimization (New York: Wiley) [Pg.2355]

Dennis J E and Schnabel R B 1983 Numerical Methods for Unconstrained Optimization and Non-linear Equations (Englewood Cliffs, NJ: Prentice-Hall) [Pg.2355]

Powell M J D 1971 Recent advances in unconstrained optimization Math. Prog. 1 26... [Pg.2356]

HyperChem includes only unconstrained optimization. That is, given the coordinates, x, of a set of atoms, A (the inde-... [Pg.302]

Unconstrained optimization methods [W. H. Press et al., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1986, Chapter 10] can use values of only the objective function, or of first derivatives of the objective function, second derivatives of the objective function, etc. HyperChem uses first derivative information and, in the Block Diagonal Newton-Raphson case, second derivatives for one atom at a time. HyperChem does not use optimizers that compute the full set of second derivatives (the Hessian) because it is impractical to store the Hessian for macromolecules with thousands of atoms. A future release may make explicit-Hessian methods available for smaller molecules, but at this release only methods that store the first derivative information, or the second derivatives of a single atom, are used. [Pg.303]

Local Minimum Point for Unconstrained Problems. Consider the following unconstrained optimization problem... [Pg.484]

Unconstrained Optimization. Unconstrained optimization refers to the case where no inequality constraints are present and all equality constraints can be eliminated by solving for selected dependent variables followed by substitution for them in the objective function. Very few realistic problems in process optimization are unconstrained. However, it is desirable to have efficient unconstrained optimization techniques available since these techniques must be applied in real time and iterative calculations cost computer time. The two classes of unconstrained techniques are single-variable optimization and multivariable optimization. [Pg.744]

There are two basic types of unconstrained optimization algorithms: (1) those requiring function derivatives and (2) those that do not. The nonderivative methods are of interest in optimization applications because these methods can be readily adapted to the case in which experiments are carried out directly on the process. In such cases, an actual process measurement (such as yield) can be the objective function, and no mathematical model for the process is required. Methods that do not require derivatives are called direct methods and include sequential simplex (Nelder-Mead) and Powell's method. The sequential simplex method is quite satisfactory for optimization with two or three independent variables, is simple to understand, and is fairly easy to execute. Powell's method is more efficient than the simplex method and is based on the concept of conjugate search directions. [Pg.744]
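Both direct methods named above are available in standard libraries. A minimal sketch (not from the source) applies each to the Rosenbrock test function via SciPy; in the on-process case described above, the objective would instead be an actual measurement such as yield:

    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        # Rosenbrock test function; minimum at (1, 1)
        return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

    x0 = np.array([-1.2, 1.0])
    for method in ("Nelder-Mead", "Powell"):
        res = minimize(objective, x0, method=method)
        print(method, res.x, res.fun)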

The penalty function approach adds a term of the type k(r − r0)^2 to the function to be optimized. The variable r is constrained to be near the target value r0, and the force constant k describes how important the constraint is compared with the unconstrained optimization. By making k arbitrarily large, the constraint may be fulfilled to any given... [Pg.338]
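A minimal sketch (not from the source) of this idea; the objective and the restrained quantity r(x) are invented for illustration:

    import numpy as np
    from scipy.optimize import minimize

    r0 = 1.5      # target value for the restrained quantity (invented)
    k = 1.0e4     # force constant; larger k enforces the restraint more tightly

    def penalized(x):
        f = x[0]**2 + x[1]**2           # invented original objective
        r = x[0] + x[1]                 # invented restrained quantity r(x)
        return f + k * (r - r0)**2      # quadratic penalty added to f

    res = minimize(penalized, np.array([0.0, 0.0]))
    print(res.x, res.x.sum())           # x[0] + x[1] approaches r0 as k grows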

Compare the (unconstrained) optimal temperature profiles of 10-zone PFRs for the following cases where (a) the reactions are consecutive as per Equation (6.1) and endothermic (b) the reactions are consecutive and exothermic (c) the reactions are competitive as per Equation (6.6) and endothermic and (d) the reactions are competitive and exothermic. [Pg.204]

The random search technique can be applied to constrained or unconstrained optimization problems involving any number of parameters. The solution starts with an initial set of parameters that satisfies the constraints. A small random change is made in each parameter to create a new set of parameters, and the objective function is calculated. If the new set satisfies all the constraints and gives a better value for the objective function, it is accepted and becomes the starting point for another set of random changes. Otherwise, the old parameter set is retained as the starting point for the next attempt. The key to the method is the step that sets the new, trial values for the parameters ... [Pg.206]
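A minimal sketch (not from the source) of the procedure just described; the trial-step rule here, a uniform perturbation of bounded size, is an assumed stand-in for the book's formula:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_search(objective, feasible, x0, step=0.1, iters=10000):
        x = np.asarray(x0, dtype=float)
        fx = objective(x)
        for _ in range(iters):
            # small random change in each parameter
            trial = x + step * rng.uniform(-1.0, 1.0, size=x.shape)
            if feasible(trial):             # all constraints satisfied?
                f_trial = objective(trial)
                if f_trial < fx:            # better value: accept as new start
                    x, fx = trial, f_trial
            # otherwise the old parameter set is retained
        return x, fx

    # Invented example: minimize a quadratic subject to x >= 0
    x_best, f_best = random_search(
        objective=lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2,
        feasible=lambda x: bool(np.all(x >= 0.0)),
        x0=[0.5, 0.5],
    )
    print(x_best, f_best)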

R. Fletcher, Practical Methods of Optimization: Unconstrained Optimization, John Wiley & Sons, Inc., New York, 1987, vol. 1. [Pg.94]

There is a variety of general purpose unconstrained optimization methods that can be used to estimate unknown parameters. These methods are broadly classified into two categories: direct search methods and gradient methods (Edgar and Himmelblau, 1988; Gill et al., 1981; Kowalik and Osborne, 1968; Sargent, 1980; Reklaitis, 1983; Scales, 1985). [Pg.67]

A brief overview of this relatively vast subject is presented and several of these methods are briefly discussed in the following sections. Over the years, many comparisons of the performance of these methods have been carried out and reported in the literature. For example, Box (1966) evaluated eight unconstrained optimization methods using a set of problems with up to twenty variables. [Pg.67]

Gill, P.E. and W. Murray, "Quasi-Newton Methods for Unconstrained Optimization", J. Inst. Maths Applics, 9, 91-108 (1972). [Pg.395]

Goldfarb, D., "Factorized Variable Metric Methods for Unconstrained Optimization", Mathematics of Computation, 30 (136), 796-811 (1976). [Pg.395]

Kowalik, J., and M.R. Osborne, Methods for Unconstrained Optimization Problems, Elsevier, New York, NY, 1968. [Pg.397]

There are two general types of optimization problem: constrained and unconstrained. Constraints are restrictions placed on the system by physical limitations or perhaps by simple practicality (e.g., economic considerations). In unconstrained optimization problems there are no restrictions. For a given pharmaceutical system one might wish to make the hardest tablet possible. The constrained problem, on the other hand, would be stated: make the hardest tablet possible, but it must disintegrate in less than 15 minutes. [Pg.608]

Within the realm of physical reality, and most important in pharmaceutical systems, the unconstrained optimization problem is almost nonexistent. There are always restrictions that the formulator wishes to place or must place on a system, and in pharmaceuticals, many of these restrictions are in competition. For example, it is unreasonable to assume, as just described, that the hardest tablet possible would also have the lowest compression and ejection forces and the fastest disintegration time and dissolution profile. It is sometimes necessary to trade off properties, that is, to sacrifice one characteristic for another. Thus, the primary objective may not be to optimize absolutely (i.e., a maximum or minimum), but to realize an overall preselected or desired result for each characteristic or parameter. Drug products are often developed by reaching an effective compromise between competing characteristics to achieve the best formulation and process within a given set of restrictions. [Pg.608]

Fletcher, R. 1980. Practical Methods of Optimization, Unconstrained Optimization, Vol. 1. Wiley, New York. [Pg.47]

Instead of a formal development of conditions that define a local optimum, we present a more intuitive kinematic illustration. Consider the contour plot of the objective function f(x), given in Fig. 3-54, as a smooth valley in the space of the variables x1 and x2. For the contour plot of this unconstrained problem Min f(x), consider a ball rolling in this valley to the lowest point of f(x), denoted by x*. This point is at least a local minimum and is defined by a point with a zero gradient and at least nonnegative curvature in all (nonzero) directions p. We use the first-derivative (gradient) vector ∇f(x) and second-derivative (Hessian) matrix ∇²f(x) to state the necessary first- and second-order conditions for unconstrained optimality... [Pg.61]
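The conditions themselves were cut off in the excerpt above; the standard statements, supplied here for reference (not part of the quoted text), are:

    \nabla f(x^*) = 0 \qquad \text{(first-order necessary)}
    p^T \nabla^2 f(x^*)\, p \ge 0 \;\; \text{for all } p \ne 0 \qquad \text{(second-order necessary)}
    p^T \nabla^2 f(x^*)\, p > 0 \;\; \text{for all } p \ne 0 \qquad \text{(second-order sufficient, together with } \nabla f(x^*) = 0\text{)}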

Second-Order Conditions As with unconstrained optimization, nonnegative (positive) curvature is necessary (sufficient) in all the allowable (i.e., constrained) nonzero directions p. The necessary second-order conditions can be stated as... [Pg.61]
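The statement is again cut off; the standard form (supplied here, not from the quoted text) restricts the Hessian of the Lagrangian L to the allowable directions:

    p^T \nabla^2_{xx} L(x^*, \lambda^*)\, p \ge 0 \;\; \text{for all } p \ne 0
    \text{ with } \nabla h(x^*)^T p = 0 \text{ and } \nabla g_A(x^*)^T p = 0

where h collects the equality constraints and g_A the active inequality constraints; replacing \ge with > gives the corresponding sufficient condition.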

Although K appears linearly in both response equations, r1 in (2.12) and r1 and r2 in (2.13) appear nonlinearly, so that nonlinear least squares must be used to estimate their values. The specific details of how to carry out the computations will be deferred until we take up numerical methods of unconstrained optimization in Chapter 6. [Pg.62]

Dennis, J. E. and R. B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, New Jersey (1983) chapter 2. [Pg.176]

As mentioned earlier, nonlinear objective functions are sometimes nonsmooth due to the presence of functions like abs, min, max, or if-then-else statements, which can cause derivatives, or the function itself, to be discontinuous at some points. Unconstrained optimization methods that do not use derivatives are often able to solve nonsmooth NLP problems, whereas methods that use derivatives can fail. Methods employing derivatives can get stuck at a point of discontinuity, but the function-value-only methods are less affected. For smooth functions, however, methods that use derivatives are both more accurate and faster, and their advantage grows as the number of decision variables increases. Hence, we now turn our attention to unconstrained optimization methods that use only first partial derivatives of the objective function. [Pg.189]
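A minimal sketch (not from the source) of the contrast described above: a nonsmooth objective built from abs is minimized with the derivative-free Nelder-Mead method, while the derivative-based BFGS can stall near the kink at the solution.

    import numpy as np
    from scipy.optimize import minimize

    def nonsmooth(x):
        # abs() makes the objective nondifferentiable at the minimizer (1, 2)
        return abs(x[0] - 1.0) + abs(x[1] - 2.0)

    x0 = np.array([4.0, -3.0])
    for method in ("Nelder-Mead", "BFGS"):
        res = minimize(nonsmooth, x0, method=method)
        print(method, res.x, res.fun)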

Why is the steepest descent method not widely used in unconstrained optimization codes? [Pg.214]

Cite two circumstances in which the use of the simplex method of multivariate unconstrained optimization might be a better choice than a quasi-Newton method. [Pg.215]

In problems in which there are n variables and m equality constraints, we could attempt to eliminate m variables by direct substitution. If all equality constraints can be removed, and there are no inequality constraints, the objective function can then be differentiated with respect to each of the remaining (n − m) variables and the derivatives set equal to zero. Alternatively, a computer code for unconstrained optimization can be employed to obtain x*. If the objective function is convex (as in the preceding example) and the constraints form a convex region, then any stationary point is a global minimum. Unfortunately, very few problems in practice assume this simple form or even permit the elimination of all equality constraints. [Pg.266]
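A two-variable illustration of the elimination idea (an invented example, not from the source):

    \min_{x_1, x_2} \; x_1^2 + x_2^2 \quad \text{s.t.} \quad x_1 + x_2 = 1
    \;\Rightarrow\; x_2 = 1 - x_1, \qquad
    \frac{d}{dx_1}\bigl[x_1^2 + (1 - x_1)^2\bigr] = 4x_1 - 2 = 0
    \;\Rightarrow\; x_1^* = x_2^* = \tfrac{1}{2}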

The condition number of the Hessian matrix of the objective function is an important measure of difficulty in unconstrained optimization. By definition, the smallest a condition number can be is 1.0. A condition number of 10^5 is moderately large, 10^9 is large, and 10^14 is extremely large. Recall that, if Newton's method is used to minimize a function f, the Newton search direction s is found by solving the linear equations ∇²f(x) s = −∇f(x). [Pg.287]
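A minimal sketch (not from the source) of both quantities for a small, deliberately ill-conditioned example; the Hessian H and gradient g are invented values:

    import numpy as np

    # Invented Hessian H and gradient g of f at the current point
    H = np.array([[1000.0, 0.0],
                  [0.0,    0.01]])   # deliberately ill-conditioned
    g = np.array([2.0, -1.0])

    print("condition number:", np.linalg.cond(H))  # 1e5: moderately large
    s = np.linalg.solve(H, -g)                     # Newton search direction
    print("Newton direction:", s)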

Unconstrained optimization: nonlinear regression of VLE data (12.3); minimum work of compression (13.2). [Pg.416]

