Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Unconstrained optimization problem

Local Minimum Point for Unconstrained Problems Consider the following unconstrained optimization problem ... [Pg.484]

The random search technique can be applied to constrained or unconstrained optimization problems involving any number of parameters. The solution starts with an initial set of parameters that satisfies the constraints. A small random change is made in each parameter to create a new set of parameters, and the objective function is calculated. If the new set satisfies all the constraints and gives a better value for the objective function, it is accepted and becomes the starting point for another set of random changes. Otherwise, the old parameter set is retained as the starting point for the next attempt. The key to the method is the step that sets the new, trial values for the parameters ... [Pg.206]
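The accept/reject loop described above can be sketched as follows. This is a minimal illustration of the technique, not code from the cited source; the function names, the step size, and the example problem are all illustrative.

```python
import random

def random_search(objective, feasible, x0, step=0.1, iters=2000, seed=0):
    """Random search as described above: make a small random change in
    each parameter, and accept the trial point only if it satisfies the
    constraints and improves (here: decreases) the objective."""
    rng = random.Random(seed)
    x = list(x0)
    best = objective(x)
    for _ in range(iters):
        trial = [xi + rng.uniform(-step, step) for xi in x]
        if feasible(trial):
            f = objective(trial)
            if f < best:          # otherwise keep the old parameter set
                x, best = trial, f
    return x, best

# Illustrative problem: minimize (x-3)^2 + (y+1)^2 subject to x >= 0,
# starting from the feasible point (0, 0).
x, f = random_search(lambda p: (p[0] - 3)**2 + (p[1] + 1)**2,
                     lambda p: p[0] >= 0,
                     x0=[0.0, 0.0])
```

The key tuning choice, as the excerpt notes, is how the trial values are generated: a fixed step size is the simplest option, but shrinking the step as the search stalls is a common refinement.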

Kowalik, J., and M. R. Osborne, Methods for Unconstrained Optimization Problems, Elsevier, New York, NY, 1968. [Pg.397]

There are two general types of optimization problem: constrained and unconstrained. Constraints are restrictions placed on the system by physical limitations or perhaps by simple practicality (e.g., economic considerations). In unconstrained optimization problems there are no restrictions. For a given pharmaceutical system, one might wish to make the hardest tablet possible. The constrained problem, on the other hand, would be stated: make the hardest tablet possible, but it must disintegrate in less than 15 minutes. [Pg.608]

Within the realm of physical reality, and most importantly in pharmaceutical systems, the unconstrained optimization problem is almost nonexistent. There are always restrictions that the formulator wishes to place or must place on a system, and in pharmaceuticals, many of these restrictions are in competition. For example, it is unreasonable to assume, as just described, that the hardest tablet possible would also have the lowest compression and ejection forces and the fastest disintegration time and dissolution profile. It is sometimes necessary to trade off properties, that is, to sacrifice one characteristic for another. Thus, the primary objective may not be to optimize absolutely (i.e., find a maximum or minimum), but to realize an overall preselected or desired result for each characteristic or parameter. Drug products are often developed by reaching an effective compromise between competing characteristics to achieve the best formulation and process within a given set of restrictions. [Pg.608]

Remark. The dimension of the unconstrained optimization problem is smaller than that of the original one. [Pg.99]

In the case of potential energy functions, unconstrained optimization problems can generally be formulated for large, nonlinear, and smooth functions. Obtaining first and second derivatives may be tedious but is definitely... [Pg.19]
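Since the excerpt notes that obtaining first and second derivatives of a potential energy function can be tedious, a common practical safeguard is to verify hand-coded analytic derivatives against finite differences. The sketch below uses central differences on an illustrative function; it is not from the cited source.

```python
def central_diff_grad(f, x, h=1e-5):
    """Central-difference gradient, often used to check hand-coded
    analytic derivatives of a smooth (e.g., potential energy) function."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

# Illustrative check: f(x, y) = x^2 + 3y^2 has gradient (2x, 6y),
# so at (1, 2) the analytic gradient is [2, 12].
f = lambda p: p[0]**2 + 3 * p[1]**2
g = central_diff_grad(f, [1.0, 2.0])
```

Central differences carry an O(h^2) truncation error, which for smooth functions is usually far below the tolerance needed to catch a sign or indexing bug in an analytic gradient.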


Given that there are several very high-performance methods for solving unconstrained optimization problems, is it possible to transform a constrained optimization problem into an unconstrained optimization problem? ... [Pg.419]

Chapter 4 is devoted to large-scale unconstrained optimization problems, where issues of matrix sparsity management and the ordering of rows and columns are broached. Hessian evaluation and Newton and inexact Newton methods are discussed. [Pg.517]

These conditions ensure the consistency conditions and polynomial exactness of zero order. To further simplify the optimization process, the coefficients h_{n-1}, h_n, g_{n-1}, g_n are expressed as linear combinations of h_1, ..., h_{n-2}, g_1, ..., g_{n-2}, and then we obtain an unconstrained optimization problem... [Pg.221]

From the given data, we then have n linear equations relating the actual and forecasted demands. Following the approach given in Srinivasan (2010), we can use the least-squares regression method to estimate the level (a) and trend (b) parameters. The unconstrained optimization problem is to determine a and b such that... [Pg.43]
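For the linear model D_t ≈ a + b·t, the unconstrained least-squares problem has a standard closed-form solution via the normal equations. The sketch below is generic simple linear regression with illustrative demand data; it is not taken from Srinivasan (2010).

```python
def fit_level_trend(demand):
    """Least-squares estimates of level a and trend b for the model
    D_t ~ a + b*t, t = 1..n, minimizing sum_t (D_t - a - b*t)^2.
    Uses the closed-form normal equations of simple linear regression."""
    n = len(demand)
    t = list(range(1, n + 1))
    tbar = sum(t) / n
    dbar = sum(demand) / n
    b = (sum((ti - tbar) * (di - dbar) for ti, di in zip(t, demand))
         / sum((ti - tbar) ** 2 for ti in t))
    a = dbar - b * tbar
    return a, b

# Illustrative demand series rising by roughly 6.5 units per period:
a, b = fit_level_trend([102, 108, 115, 121, 128])
```

Because the objective is a convex quadratic in (a, b), this closed form gives the exact global minimizer; no iterative solver is needed.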

Problems with the general form of Eq. (15.1) can be classified according to the nature of the objective function and constraints (linear, nonlinear, convex), the number of variables (large or small), the smoothness of the functions (differentiable or non-differentiable), and so on. An important distinction is between problems that have constraints on the variables and those that do not. Unconstrained optimization problems, for which the constraint sets in Eq. (15.1) are empty, arise in many practical... [Pg.427]

The CE method is directed towards the solution of unconstrained optimization problems. Thus, in order to find X, it is necessary to pose the problem in Equation 4 (a constrained optimization problem) in an alternative way. This is done by incorporating the constraint as a penalty term in the objective function, i.e. ... [Pg.12]
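The penalty transformation mentioned above can be sketched generically. The quadratic form used here is a common choice, not necessarily the specific penalty term of the cited source, and the example problem is illustrative.

```python
def penalized(f, constraints, mu):
    """Quadratic-penalty transformation: fold equality constraints
    c_i(x) = 0 into the objective, so that an unconstrained method
    (such as the CE method) can be applied to the result."""
    def F(x):
        return f(x) + mu * sum(c(x) ** 2 for c in constraints)
    return F

# Illustrative problem: minimize x^2 + y^2 subject to x + y = 1.
# The constrained minimizer is (0.5, 0.5), where the penalty vanishes.
F = penalized(lambda p: p[0]**2 + p[1]**2,
              [lambda p: p[0] + p[1] - 1],
              mu=100.0)
```

With a fixed penalty weight mu the unconstrained minimizer only approximates the constrained one; in practice mu is increased over a sequence of unconstrained solves.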

As already mentioned above, the suggested objective function requires the solution of an unconstrained optimization problem with cross-sectional areas as design variables. However, in general structural systems the plastic modulus W and the moment of inertia I prove to be significant as well. To maintain the considerable advantage of having only one design variable for each structural member, a direct relationship to the cross section is assumed, i.e. ... [Pg.58]

The local unconstrained optimization problem in the Euclidean space R^n can be stated as in equation (1) for x ∈ D ⊂ R^n, where D is a region in the neighborhood of the starting point x0. The global optimization problem requires D to be the entire feasible space. [Pg.1144]

This formulation leads to an unconstrained optimization problem, where the feasible design set is limited only by the bounds of the design variables. In this context, several optimization... [Pg.3818]

In the preceding sections, we considered only unconstrained optimization problems, in which x may take any value. Here, we extend these methods to constrained minimization problems, where to be acceptable (or feasible), x must satisfy a number e of equality constraints g_i(x) = 0 and a number n of inequality constraints h_j(x) > 0, where each g_i(x) and h_j(x) is assumed to be a differentiable nonlinear function. This constrained optimization problem... [Pg.231]
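The feasibility condition just stated can be checked directly for any candidate point. The sketch below is an illustration of that definition, with hypothetical constraint functions; equality constraints are tested to a tolerance because they rarely hold exactly in floating point.

```python
def is_feasible(x, eqs, ineqs, tol=1e-8):
    """Feasibility test for the constrained problem above:
    every g_i(x) = 0 (within tol) and every h_j(x) > 0."""
    return (all(abs(g(x)) <= tol for g in eqs)
            and all(h(x) > 0 for h in ineqs))

# Illustrative constraints: x must lie on the unit circle
# g(x) = x1^2 + x2^2 - 1 = 0, with x1 > 0.
eqs = [lambda p: p[0]**2 + p[1]**2 - 1]
ineqs = [lambda p: p[0]]
ok = is_feasible([1.0, 0.0], eqs, ineqs)
bad = is_feasible([0.0, 2.0], eqs, ineqs)
```

A test like this is useful both for validating starting points and for checking the output of a constrained solver.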

