Big Chemical Encyclopedia


Constrained minimization problems

The point where the constraint is satisfied, (x0, y0), may or may not belong to the data set (xi, yi), i = 1, ..., N. The above constrained minimization problem can be transformed into an unconstrained one by introducing the Lagrange multiplier, ω, and augmenting the least squares objective function to form the Lagrangian, ... [Pg.159]
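A minimal sketch of this idea, with hypothetical data and a hypothetical straight-line model (neither appears in the source): fit y = b0 + b1*x by least squares subject to the line passing through a point (x0, y0) that need not belong to the data, by solving the stationarity system of the Lagrangian with multiplier w.

```python
import numpy as np

# Hypothetical data set (x_i, y_i); the constraint point (x0, y0) is not in it.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 20)
y = 1.5 * x + 0.7 + 0.1 * rng.standard_normal(x.size)
x0, y0 = 2.0, 4.0

X = np.column_stack([np.ones_like(x), x])   # design matrix for y = b0 + b1*x
c = np.array([1.0, x0])                     # constraint: b0 + b1*x0 = y0

# Lagrangian L(beta, w) = ||X beta - y||^2 + w * (c^T beta - y0).
# Stationarity (2 X^T X beta + w c = 2 X^T y) plus the constraint row:
K = np.block([[2.0 * X.T @ X, c[:, None]],
              [c[None, :],    np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([2.0 * X.T @ y, [y0]]))
beta, w = sol[:2], sol[2]
print(beta, c @ beta)   # fitted coefficients; c @ beta equals y0
```

The multiplier w measures how strongly the constraint distorts the unconstrained least squares fit.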

As we mentioned earlier, this is not a typical constrained minimization problem, although the development of the solution method is very similar to the material presented in Chapter 9. If we assume that an estimate k(j) is available at the j-th iteration, a better estimate, k(j+1), of the parameter vector is obtained as follows. [Pg.240]

A radically different approach to the steady-state problem was investigated by Hsing (H6). In this approach the steady-state flow problem was formulated as the following constrained minimization problem ... [Pg.159]

The design of the network calls for the selection of pipe diameters such that the discharge through each valve attains the maximum (sonic) velocity for an initial transitory period. Since the flare pressure and the process unit pressures are specified, this requirement amounts to the stipulation of a maximum allowable pressure drop over each path Sj (labeled with a roman numeral) from the valve to the flare. The optimal design in this case may be formulated as the following constrained minimization problem ... [Pg.176]

The constrained minimization problem stated above may be transformed into a form well-suited to gradient projection methods of nonlinear programming by making the following substitution ... [Pg.177]

The adjustment of measurements to compensate for random errors involves the resolution of a constrained minimization problem, usually one of constrained least squares. Balance equations are included in the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustment of measurements, and it has the covariance matrix of measurement errors as weights. Thus, this matrix is essential in obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in the estimation of this matrix is associated with the analysis of the serial and cross correlation of the data. [Pg.25]
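A minimal data-reconciliation sketch with hypothetical numbers (flows, covariance, and the balance are all assumed, not from the source): three measured flows around a splitter must satisfy a linear balance, and the adjustment minimizes the covariance-weighted quadratic objective.

```python
import numpy as np

# Adjust measurements m to satisfy A x = 0 while minimizing
# (x - m)^T Sigma^{-1} (x - m); Sigma is the (assumed) covariance
# matrix of the measurement errors.
m = np.array([100.0, 61.0, 41.5])       # raw flows; balance is violated
Sigma = np.diag([2.0, 1.0, 1.0]) ** 2   # measurement-error covariance
A = np.array([[1.0, -1.0, -1.0]])       # balance: F1 - F2 - F3 = 0

# Closed-form solution of the linearly constrained least-squares problem:
x_hat = m - Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, A @ m)
print(x_hat, A @ x_hat)   # reconciled flows satisfy the balance
```

Note how the covariance weighting pushes most of the adjustment onto the measurement with the largest assumed variance (F1 here).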

This problem can be translated into one of linear programming. Introducing the variables s > 0, we first construct an equivalent constrained minimization problem given by... [Pg.51]
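The source's equivalent problem is elided, so as an illustration of the general device (introducing nonnegative slack variables s to obtain a linear program), here is a hypothetical least-absolute-deviations line fit: minimize the sum of s_i subject to -s_i <= y_i - (b0 + b1 x_i) <= s_i and s_i >= 0.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data; the last point is an outlier.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.2, 1.9, 3.2, 8.0])
X = np.column_stack([np.ones_like(x), x])
n, p = X.shape

# Decision vector [b0, b1, s_1..s_n]; minimize the sum of the slacks.
cost = np.concatenate([np.zeros(p), np.ones(n)])
A_ub = np.block([[ X, -np.eye(n)],    #  X b - s <= y
                 [-X, -np.eye(n)]])   # -X b - s <= -y
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * n   # b free, s >= 0
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
b0, b1 = res.x[:p]
print(b0, b1, res.fun)   # robust slope; OLS slope would be pulled to ~1.8
```

At the optimum each s_i equals |y_i - b0 - b1 x_i|, so the LP objective is exactly the sum of absolute residuals.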

In addition, we are interested in functions that are at least twice continuously differentiable. One can draw several such curves satisfying (4.27), and the "smoothest" of them is the one minimizing the integral (4.19). It can be shown that the solution of this constrained minimization problem is a natural cubic spline (ref. 12). We call it a smoothing spline. [Pg.241]

A multi-objective constrained minimization problem is represented as in Eq. (5.1). [Pg.132]

The weighted smoothing method tries to find a compromise between the two contradictory requirements of high smoothness and low smoothing error. This compromise is controlled via an additional weighting parameter λ > 0, and the following constrained minimization problem is solved ... [Pg.62]
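The source's problem statement is elided; a minimal sketch of the same smoothness-versus-fidelity trade-off (hypothetical data, and a Whittaker-type discrete smoother standing in for the source's formulation) minimizes ||z - y||^2 + lam * ||D z||^2, where D is the second-difference operator and lam plays the role of the weight λ.

```python
import numpy as np

# Hypothetical noisy signal.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 100)
y = np.sin(t) + 0.2 * rng.standard_normal(t.size)

def smooth(y, lam):
    """Minimizer of ||z - y||^2 + lam * ||D z||^2 (D = 2nd differences)."""
    n = y.size
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

def roughness(z):
    return float(np.sum(np.diff(z, 2) ** 2))     # smoothness measure

def misfit(z):
    return float(np.sum((z - y) ** 2))           # smoothing error

z_lo, z_hi = smooth(y, 0.1), smooth(y, 1e4)
# Larger lam: smoother curve, but a larger smoothing error.
```

Sweeping lam traces out the compromise curve between the two competing objectives.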

This taboo search approach is then realized by solving the constrained minimization problems ... [Pg.63]

Therefore, the Levenberg-Marquardt method solves the problem of the constrained minimum for the specific value of d for which the relation (3.112), with the solution d of the system (3.122) inserted, is verified. This means that d is a function of the damping parameter γ. By varying γ, a new constrained minimization problem is solved, with a different d. [Pg.123]
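A sketch of this mechanism with a hypothetical Jacobian and residual vector (γ is the damping parameter; the source's equation numbers are not reproduced): for each γ, the step d solving (J^T J + γ I) d = -J^T r is the constrained minimizer of the linearized sum of squares over a ball whose radius shrinks as γ grows.

```python
import numpy as np

def lm_step(J, r, gamma):
    """Damped step of the Levenberg-Marquardt type: (J^T J + g I) d = -J^T r."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + gamma * np.eye(n), -J.T @ r)

J = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # hypothetical Jacobian
r = np.array([1.0, -1.0, 2.0])                       # hypothetical residuals
norms = [np.linalg.norm(lm_step(J, r, g)) for g in (0.0, 1.0, 10.0, 100.0)]
print(norms)   # step length shrinks monotonically as gamma increases
```

At γ = 0 the step reduces to the Gauss-Newton step; as γ → ∞ it turns toward a short steepest-descent step.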

This chapter provides an informal discussion of the basic concepts behind the minimization of a function F(x) with constrained variables x. The necessary and sufficient conditions to solve a constrained minimization problem are called the KKT conditions (after Karush, Kuhn, and Tucker), or the Fritz John conditions in certain specific situations. [Pg.344]
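A toy numerical check of the KKT stationarity condition on a hypothetical problem (not from the source): minimize x1^2 + x2^2 subject to x1 + x2 = 1. At the solution, grad F + λ grad g must vanish.

```python
import numpy as np
from scipy.optimize import minimize

# Equality-constrained toy problem; SLSQP handles the constraint.
res = minimize(lambda x: x @ x, x0=[0.0, 0.0],
               constraints=[{"type": "eq",
                             "fun": lambda x: x[0] + x[1] - 1.0}])
x = res.x
grad_f = 2.0 * x                       # gradient of the objective
grad_g = np.array([1.0, 1.0])          # gradient of the constraint
# Least-squares estimate of the multiplier from stationarity:
lam = -(grad_f @ grad_g) / (grad_g @ grad_g)
kkt_residual = np.linalg.norm(grad_f + lam * grad_g)
print(x, lam, kkt_residual)   # x near (0.5, 0.5), residual near zero
```

The recovered multiplier λ = -1 is the sensitivity of the optimal objective to a relaxation of the constraint.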

Suppose we know an algorithm that converges quickly to the solution of a constrained minimization problem. [Pg.427]

As already emphasized, it is crucial to insert the bounds on each variable in the constrained minimization problems since the search region is reduced, and special features of said bounds can be exploited. [Pg.434]

In the BzzMath library, the class used to solve a constrained minimization problem... [Pg.441]

The constraint in Eq. (19.93) ensures that the estimator does not affect the optimality of the estimator Ay when the two are combined. The constrained minimization problem in Eq. (19.93) results in an estimator that is simply the complex exponential of the noisy signal. [Pg.2092]

However, for the multivariable case, the secant relation alone is not enough to define B. A possibility consists of, given a matrix B, calculating the least change to it that satisfies the secant formula [42]. This is a constrained minimization problem that can be written as follows ... [Pg.325]

Solve the constrained minimization problem using NPSOL. [Pg.347]

Apart from the quasi-Newton condition we have no further information about M. Thus we should preserve as much as possible of what we already have. This is done by choosing the new matrix as the solution of the constrained minimization problem... [Pg.49]

A formula (procedure) which modifies a given matrix so that the new matrix is a solution of the constrained minimization problem (14) is called a least-change update. The term "update" will be clear when we consider some updates in detail. [Pg.49]

The MS-update (see Eq. (17)) is not a least-change update, i.e. it cannot be obtained as a solution of the constrained minimization problem (14), but it belongs to another important class of updates, namely to Broyden's class (see below). A symmetric least-change update must be at least of rank two. [Pg.50]
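The source's problem (14) is not reproduced here, but the best-known concrete example of a (nonsymmetric, Frobenius-norm) least-change update is Broyden's update: the minimal change to B that enforces the secant condition B_new s = y. A sketch with hypothetical numbers:

```python
import numpy as np

def broyden_update(B, s, y_vec):
    """Rank-one least-change (Frobenius) update enforcing B_new @ s = y_vec."""
    return B + np.outer(y_vec - B @ s, s) / (s @ s)

B = np.eye(3)                          # hypothetical current approximation
s = np.array([1.0, 2.0, -1.0])         # step
y_vec = np.array([0.5, 1.0, 3.0])      # change in the mapped values
B_new = broyden_update(B, s, y_vec)
print(B_new @ s)   # equals y_vec: the secant condition holds
```

The change B_new - B has rank one, which is the sense in which it is minimal; as the excerpt notes, a symmetric least-change update (e.g. of the BFGS family) must be at least of rank two.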

A convenient way to solve constrained minimization problems is by using a Lagrangian function of the problem defined in Eq. [20] ... [Pg.311]

In the preceding sections, we considered only unconstrained optimization problems, in which x may take any value. Here, we extend these methods to constrained minimization problems, where to be acceptable (or feasible), x must satisfy a number e of equality constraints gi(x) = 0 and a number n of inequality constraints hj(x) ≥ 0, where each gi(x) and hj(x) is assumed to be a differentiable nonlinear function. This constrained optimization problem... [Pg.231]
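A minimal instance of this general form, with hypothetical functions (e = 1 equality and n = 1 inequality constraint), solved with SciPy's SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

# minimize (x1 - 2)^2 + (x2 - 1)^2
#   s.t.  g1(x) = x1 + x2 - 2 = 0    (equality)
#         h1(x) = x1 - 0.5     >= 0  (inequality)
res = minimize(lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2,
               x0=[0.0, 0.0],
               constraints=[{"type": "eq",
                             "fun": lambda x: x[0] + x[1] - 2.0},
                            {"type": "ineq",
                             "fun": lambda x: x[0] - 0.5}])
print(res.x)   # projection of (2, 1) onto the line x1 + x2 = 2: (1.5, 0.5)
```

Here the inequality is inactive at the solution, so only the equality constraint shapes the optimum; the projection can be verified by hand.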


See other pages where Constrained minimization problems is mentioned: [Pg.428]    [Pg.166]    [Pg.237]    [Pg.177]    [Pg.187]    [Pg.258]    [Pg.177]    [Pg.756]    [Pg.309]    [Pg.315]    [Pg.320]    [Pg.642]    [Pg.46]    [Pg.52]    [Pg.53]    [Pg.528]    [Pg.403]   








© 2024 chempedia.info