Big Chemical Encyclopedia


Inequality equality-constrained problems

Inequality Constrained Problems To solve inequality-constrained problems, a strategy is needed that can decide which of the inequality constraints should be treated as equalities. Once that question is decided, a GRG type of approach can be used to solve the resulting equality-constrained problem. Solving can be split into two phases: phase 1, where the goal is to find a point that is feasible with respect to the inequality constraints, and phase 2, where one seeks the optimum while maintaining feasibility. Phase 1 is often accomplished by ignoring the objective function and using instead... [Pg.486]
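The phase-1 idea can be sketched as minimizing a measure of constraint violation while ignoring the objective entirely; a minimal Python illustration (the particular constraints, starting point, and step size are hypothetical choices, not from the text):

```python
# Phase 1: find a point feasible with respect to the inequality constraints
#   x >= 1  and  x + y <= 4
# by minimizing the sum of squared violations with gradient descent.
# The objective function plays no role in this phase.

def violations(x, y):
    v1 = max(0.0, 1.0 - x)        # violation of x >= 1
    v2 = max(0.0, x + y - 4.0)    # violation of x + y <= 4
    return v1, v2

def phi(x, y):
    v1, v2 = violations(x, y)
    return v1 * v1 + v2 * v2

x, y = 0.0, 10.0                  # deliberately infeasible start
lr = 0.05
for _ in range(2000):
    v1, v2 = violations(x, y)
    gx = -2.0 * v1 + 2.0 * v2     # d(phi)/dx
    gy = 2.0 * v2                 # d(phi)/dy
    x -= lr * gx
    y -= lr * gy

print(phi(x, y))  # essentially zero: a feasible point has been found
```

Once phi reaches zero the point satisfies every inequality, and phase 2 can begin from it.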

Equality- and Inequality-Constrained Problems—Kuhn-Tucker Multipliers Next a point is tested to see if it is an optimum one when there are inequality constraints. The problem is... [Pg.484]

This technique, combined with Fenske shortcut calculations for generating initial estimates of temperature profiles and stage liquid or vapor flow rates, is a robust method that can solve a large percentage of different types of separation processes. The algorithm also has provision for handling inequality specifications (Brannock et al., 1977). For each inequality specification, an alternate equality specification is required to ensure a unique solution. In this manner, the so-called over-constrained problems may be solved since inequality specifications are not subject to degrees of freedom restrictions. [Pg.453]

In an attempt to avoid the ill-conditioning that occurs in the regular penalty and barrier function methods, Hestenes (1969) and Powell (1969) independently developed a multiplier method for solving nonlinearly constrained problems. This multiplier method was originally developed for equality constraints and involves optimizing a sequence of unconstrained augmented Lagrangian functions. It was later extended to handle inequality constraints by Rockafellar (1973). [Pg.2561]

Here we consider the augmented Lagrangian method, which converts the constrained problem into a sequence of unconstrained minimizations. We first treat equality constraints, and then extend the method to include inequality constraints. [Pg.232]
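A minimal sketch of the equality-constrained case in Python (the example problem, penalty parameter, and iteration counts are illustrative choices, not from the text):

```python
# Augmented Lagrangian for: minimize x^2 + y^2  subject to  h(x, y) = x + y - 1 = 0.
# Each outer iteration minimizes the unconstrained augmented Lagrangian
#   L(x, y) = x^2 + y^2 + lam*h + (rho/2)*h^2,
# then updates the multiplier estimate: lam <- lam + rho*h.

rho = 10.0     # penalty parameter
lam = 0.0      # multiplier estimate
x, y = 0.0, 0.0

for _ in range(20):                       # outer (multiplier) iterations
    for _ in range(500):                  # inner unconstrained minimization
        h = x + y - 1.0
        gx = 2.0 * x + lam + rho * h      # gradient of L w.r.t. x
        gy = 2.0 * y + lam + rho * h      # gradient of L w.r.t. y
        x -= 0.05 * gx
        y -= 0.05 * gy
    lam += rho * (x + y - 1.0)            # multiplier update

# Converges to the known solution x = y = 0.5 with multiplier lam = -1.
print(x, y, lam)
```

Unlike a pure penalty method, rho stays fixed here; the multiplier update does the work, which is what avoids the ill-conditioning mentioned above.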

An important class of the constrained optimization problems is one in which the objective function, equality constraints and inequality constraints are all linear. A linear function is one in which the dependent variables appear only to the first power. For example, a linear function of two variables x1 and x2 would be of the general form ... [Pg.43]

In some problems the possible region of independent variables is defined by equality or inequality constraints. As you have seen in Section 1.2, such constrained extremum problems are easy to solve if both the constraints and the... [Pg.69]

In this approach, the process variables are partitioned into dependent variables and independent variables (optimisation variables). For each choice of the optimisation variables (sometimes referred to as decision variables in the literature) the simulator (model solver) is used to converge the process model equations (described by a set of ODEs or DAEs). Therefore, the method includes two levels. The first level performs the simulation to converge all the equality constraints and to satisfy the inequality constraints and the second level performs the optimisation. The resulting optimisation problem is thus an unconstrained nonlinear optimisation problem or a constrained optimisation problem with simple bounds for the associated optimisation variables plus any interior or terminal point constraints (e.g. the amount and purity of the product at the end of a cut). Figure 5.2 describes the solution strategy using the feasible path approach. [Pg.135]

After the inequality constraints have been converted to equalities, the complete set of restrictions becomes a set of linear equations with n unknowns. The linear-programming problem then will involve, in general, maximizing or minimizing a linear objective function for which the variables must satisfy the set of simultaneous restrictive equations with the variables constrained to be nonnegative. Because there will be more unknowns in the set of simultaneous equations than there are equations, there will be a large number of possible solutions, and the final solution must be chosen from the set of possible solutions. [Pg.384]
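The conversion of inequalities to equalities can be made concrete with a small numerical check in Python (the particular constraints and trial point are hypothetical):

```python
# Convert the inequalities
#   2*x1 +   x2 <= 8
#     x1 + 3*x2 <= 6
# to equalities by adding nonnegative slack variables s1, s2:
#   2*x1 +   x2 + s1 = 8
#     x1 + 3*x2 + s2 = 6

x1, x2 = 1.0, 1.0                 # a trial point

s1 = 8.0 - (2.0 * x1 + x2)        # slack in the first constraint
s2 = 6.0 - (x1 + 3.0 * x2)        # slack in the second constraint

# The equalities now hold exactly, and feasibility of the original
# inequalities is equivalent to s1 >= 0 and s2 >= 0.
print(s1, s2)
```

With the slacks added there are four unknowns but only two equations, which is exactly the surplus of unknowns over equations described above.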

Linear Programming The combined term linear programming is given to any method for finding where a given linear function of several variables takes on an extreme value, and what that value is, when the variables are nonnegative and are constrained by linear equalities or inequalities. A very general problem consists of maximizing f = Σ_{j=1}^n c_j z_j subject to the constraints z_j ≥ 0 (j = 1, 2, ..., n) and Σ_{j=1}^n a_ij z_j ≤ b_i (i = 1, 2, ..., m). With S the set of all points whose coordinates z_j satisfy all the constraints, we must ask three questions: (1) Are the constraints consistent? If not, S is empty and there is no solution. (2) If S is not empty, does the function f become unbounded on S? If so, the problem has no solution. If not, then there is a point P of S that is optimal in the sense that if Q is any point of S then f(Q) ≤ f(P). (3) How can we find P? ... [Pg.313]
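For a tiny two-variable problem, questions (1)-(3) can be answered by brute force: every candidate optimum of a bounded, consistent LP lies where two constraint boundaries intersect. A Python sketch (the problem data are invented for illustration):

```python
# Maximize f = 3*z1 + 2*z2 subject to
#   z1 + z2 <= 4,   z1 <= 2,   z1 >= 0,   z2 >= 0.
# Enumerate intersections of pairs of constraint boundaries, keep the
# feasible ones (the corner points of S), and take the best.

from itertools import combinations

# Each row (a1, a2, b) encodes the inequality a1*z1 + a2*z2 <= b.
cons = [(1.0, 1.0, 4.0), (1.0, 0.0, 2.0), (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]

def feasible(z1, z2):
    return all(a1 * z1 + a2 * z2 <= b + 1e-9 for a1, a2, b in cons)

best, best_f = None, float("-inf")
for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:          # parallel boundaries: no intersection
        continue
    z1 = (b * c2 - a2 * d) / det  # Cramer's rule for the 2x2 system
    z2 = (a1 * d - b * c1) / det
    if feasible(z1, z2):
        f = 3.0 * z1 + 2.0 * z2
        if f > best_f:
            best, best_f = (z1, z2), f

print(best, best_f)  # optimum at (2.0, 2.0) with f = 10.0
```

Enumeration grows combinatorially with problem size; the simplex method visits these same corner points but in an organized, efficient order.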

The conditions yielding the unconstrained maximum centerline deposition rate give a deposition uniformity of only about 25%. While this may well be acceptable for some fiber coating processes, there are likely applications for which it is not. We now consider the problem of maximizing the centerline deposition rate, subject to an additional constraint that the deposition uniformity satisfies some minimum requirement. Assuming that the required uniformity is better than that obtained in the unconstrained case, the constrained maximum centerline deposition rate should occur when the uniformity constraint is just marginally satisfied. This permits replacing the inequality constraint of a minimum uniformity by an equality constraint that is satisfied exactly. [Pg.197]

In summary, condition 1 gives a set of n algebraic equations, and conditions 2 and 3 give a set of m constraint equations. The inequality constraints are converted to equalities using slack variables. A total of n + m constraint equations are solved for n variables and m Lagrange multipliers that must satisfy the constraint qualification. Condition 4 determines the values of the slack variables. This theorem gives an indirect problem in which a set of algebraic equations is solved for the optimum of a constrained optimization problem. [Pg.2443]
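The algebraic conditions can be verified numerically at a candidate point. A sketch for a hypothetical one-constraint problem (the problem data and multiplier value are illustrative, not from the text):

```python
# Verify stationarity and complementary slackness at the known optimum of
#   minimize x^2 + y^2   subject to   g(x, y) = x + y - 1 >= 0.
# At (0.5, 0.5) the constraint is active with multiplier mu = 1.

x, y, mu = 0.5, 0.5, 1.0

g = x + y - 1.0                   # constraint value (zero when active)

# Stationarity: grad f - mu * grad g = 0,
# with grad f = (2x, 2y) and grad g = (1, 1).
rx = 2.0 * x - mu
ry = 2.0 * y - mu

# All three residuals vanish, so the candidate satisfies the conditions.
print(abs(rx), abs(ry), abs(mu * g))
```

If any residual were nonzero, the candidate point would fail the optimality test and the active set or multipliers would need revision.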


When solving an inequality-constrained optimal control problem numerically, it is impossible to determine exactly which constraints are active. The reason is that one cannot obtain a multiplier μ exactly equal to zero. This difficulty is surmounted by considering a constraint to be active if the corresponding μ < α, where α is a small positive tolerance chosen depending on the problem. Slack variables may be used to convert inequalities into equalities and utilize the Lagrange Multiplier Rule. [Pg.115]
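The tolerance test described above can be sketched as follows (the values and tolerance are hypothetical):

```python
# Classify constraints as active or inactive by comparing each quantity
# that should vanish at an active constraint against a small tolerance,
# since numerical solutions never produce an exact zero.

TOL = 1e-6

# Hypothetical values of the tested quantity for four constraints:
values = [3.2e-9, 0.41, 1.7e-8, 0.05]

active = [i for i, v in enumerate(values) if abs(v) < TOL]
print(active)  # constraints 0 and 2 are treated as active
```

The choice of tolerance is problem-dependent: too tight and genuinely active constraints are missed, too loose and inactive ones are wrongly frozen as equalities.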

This section deals with optimal control problems constrained by algebraic equalities and inequalities. [Pg.163]

The simplest optimization problems are those without equality constraints, inequality constraints, and lower and upper bounds. They are referred to as unconstrained optimization. Otherwise, if one or more constraints apply, the problem is one in constrained optimization. [Pg.619]

Constrained optimization is broached starting from Chapter 9. The constraints are split into three categories bounds, equality constraints, and inequality constraints. The relationship between primal and dual problems is discussed in further depth. [Pg.517]

As mentioned earlier, the developed algorithm employs dynopt to solve the intermediate problems associated with the local interaction of the agents. Specifically, dynopt is a set of MATLAB functions that use the orthogonal collocation on finite elements method for the determination of optimal control trajectories. As inputs, this toolbox requires the dynamic process model, the objective function to be minimized, and the set of equality and inequality constraints. The dynamic model here is described by the set of ordinary differential equations and differential algebraic equations that represent the fermentation process model. For the purpose of optimization, the MATLAB Optimization Toolbox, particularly the constrained nonlinear minimization routine fmincon [29], is employed. [Pg.122]

All the questions posed in the Data Collaboration framework appear as constrained optimization problems [72]. Typically, there are both inequality and equality constraints. If /, g, and h are functions, then a constrained optimization problem is of the form... [Pg.280]

This is a typical minimization problem for a function of n variables that can be solved using a Mathcad built-in function MINIMIZE. The latter implements gradient search algorithms to find the local minimum. The SSq function in this case is called the target function, and the unknown kinetic constants are the optimization parameters. When there are no additional limitations on the values of the optimization parameters or the sought function, we have a case of so-called unconstrained optimization. Likewise, if the unknown parameters or the target function itself are mathematically constrained with some equalities or inequalities, then one deals with constrained optimization. Such additional constraints are usually set on the basis of the physical nature of the problem (e.g. rate constants must be positive, a ratio of the direct reaction rate to that of the inverse one must equal the equilibrium constant, etc.). Constraints are sometimes also added in order to speed up the computations (for example, the value of the target function in the found minimum should not exceed some number TOL). [Pg.133]
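A minimal pure-Python illustration of such a constrained kinetic fit (the data are synthetic and the rate constant, grid, and bounds are hypothetical choices):

```python
import math

# Fit the rate constant k of first-order decay C(t) = C0 * exp(-k*t) by
# minimizing the SSq target function, with the physical constraint k > 0
# enforced by searching only over positive values.
# Synthetic "measurements" generated with the true value k = 0.5.

C0 = 1.0
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
data = [C0 * math.exp(-0.5 * t) for t in ts]

def ssq(k):
    """Sum of squared deviations between model and data."""
    return sum((C0 * math.exp(-k * t) - c) ** 2 for t, c in zip(ts, data))

# Coarse grid search restricted to the physically admissible region k > 0.
best_k = min((0.01 + i * 0.001 for i in range(2000)), key=ssq)
print(round(best_k, 3))  # recovers k = 0.5
```

A grid search is crude compared with the gradient methods MINIMIZE uses, but it makes the role of the positivity constraint explicit: negative k values are simply never evaluated.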

Optimization This refers to minimizing or maximizing a real-valued function f(x). The permitted values for x = (xj,..., xj can be either constrained or unconstrained. The linear programming problem is a well-known and important case f(x) is linear, and there are linear equality and/or inequality constraints on x. [Pg.37]

The number of independent variables in a constrained optimization problem can be found by a procedure analogous to the degrees of freedom analysis in Chapter 2. For simplicity, suppose that there are no constraints. If there are Ny process variables (which includes process inputs and outputs) and the process model consists of Ne independent equations, then the number of independent variables is Np = Ny - Ne. This means Np set points can be specified independently to maximize (or minimize) the objective function. The corresponding values of the remaining (Ny - Np) variables can be calculated from the process model. However, the presence of inequality constraints that can become active changes the situation, because the Np set points cannot be selected arbitrarily. They must satisfy all of the equality and inequality constraints. [Pg.377]

A quadratic programming problem minimizes a quadratic function of n variables subject to m linear inequality or equality constraints. A convex QP is the simplest form of a nonlinear programming problem with inequality constraints. A number of practical optimization problems are naturally posed as a QP problem, such as constrained least squares and some model predictive control problems. [Pg.380]
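A small worked instance of the constrained least-squares case mentioned above, solved by eliminating the equality constraint so the QP reduces to a one-variable quadratic (the data are invented for illustration):

```python
# Minimize ||A x - b||^2 subject to x1 + x2 = 1, with
#   A = [[1, 0], [0, 1], [1, 1]],   b = [1, 2, 2].
# Substituting x2 = 1 - x1 eliminates the equality constraint and leaves
# an unconstrained quadratic in x1 alone.

def objective(x1):
    x2 = 1.0 - x1
    r = [x1 - 1.0, x2 - 2.0, x1 + x2 - 2.0]   # residuals of A x - b
    return sum(v * v for v in r)

# Exact minimizer of a 1-D quadratic from three samples (parabola fit):
q_m, q_0, q_p = objective(-1.0), objective(0.0), objective(1.0)
a = (q_p - 2.0 * q_0 + q_m) / 2.0             # quadratic coefficient
b = (q_p - q_m) / 2.0                         # linear coefficient
x1 = -b / (2.0 * a)
x2 = 1.0 - x1

# The constrained minimizer is x = (0.0, 1.0).
print(x1, x2)
```

Elimination works here because the constraint is a single equality; with inequality constraints present, an active-set or interior-point QP method is needed instead.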

An optimization problem may be unconstrained, in which case each xj can take any real value, or it can be constrained, such that an allowable x must satisfy some collection of equality and inequality constraints... [Pg.212]

In the preceding sections, we considered only unconstrained optimization problems in which x may take any value. Here, we extend these methods to constrained minimization problems, where to be acceptable (or feasible), x must satisfy a number e of equality constraints gi(x) = 0 and a number n of inequality constraints hj(x) ≥ 0, where each gi(x) and hj(x) are assumed to be differentiable nonlinear functions. This constrained optimization problem... [Pg.231]

The augmented Lagrangian method is not the only approach to solving constrained optimization problems, yet a complete discussion of this subject is beyond the scope of this text. We briefly consider a popular, and efficient, class of methods, as it is used by fmincon: sequential quadratic programming (SQP). We will find it useful to introduce a common notation for the equality and inequality constraints using slack variables. [Pg.240]

