Big Chemical Encyclopedia


Active constraints

Constraint control strategies can be classified as steady-state or dynamic. In the steady-state approach, the process dynamics are assumed to be much faster than the frequency with which the constraint control application makes its control adjustments. The variables characterizing proximity to the constraints, called the constraint variables, are usually monitored more frequently than actual control actions are taken. A steady-state constraint application increases (or decreases) a manipulated variable by a fixed amount, whose value is judged safe based on an analysis of the proximity to the relevant constraints. Once the application has taken a control action toward or away from the constraint, it waits for the effect of that action to work through the lower control levels and the process before taking another step. Usually these steady-state constraint controls are implemented to move away from the active constraint at a faster rate than they move toward it. The main advantage of the steady-state approach is that it is predictable and relatively straightforward to implement. Its major drawback is that, because it does not account for the dynamics of the constraint and manipulated variables, a conservative estimate must be made of how close and how quickly the operation can be moved toward the active constraints. [Pg.77]
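As a rough illustration of the asymmetric stepping policy described above, the following sketch (all names, step sizes, and the deadband are hypothetical, not from the source) nudges a manipulated variable toward the constraint in small fixed steps and backs away in larger ones:

```python
def constraint_step(mv, margin, step_toward=0.5, step_away=1.5, deadband=0.1):
    """One steady-state constraint-control move (illustrative sketch).

    mv: current manipulated-variable value
    margin: distance to the constraint (positive = feasible side)
    Moves away from the constraint faster than toward it.
    """
    if margin < 0:            # constraint violated: back off quickly
        return mv - step_away
    if margin > deadband:     # comfortably feasible: creep toward constraint
        return mv + step_toward
    return mv                 # inside the deadband: wait for dynamics to settle
```

Between calls, the application would wait for the lower control levels and the process to settle, as the text describes.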

Based on the above, we can develop an "adaptive" Gauss-Newton method for parameter estimation with equality constraints whereby the set of active constraints (which are all equalities) is updated at each iteration. An example is provided in Chapter 14, where we examine the estimation of binary interaction parameters in cubic equations of state subject to predicting the correct phase behavior (i.e., avoiding erroneous two-phase split predictions under certain conditions). [Pg.166]

In order to maintain feasibility, periodic updating of state variables and relinearization of active constraints are carried out. [Pg.183]

Constraint Qualification For a local optimum to satisfy the KKT conditions, an additional regularity condition is required on the constraints. This can be defined in several ways. A typical condition is that the active constraints at x* be linearly independent, i.e., that the matrix [∇h(x*) | ∇g_A(x*)] have full column rank, where g_A is the vector of inequality constraints whose elements satisfy g_j(x*) = 0. With this constraint qualification, the KKT multipliers (λ, ν) are guaranteed to be unique at the optimal solution. [Pg.61]
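This qualification can be checked numerically by stacking the gradients of the equalities and active inequalities as columns and testing for full column rank (the function name and interface are illustrative, not from the source):

```python
import numpy as np

def licq_holds(grad_h, grad_gA):
    """Check linear independence of active-constraint gradients:
    the matrix [grad h | grad g_A] must have full column rank.

    grad_h:  list of equality-constraint gradient vectors at x*
    grad_gA: list of active inequality-constraint gradient vectors at x*
    """
    J = np.column_stack(list(grad_h) + list(grad_gA))
    return bool(np.linalg.matrix_rank(J) == J.shape[1])
```

For example, two orthogonal gradients in R^2 satisfy the qualification, while a constraint paired with a scalar multiple of itself does not.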

If the matrix Q is positive semidefinite (positive definite) when projected into the null space of the active constraints, then (3-98) is (strictly) convex and the QP solution is a global (and unique) minimum. Otherwise, local solutions exist for (3-98), and more extensive global optimization methods are needed to obtain the global solution. Like LPs, convex QPs can be solved in a finite number of steps. However, as seen in Fig. 3-57, these optimal solutions can lie on a vertex, on a constraint boundary, or in the interior. A number of active-set strategies have been created that solve the KKT conditions of the QP and incorporate efficient updates of active constraints. Popular methods include null-space algorithms, range-space methods, and Schur complement methods. As with LPs, QP problems can also be solved with interior point methods [see Wright (1996)]. [Pg.62]
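The projected-Hessian test can be sketched numerically: form an orthonormal basis Z for the null space of the active-constraint Jacobian A and check that the reduced Hessian ZᵀQZ has no negative eigenvalues (names and tolerances are illustrative assumptions):

```python
import numpy as np

def projected_hessian_psd(Q, A, tol=1e-10):
    """Is Q positive semidefinite on the null space of A?
    A's rows are the gradients of the active constraints."""
    # Orthonormal null-space basis Z from the SVD of A
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * max(A.shape)))
    Z = Vt[rank:].T                      # columns span null(A)
    if Z.shape[1] == 0:                  # trivial null space: vacuously PSD
        return True
    H = Z.T @ Q @ Z                      # reduced (projected) Hessian
    return bool(np.min(np.linalg.eigvalsh(H)) >= -tol)
```

For instance, Q = diag(1, −1) is indefinite, yet it is positive definite on the null space of the active constraint y = const, so the QP restricted to that constraint is still convex.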

As mentioned in Chapter 1, the occurrence of linear inequality constraints in industrial processes is quite common. Inequality constraints do not affect the count of the degrees of freedom unless they become active constraints. Examples of such constraints follow ... [Pg.69]

These can also become active constraints if the optimum lies on the constraint boundary. Note that we can also place inequality constraints on production of E, F, and G in order to satisfy market demand or sales constraints... [Pg.72]

We can state these ideas precisely as follows. Consider any optimization problem with n variables, let x be any feasible point, and let act(x) be the number of active constraints at x. Recall that a constraint is active at x if it holds as an equality there. Hence equality constraints are active at any feasible point, but an inequality constraint may be active or inactive. Remember to include simple upper or lower bounds on the variables when counting active constraints. We define the number of degrees of freedom (dof) at x as... [Pg.229]
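A sketch of this counting convention follows (helper names are hypothetical; inequalities are taken in the form g(x) ≤ 0, and equalities are assumed to hold at the feasible point, as the text notes):

```python
def count_active(x, eqs, ineqs, lb, ub, tol=1e-8):
    """Count active constraints at a feasible point x:
    every equality, each inequality holding with equality,
    and each simple bound that is attained."""
    act = len(eqs)                                   # equalities: always active
    act += sum(abs(g(x)) <= tol for g in ineqs)      # binding inequalities
    act += sum(abs(xi - l) <= tol or abs(xi - u) <= tol
               for xi, l, u in zip(x, lb, ub))       # binding simple bounds
    return act

def dof(x, eqs, ineqs, lb, ub):
    """Degrees of freedom at x: number of variables minus active constraints."""
    return len(x) - count_active(x, eqs, ineqs, lb, ub)
```

For example, at x = (0, 2) with bound x1 ≥ 0 attained and the inequality x1 + x2 ≤ 2 binding, act(x) = 2 and dof(x) = 0.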

In nonlinear programming problems, optimal solutions need not occur at vertices and can occur at points with positive degrees of freedom. It is possible to have no active constraints at a solution, for example in unconstrained problems. We consider nonlinear problems with constraints in Chapter 8. [Pg.229]

The KTC comprise both the necessary and sufficient conditions for optimality for smooth convex problems. In problem (8.25)-(8.26), if the objective f(x) and the inequality constraint functions g_j are convex, and the equality constraint functions h_j are linear, then the feasible region of the problem is convex, and any local minimum is a global minimum. Further, if x* is a feasible solution, if all the problem functions have continuous first derivatives at x*, and if the gradients of the active constraints at x* are independent, then x* is optimal if and only if the KTC are satisfied at x*. [Pg.280]
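For the inequality-only case, the KTC at a candidate point can be checked mechanically: stationarity of the Lagrangian, primal feasibility, nonnegative multipliers, and complementary slackness (the function name and interface are illustrative assumptions, not from the source):

```python
import numpy as np

def kkt_satisfied(grad_f, g_vals, grad_g, u, tol=1e-8):
    """Check the Kuhn-Tucker conditions at a candidate point for
    min f(x) s.t. g_j(x) <= 0 (inequalities only, for brevity).

    grad_f: gradient of f at x;  g_vals: constraint values g_j(x)
    grad_g: rows are gradients of g_j at x;  u: candidate multipliers
    """
    stationarity = np.allclose(grad_f + u @ grad_g, 0.0, atol=tol)
    feasibility = np.all(np.asarray(g_vals) <= tol)
    nonneg = np.all(np.asarray(u) >= -tol)
    complementarity = np.allclose(np.asarray(u) * np.asarray(g_vals), 0.0, atol=tol)
    return bool(stationarity and feasibility and nonneg and complementarity)
```

For min x² subject to 1 − x ≤ 0, the point x = 1 with multiplier u = 2 satisfies all four conditions, while the unconstrained stationary point x = 0 is infeasible.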

If no active constraints occur (so x is an unconstrained stationary point), then (8.32a) must hold for all vectors y, and the multipliers λ and u are zero, so ∇²L = ∇²f. Hence (8.32a) and (8.32b) reduce to the condition discussed in Section 4.5 that if the Hessian matrix of the objective function, evaluated at x, is positive-definite and x is a stationary point, then x is a local unconstrained minimum of f. [Pg.282]

The second-order necessary conditions require this matrix to be positive-semidefinite on the tangent plane to the active constraints at (0,0), as defined in expression (8.32b). Here, this tangent plane is the set... [Pg.283]

The requirement that there be at least n independent linearized constraints at x is included to rule out situations where, for example, some of the active constraints are just multiples of one another. In the example dof(x) = 0. [Pg.294]

If dof(x) = n − act(x) = d > 0, then there are more problem variables than active constraints at x, so the n − d active constraints can be solved for n − d dependent or basic variables, each of which depends on the remaining d independent or nonbasic variables. Generalized reduced gradient (GRG) algorithms use the active constraints at a point to solve for an equal number of dependent or basic variables in terms of the remaining independent ones, as does the simplex method for LPs. [Pg.295]

The SLP subproblem at (4, 3.167) is shown graphically in Figure 8.9. The LP solution is now at the point (4, 3.005), which is very close to the optimal point x*. This point (x*) is determined by linearization of the two active constraints, as are all further iterates. Now consider Newton's method for equation solving applied to the two active constraints, x² + y² = 25 and x² − y² = 7. Newton's method involves... [Pg.296]
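The Newton iteration on these two active constraints can be sketched directly; starting from the SLP iterate (4, 3.167), it converges to the optimum (4, 3):

```python
import numpy as np

def newton_active_constraints(x0, y0, tol=1e-10, max_iter=20):
    """Newton's method applied to the two active constraints of the example:
    x^2 + y^2 = 25 and x^2 - y^2 = 7 (solution at (4, 3))."""
    v = np.array([x0, y0], dtype=float)
    for _ in range(max_iter):
        x, y = v
        F = np.array([x**2 + y**2 - 25.0, x**2 - y**2 - 7.0])
        if np.max(np.abs(F)) < tol:
            break
        J = np.array([[2*x, 2*y],        # Jacobian of the two constraints
                      [2*x, -2*y]])
        v = v - np.linalg.solve(J, F)    # full Newton step
    return v
```

Each step solves the linearized constraints exactly, which is why, as the text notes, the iterates are determined by the constraint linearizations.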

Because we now have reached an active constraint, use it to solve for one variable in terms of the other, as in the earlier equality constrained example. Let x be the basic, or dependent, variable, and y and s the nonbasic (independent) ones. Solving the constraint for x in terms of y and the slack s yields... [Pg.311]

Because analytic solution of the active constraints for the basic variables is rarely possible, especially when some of the constraints are nonlinear, a numerical procedure must be used. GRG uses a variation of Newton's method which, in this example, works as follows. With s = 0, the equation to be solved for x is... [Pg.313]

This variation on Newton's method usually requires more iterations than the pure version, but it takes much less work per iteration, especially when there are two or more basic variables. In the multivariable case the matrix ∇g(x) (called the basis matrix, as in linear programming) replaces dg/dx in the Newton equation (8.85), and g(x₀) is the vector of active constraint values at x₀. [Pg.314]
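A one-variable sketch of this variation, with the derivative frozen at its initial value (in GRG the basis matrix is factored once and reused); the test equation below is illustrative, not the one from the text:

```python
def frozen_newton(g, dg0, x0, tol=1e-10, max_iter=50):
    """Newton variant that reuses the derivative evaluated once at x0,
    so each iteration costs only one function evaluation. Converges
    linearly rather than quadratically, but each step is cheap."""
    x = x0
    for _ in range(max_iter):
        fx = g(x)
        if abs(fx) < tol:
            return x
        x = x - fx / dg0          # fixed slope instead of g'(x)
    return x

# Solve x^2 - 9 = 0 from x0 = 2 with the slope frozen at g'(2) = 4
root = frozen_newton(lambda x: x*x - 9.0, 4.0, 2.0)
```

As the text says, the frozen-slope iteration takes more steps than pure Newton but avoids re-evaluating (and, in the multivariable case, re-factoring) the derivative at every step.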

Of course, Newton s method does not always converge. GRG assumes Newton s method has failed if more than ITLIM iterations occur before the Newton termination criterion (8.86) is met or if the norm of the error in the active constraints ever increases from its previous value (an occurrence indicating that Newton s method is diverging). ITLIM has a default value of 10. If Newton s method fails but an improved point has been found, the line search is terminated and a new GRG iteration begins. Otherwise the step size in the line search is reduced and GRG tries again. The output from GRG that shows the progress of the line search at iteration 4 is... [Pg.314]

Note that as the line search process continues and the total step from the initial point gets larger, the number of Newton iterations generally increases. This increase occurs because the linear approximation to the active constraints, at the initial point (0.697,1.517), becomes less and less accurate as we move further from that point. [Pg.315]

GRG2 represents the problem Jacobian (i.e., the matrix of first partial derivatives) as a dense matrix. As a result, the effective limit on the size of problems that can be solved by GRG2 is a few hundred active constraints (excluding variable bounds). Beyond this size, the overhead associated with inversion and other linear algebra operations begins to severely degrade performance. References for descriptions of the GRG2 implementation are in Liebman et al. (1985) and Lasdon et al. (1978). [Pg.320]

For these time periods, the ODEs and active algebraic constraints influence the state and control variables. For these active sets, we therefore need to be able to analyze and implicitly solve the DAE system. To represent the control profiles at the same level of approximation as for the state profiles, approximation and stability properties for DAE (rather than ODE) solvers must be considered. Moreover, the variational conditions for problem (16), with different active constraint sets over time, lead to a multizone set of DAE systems. Consequently, the analogous Kuhn-Tucker conditions from (27) must have stability and approximation properties capable of handling all of these DAE systems. [Pg.239]

Therefore, for large optimal control problems, the efficient exploitation of the structure (to obtain O(NE) algorithms) still remains an unsolved problem. As seen above, the structure of the problem can be complicated greatly by general inequality constraints. Moreover, the number of these constraints will also grow linearly with the number of elements. One can, in fact, formulate an infinite number of constraints for these problems to keep the profiles bounded. Of course, only a small number will be active at the optimal solution; thus, adaptive constraint addition algorithms can be constructed for selecting active constraints. [Pg.249]



© 2024 chempedia.info