
Kuhn-Tucker conditions

Each of the inequality constraints g_j(z), multiplied by what is called a Kuhn-Tucker multiplier, is added to form the Lagrange function. The necessary conditions for optimality, called the Karush-Kuhn-Tucker conditions for inequality-constrained optimization problems, are... [Pg.484]
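In the usual form, for minimizing f(x) subject to inequality constraints g_j(x) ≤ 0 and equality constraints h_i(x) = 0, these necessary conditions read as follows (a generic textbook statement, not the specific equations of the cited page):

```latex
\nabla f(x^{*}) + \sum_{j} u_{j}\,\nabla g_{j}(x^{*}) + \sum_{i} \lambda_{i}\,\nabla h_{i}(x^{*}) = 0, \\
g_{j}(x^{*}) \le 0, \qquad h_{i}(x^{*}) = 0, \\
u_{j} \ge 0, \qquad u_{j}\,g_{j}(x^{*}) = 0 \quad \text{for all } j .
```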

The first-order necessary conditions for problems with inequality constraints are called the Kuhn-Tucker conditions (also called the Karush-Kuhn-Tucker conditions). The idea of a cone aids the understanding of the Kuhn-Tucker conditions (KTC). A cone is a set of points R such that, if x is in R, then λx is also in R for λ ≥ 0. A convex cone is a cone that is also a convex set. An example of a convex cone in two dimensions is shown in Figure 8.2. In two and three dimensions, the definition of a convex cone coincides with the usual meaning of the word. [Pg.273]
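For instance, the nonnegative quadrant is a convex cone in two dimensions (a generic example; Figure 8.2 itself is not reproduced here):

```latex
C = \{(x_{1}, x_{2}) : x_{1} \ge 0,\ x_{2} \ge 0\}, \qquad
x \in C,\ \lambda \ge 0 \;\Rightarrow\; \lambda x \in C .
```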

The Kuhn-Tucker conditions are predicated on this fact: at any local constrained optimum, no (small) allowable change in the problem variables can improve the value of the objective function. To illustrate this statement, consider the nonlinear programming problem ... [Pg.273]

Relations (8.23) and (8.24) are the form in which the Kuhn-Tucker conditions are usually stated. [Pg.277]
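As a concrete illustration, a candidate point and multiplier set can be tested numerically against the conditions in this usual form. The sketch below is generic (it does not reproduce relations (8.23) and (8.24) themselves), and the function and variable names are illustrative only:

```python
import numpy as np

# Minimal numeric check of the Kuhn-Tucker conditions in their usual form for
#   minimize f(x)  subject to  g_j(x) <= 0:
#   stationarity:            grad f(x*) + sum_j u_j * grad g_j(x*) = 0
#   primal feasibility:      g_j(x*) <= 0
#   dual feasibility:        u_j >= 0
#   complementary slackness: u_j * g_j(x*) = 0
def kkt_satisfied(grad_f, grads_g, g_vals, u, tol=1e-8):
    stationarity = np.allclose(grad_f + grads_g.T @ u, 0.0, atol=tol)
    primal = np.all(g_vals <= tol)
    dual = np.all(u >= -tol)
    slackness = np.allclose(u * g_vals, 0.0, atol=tol)
    return stationarity and primal and dual and slackness

# Example: min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0; candidate x* = (0.5, 0.5), u = 1
x = np.array([0.5, 0.5])
print(kkt_satisfied(grad_f=2 * x,                          # gradient of x1^2 + x2^2
                    grads_g=np.array([[-1.0, -1.0]]),      # gradient of 1 - x1 - x2
                    g_vals=np.array([1 - x.sum()]),
                    u=np.array([1.0])))                    # -> True
```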

Finally, we should mention that in addition to solving an optimization problem with the aid of a process simulator, you frequently need to find the sensitivity of the variables and functions at the optimal solution to changes in fixed parameters, such as thermodynamic, transport, and kinetic coefficients, and to changes in variables such as feed rates, as well as in the costs and prices used in the objective function. Fiacco in 1976 showed how to develop the sensitivity relations based on the Kuhn-Tucker conditions (refer to Chapter 8). For optimization using equation-based simulators, the sensitivity coefficients such as (∂h_i/∂x_i) and (∂x_i/∂x_j) can be obtained directly from the equations in the process model. For optimization based on modular process simulators, refer to Section 15.3. In general, sensitivity analysis relies on linearization of functions, and the sensitivity coefficients may not be valid for large changes in parameters or variables from the optimal solution. [Pg.525]
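As a minimal illustration of this idea (not Fiacco's derivation itself, and assuming a simple equality-constrained quadratic model), the sensitivity of the optimal point to a parameter follows from one extra linear solve with the same Kuhn-Tucker coefficient matrix:

```python
import numpy as np

# Hedged sketch of Kuhn-Tucker-based sensitivity analysis: for
#   min 0.5 x'Qx + c'x   s.t.   A x = b(p)
# the Kuhn-Tucker system is  [Q A'; A 0] [x; lam] = [-c; b(p)], so
# differentiating with respect to the parameter p keeps the same
# coefficient matrix and changes only the right-hand side to [0; db/dp].
Q = np.eye(2); c = np.zeros(2)
A = np.array([[1.0, 1.0]])

def kkt_matrix(Q, A):
    m = A.shape[0]
    return np.block([[Q, A.T], [A, np.zeros((m, m))]])

K = kkt_matrix(Q, A)
db_dp = np.array([1.0])                       # assume b = p, so db/dp = 1
rhs = np.concatenate([np.zeros(2), db_dp])
sens = np.linalg.solve(K, rhs)
dx_dp, dlam_dp = sens[:2], sens[2:]
print(dx_dp)   # each component of x* changes by 0.5 per unit change in p
```

As the last sentence of the excerpt warns, these coefficients are local: they come from a linearization at the optimum and should not be extrapolated far from it.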

Quadratic programming (QP) is a special problem class in which the objective function includes a product of two decision variables, e.g., maximization of turnover, max p·x, with the price p and the quantity x both variable. It requires a concave objective function and can be solved when the so-called Kuhn-Tucker conditions are fulfilled, e.g., by use of the Wolfe algorithm (Domschke/Drexl 2004, p. 192)... [Pg.70]

Necessary conditions for a local solution to (2) are given by the following Kuhn-Tucker conditions ... [Pg.200]

For these time periods, the ODEs and active algebraic constraints influence the state and control variables. For these active sets, we therefore need to be able to analyze and implicitly solve the DAE system. To represent the control profiles at the same level of approximation as for the state profiles, approximation and stability properties for DAE (rather than ODE) solvers must be considered. Moreover, the variational conditions for problem (16), with different active constraint sets over time, lead to a multizone set of DAE systems. Consequently, the analogous Kuhn-Tucker conditions from (27) must have stability and approximation properties capable of handling all of these DAE systems. [Pg.239]

Under this assumption, (X, y) ∈ S × R^n must be an optimal solution, that is, the solution that maximizes and minimizes the functions, respectively, for Eqs. (1)-(2), if and only if it satisfies the Karush-Kuhn-Tucker condition ... [Pg.111]

The basic idea of the active constraint strategy is to use the Kuhn-Tucker conditions to identify the potential sets of active constraints at the solution of NLP (4) for feasibility measure ψ. Then resilience test problem (6) [or flexibility index problem (11)] is decomposed into a series of NLPs with a different set of constraints (a different potential set of active constraints) used in each NLP. [Pg.50]

The potential sets of active constraints are identified by applying the Kuhn-Tucker conditions to NLP (4) for feasibility measure ψ (Grossmann and Floudas, 1985, 1987) ... [Pg.54]

Assuming that the constraint functions f_m, m ∈ M, are all monotonic in z, the potential sets of active constraints can be determined from Kuhn-Tucker conditions (35b) and (35e) as follows (Grossmann and Floudas, 1987). If the constraint functions f_m are monotonic in z, then every component of ∂f_m/∂z is one-signed for all z and for each possible value of θ. Since λ_m ≥ 0 must hold for each constraint m ∈ M [Eq. (35e)], Eq. (35b) identifies the different sets of n_z + 1 constraints which can satisfy the Kuhn-Tucker conditions (the different potential sets M_A of n_z + 1 active constraints). [Pg.55]
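The sketch below illustrates this screening step in a generic, assumed form (the names and data are illustrative, not the notation of the cited work): a combination of n_z + 1 constraints is retained as a potential active set only if nonnegative multipliers exist that make the weighted sum of the one-signed gradients vanish.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: keep a set of n_z + 1 constraints as a potential active set
# only if multipliers lam_m >= 0 with sum(lam_m) = 1 exist such that
# sum_m lam_m * (dfm/dz) = 0 (the gradients dfm/dz are one-signed because
# the constraints are assumed monotonic in z).
def potential_active_sets(grad_signs, n_z):
    candidates = []
    for combo in combinations(range(len(grad_signs)), n_z + 1):
        G = np.array([grad_signs[m] for m in combo]).T          # n_z x (n_z + 1)
        A_eq = np.vstack([G, np.ones(len(combo))])              # gradient sum + normalization
        b_eq = np.append(np.zeros(n_z), 1.0)
        res = linprog(c=np.zeros(len(combo)), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * len(combo), method="highs")
        if res.success:                                         # feasible multipliers exist
            candidates.append(combo)
    return candidates

# Toy example: three constraints, one control variable z (n_z = 1), with
# df1/dz > 0, df2/dz < 0, df3/dz > 0  ->  {f1, f2} and {f2, f3} are candidates.
print(potential_active_sets([[1.0], [-1.0], [1.0]], n_z=1))    # -> [(0, 1), (1, 2)]
```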

Thus, from Kuhn-Tucker condition (35b), the potential sets of active constraints are identified as shown in Table VIII. For each potential set of active constraints M_A(k), NLP (34) is formulated to determine the trial resilience measure. For example, for potential set M_A(1) = {f_1, f_2}, the following NLP is solved ... [Pg.58]

The Lagrangian Formulation and the Kuhn-Tucker Conditions. Formulate the Lagrangian function for the problem (Pi),... [Pg.207]

The algorithms for the solution of the Kuhn-Tucker conditions for a single unit or an integrated system of processing units point the way toward the development of alternative practical control strategies for the on-line implementation of the optimizing controllers. [Pg.209]

The ε-constraint method used here is based on the Kuhn-Tucker condition for a noninferior decision (2). The first equation of the Kuhn-Tucker condition can be rewritten as... [Pg.309]
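In generic form (not the exact statement of the cited reference), the ε-constraint problem retains one objective and bounds the others, and the Kuhn-Tucker multipliers of the ε-constraints give the local trade-off rates between the objectives:

```latex
\min_{x \in X}\; f_{1}(x) \quad \text{s.t.} \quad f_{j}(x) \le \varepsilon_{j}, \qquad j = 2, \dots, k .
```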

Under these conditions the Kuhn-Tucker conditions for the ε-constraint problem can be represented by using the new variables,... [Pg.335]

Successive quadratic programming solves a sequence of quadratic programming problems. A quadratic programming problem has a quadratic economic model and linear constraints. To solve this problem, the Lagrangian function is formed from the quadratic economic model and linear constraints. Then, the Kuhn-Tucker conditions are applied to the Lagrangian function to obtain a set of linear equations. This set of linear equations can then be solved by the simplex method for the optimum of the quadratic programming problem. [Pg.2447]
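As a minimal sketch of that step (assuming, for simplicity, an equality-constrained quadratic program, for which the Kuhn-Tucker conditions are purely linear), the resulting system can be assembled and solved directly; with inequality constraints the complementarity conditions enter as well, which is where a simplex-type procedure comes in:

```python
import numpy as np

# Sketch only: for an equality-constrained quadratic program
#   min  0.5 x'Qx + c'x   subject to   A x = b
# the Kuhn-Tucker conditions give one linear system in (x, lambda):
#   [Q  A'] [x     ]   [-c]
#   [A  0 ] [lambda] = [ b]
def solve_eq_qp(Q, c, A, b):
    n, m = Q.shape[0], A.shape[0]
    kkt = np.block([[Q, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]          # primal x*, multipliers lambda*

# Example: min 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1
Q = np.eye(2); c = np.zeros(2)
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x_opt, lam = solve_eq_qp(Q, c, A, b)
print(x_opt, lam)                    # -> [0.5 0.5] and the constraint multiplier
```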

This formulation is a standard quadratic programming problem for which an analytical solution exists from the corresponding Kuhn-Tucker conditions. Different versions of the objective function are sometimes used, but the quadratic version is appealing theoretically because it allows investor preferences to be convex. [Pg.756]
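A typical quadratic formulation of this kind is the mean-variance problem (shown here generically; the cited text's exact objective may differ):

```latex
\min_{w}\; \tfrac{1}{2}\, w^{\mathsf{T}} \Sigma\, w \;-\; \gamma\, \mu^{\mathsf{T}} w
\quad \text{s.t.} \quad \mathbf{1}^{\mathsf{T}} w = 1, \quad w \ge 0,
```

with covariance matrix Σ, expected return vector μ, and risk-tolerance parameter γ; the Kuhn-Tucker conditions of this quadratic program characterize the optimal portfolio weights.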


... Kuhn-Tucker conditions and reduces the quadratic programming problem to what is referred to as a linear complementarity problem. [Pg.2556]
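As a generic sketch of that reduction (standard textbook form, not the cited text's notation): for a quadratic program min ½ xᵀQx + cᵀx subject to Ax ≥ b, x ≥ 0, the Kuhn-Tucker conditions can be collected into the standard linear complementarity form

```latex
w = M z + q, \qquad w \ge 0, \quad z \ge 0, \quad w^{\mathsf{T}} z = 0,
\qquad
M = \begin{bmatrix} Q & -A^{\mathsf{T}} \\ A & 0 \end{bmatrix}, \quad
q = \begin{bmatrix} c \\ -b \end{bmatrix}, \quad
z = \begin{bmatrix} x \\ \lambda \end{bmatrix},
```

which is the form solved, for example, by complementary pivoting methods such as Lemke's algorithm.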

