Optimization subproblem

Reduced gradient method. This technique is based on the solution of a sequence of optimization subproblems over a reduced space of variables. The process constraints are used to solve for a set of variables (z_d), called basic or dependent, in terms of the others, known as nonbasic or independent (z_i). Using this categorization of variables, problem (5.3) is transformed into another one of fewer dimensions ... [Pg.104]
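A minimal sketch of one reduced-gradient iteration, assuming linear equality constraints and an invented toy objective (the generalized method re-solves nonlinear constraints at each step); the partition into dependent and independent variables mirrors z_d and z_i above.

```python
import numpy as np

# Minimal reduced-gradient iteration for min f(z) s.t. A z = b.
# Linear constraints and the toy objective are assumptions made for
# illustration; the generalized method re-solves nonlinear constraints.

def reduced_gradient_step(grad_f, A, b, z, dep, indep, alpha=0.1):
    """One descent step in the independent variables z_i; the dependent
    variables z_d are then recovered from the constraints A z = b."""
    Ad, Ai = A[:, dep], A[:, indep]
    g = grad_f(z)
    # Reduced gradient: derivative of f along the constraint manifold.
    r = g[indep] - Ai.T @ np.linalg.solve(Ad.T, g[dep])
    z_new = z.copy()
    z_new[indep] = z[indep] - alpha * r                       # move z_i
    z_new[dep] = np.linalg.solve(Ad, b - Ai @ z_new[indep])   # restore A z = b
    return z_new

# Toy problem: min (z0-1)^2 + (z1-2)^2 + z2^2  s.t.  z0 + z1 + z2 = 3.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
grad_f = lambda z: np.array([2 * (z[0] - 1), 2 * (z[1] - 2), 2 * z[2]])

z = np.array([3.0, 0.0, 0.0])            # feasible start
for _ in range(200):
    z = reduced_gradient_step(grad_f, A, b, z, dep=[0], indep=[1, 2])
print(z)                                  # -> approximately [1, 2, 0]
```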

The TMP design optimization is a MINLP (mixed-integer nonlinear programming) problem since it has both a discrete design parameter (the number of refiners) and a continuous one (the tank size). The operational optimization subproblem has integer decision variables (the number of active refiners over time) affecting the continuous state of the intermediate tank volume through the process dynamics. The tank volume is constrained to stay between a minimum and a maximum volume. [Pg.311]
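As a hedged toy illustration of this coupling (all numbers and names invented, not the cited TMP model), an integer refiner schedule drives the continuous tank volume through a simple per-hour mass balance that must respect the volume bounds:

```python
# Toy illustration only (invented numbers, not the cited TMP model):
# the integer refiner schedule n_t drives the continuous tank volume.
V_MIN, V_MAX = 10.0, 100.0
Q_REF = 8.0                        # production per active refiner per hour
demand = [12, 20, 5, 30, 16]       # downstream draw per hour

def simulate(n_schedule, v0=50.0):
    """Propagate the tank volume and check the operating bounds."""
    v = v0
    for n, d in zip(n_schedule, demand):
        v = v + n * Q_REF - d              # tank mass balance
        if not (V_MIN <= v <= V_MAX):
            return None                    # schedule violates the bounds
    return v

print(simulate([2, 3, 1, 4, 2]))           # -> 63.0 (feasible schedule)
```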

The idea in trust-region methods, and the origin of the quadratic optimization subproblem in step 2 above, is to determine the vector s on the basis of the size of the region within which the quadratic functional approximation can be trusted (i.e., is reasonable). The quality of the quadratic approximation can be assessed from the following ratio ... [Pg.1147]
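The excerpt truncates the ratio; in standard trust-region notation it compares the actual reduction in f with the reduction predicted by the quadratic model q_k:

```latex
% Standard trust-region quality ratio; q_k is the quadratic model of f at
% x_k and s_k the trial step (the excerpt elides the formula).
\rho_k \;=\; \frac{f(x_k) - f(x_k + s_k)}{q_k(0) - q_k(s_k)}
       \;=\; \frac{\text{actual reduction}}{\text{predicted reduction}}
```

A value of ρ_k near one indicates the model can be trusted and the region may be enlarged; a small or negative value indicates the step should be rejected and the region shrunk.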

The formulation for this scenario entails 1411 constraints, 511 continuous variables, and 120 binary variables. The reduction in continuous variables compared to scenario 1 is due to the absence of linearization variables, since no attempt was made to linearize the scenario 2 model, as explained in Section 4.3. An average of 1100 nodes were explored in the branch and bound search tree during the three major iterations between the MILP master problem and the NLP subproblem. The problem was solved in 6.54 CPU seconds, resulting in an optimal objective of 2052.31 kg, which corresponds to a 13% reduction in freshwater requirement. The corresponding water recycle/reuse network is shown in Fig. 4.10. [Pg.91]

The LP solutions at the nodes control the sequence in which the nodes are visited and provide conservative lower bounds (in the case of minimization problems) on the objective of the subsequent subproblems. If this lower bound is higher than the objective of the best feasible solution found so far, the subsequent nodes can be excluded from the search without excluding the optimal solution. Each feasible solution corresponds to a leaf node and provides a conservative upper bound on the optimal solution. This combination of branching and bounding (or cutting) steps leads to the implicit enumeration of all integer solutions without having to visit all leaf nodes. [Pg.157]

Basically, there are two different ways to decompose a 2S-MILP (see Figure 9.10). Scenario decomposition separates the 2S-MILP by the constraints associated with each scenario, whereas stage decomposition separates the variables into first-stage and second-stage decisions. For both approaches, the resulting subproblems are MILPs that can be solved by standard optimization software. [Pg.199]
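In generic notation (which may differ from the chapter's), a 2S-MILP has the form:

```latex
% Generic two-stage stochastic MILP in standard notation: first-stage
% decisions x, scenarios \omega with probabilities \pi_\omega, and
% second-stage (recourse) decisions y_\omega.
\begin{aligned}
\min_{x,\,y_\omega}\;\; & c^{\top} x
    + \sum_{\omega} \pi_{\omega}\, q_{\omega}^{\top} y_{\omega} \\
\text{s.t.}\;\; & A x \le b, \\
& T_{\omega} x + W_{\omega} y_{\omega} \le h_{\omega}
    \quad \forall \omega, \\
& \text{some components of } x \text{ and } y_{\omega} \text{ integer.}
\end{aligned}
```

Scenario decomposition duplicates x per scenario and relaxes the nonanticipativity constraints forcing the copies to agree; stage decomposition fixes x, leaving one independent MILP per scenario.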

Like penalty methods, barrier methods convert a constrained optimization problem into a series of unconstrained ones. The optimal solutions to these unconstrained subproblems are in the interior of the feasible region, and they converge to the constrained solution as a positive barrier parameter approaches zero. This approach contrasts with the behavior of penalty methods, whose unconstrained subproblem solutions converge from outside the feasible region. [Pg.291]
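A minimal sketch of a logarithmic barrier method on a one-dimensional toy problem (the objective, constraint, and inner solver choice are all illustrative assumptions, not from the cited text):

```python
import numpy as np
from scipy.optimize import minimize

# Minimal log-barrier sketch: min f(x) s.t. g(x) <= 0 is replaced by a
# sequence of unconstrained problems min f(x) - mu * sum(log(-g_i(x))).

f = lambda x: (x[0] - 3.0) ** 2            # toy objective
g = [lambda x: x[0] - 1.0]                 # constraint g(x) <= 0, i.e. x <= 1

def barrier_obj(x, mu):
    gx = np.array([gi(x) for gi in g])
    if np.any(gx >= 0):                    # outside the interior: reject
        return np.inf
    return f(x) - mu * np.sum(np.log(-gx))

x = np.array([0.0])                        # strictly feasible start
for mu in [1.0, 0.1, 0.01, 1e-3, 1e-4]:
    x = minimize(barrier_obj, x, args=(mu,), method="Nelder-Mead").x
    print(mu, x)                           # iterates approach x = 1 from inside
```

Each unconstrained minimizer is strictly interior, and as mu shrinks the iterates approach the constrained optimum x = 1 from inside, which is exactly the behavior contrasted with penalty methods above.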

The SLP subproblem at (4, 3.167) is shown graphically in Figure 8.9. The LP solution is now at the point (4, 3.005), which is very close to the optimal point x*. This point (x*) is determined by linearization of the two active constraints, as are all further iterates. Now consider Newton's method for equation solving applied to the two active constraints, x^2 + y^2 = 25 and x^2 - y^2 = 7. Newton's method involves... [Pg.296]
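This is easy to verify numerically: since linearizing the active constraints at an iterate is exactly one Newton step, a plain Newton solve on the two equations reproduces the step quoted above; starting from (4, 3.167), the first iterate is approximately (4, 3.004).

```python
import numpy as np

# Newton's method on the two active constraints quoted above,
#   x^2 + y^2 = 25  and  x^2 - y^2 = 7,
# whose solution (4, 3) is the optimum the SLP iterates approach.

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 25.0, x**2 - y**2 - 7.0])

def J(v):
    x, y = v
    return np.array([[2 * x, 2 * y], [2 * x, -2 * y]])

v = np.array([4.0, 3.167])                  # the SLP iterate from the text
for k in range(5):
    v = v + np.linalg.solve(J(v), -F(v))    # one Newton step
    print(k, v)                             # quadratic convergence to (4, 3)
```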

We now ask the reader to start Excel, either construct or open this model, and solve it after checking the Show Iteration Results box in the Solver Options dialog (see Figure E9.2d). The sequence of solutions produced is the same as is shown in the BB tree of Figure E9.2b. The initial solution displayed has all four variables equal to zero, indicating the start of the LP solution at node 1. After a few iterations, the optimal node 1 solution is obtained. The solver then creates and solves the node 2 subproblem and displays its solution after a few simplex iterations. Finally, the node 3 subproblem is created and solved, after which an optimality message is shown. [Pg.361]

We redefined the sense of the optimization to be maximization. The optimal objective value of this problem is a lower bound on the MINLP optimal value. The MILP subproblem involves both the x and y variables. At iteration k, it is formed by linearizing all nonlinear functions about the optimal solutions of each of the subproblems NLP(y^i), i = 1, ..., k, and keeping all of these linearizations. If x^i solves NLP(y^i), the MILP subproblem at iteration k is... [Pg.369]
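The excerpt elides the formulation; in generic notation (which may differ from the book's exact statement), the standard OA master problem at iteration k is:

```latex
% Standard outer-approximation master problem, maximization form,
% in generic notation; x^i solves NLP(y^i) for i = 1, ..., k.
\begin{aligned}
\max_{x,\,y,\,\eta}\;\; & \eta \\
\text{s.t.}\;\; & \eta \le f(x^i, y^i) + \nabla f(x^i, y^i)^{\top}
  \begin{pmatrix} x - x^i \\ y - y^i \end{pmatrix},
  && i = 1, \dots, k, \\
& g(x^i, y^i) + \nabla g(x^i, y^i)^{\top}
  \begin{pmatrix} x - x^i \\ y - y^i \end{pmatrix} \le 0,
  && i = 1, \dots, k, \\
& x \in X, \;\; y \in \{0, 1\}^m .
\end{aligned}
```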

Table 9.1 shows how outer approximation, as implemented in the DICOPT software, performs when applied to the process selection model in Example 9.3. Note that this model does not satisfy the convexity assumptions because its equality constraints are nonlinear. Still, DICOPT finds the optimal solution at iteration 3. Note, however, that the optimal MILP objective value at iteration 3 is 1.446, which is not an upper bound on the optimal MINLP value of 1.923 because the convexity conditions are violated. Hence the normal termination condition, that the difference between upper and lower bounds be less than some tolerance, cannot be used, and DICOPT may fail to find an optimal solution. Computational experience on nonconvex problems has shown that retaining the best feasible solution found thus far, and stopping when the objective value of the NLP subproblem fails to improve, often leads to an optimal solution. DICOPT stopped in this example because the NLP solution at iteration 4 is worse (lower) than that at iteration 3. [Pg.370]

More variables are retained in this type of NLP problem formulation, but you can take advantage of sparse matrix routines that factor the linear (and linearized) equations efficiently. Figure 15.5 illustrates the sparsity of the Hessian matrix used in the QP subproblem that is part of the execution of an optimization of a plant involving five unit operations. [Pg.528]
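A small sketch of the structure being exploited, with invented dimensions (the five-unit plant of Figure 15.5 is not reproduced here): the Hessian of a flowsheet QP is typically block-structured, one block per unit operation, so sparse routines store and factor only the nonzero blocks.

```python
import numpy as np
from scipy.sparse import block_diag

# Illustrative sketch (invented dimensions): a flowsheet QP Hessian is
# typically block-structured, one block per unit operation, so sparse
# routines store and factor only the nonzero blocks.
rng = np.random.default_rng(0)

def unit_block(n=4):
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)          # symmetric positive definite

H = block_diag([unit_block() for _ in range(5)], format="csc")
print(H.shape, H.nnz, "stored nonzeros of", H.shape[0] * H.shape[1])
```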

Sequential quadratic programming. A sequential quadratic programming (SQP) technique involves the solution of a sequence of explicit quadratic programming (QP) subproblems. The solution of each subproblem produces the search direction d_k that leads from the current iterate z_k to the next iterate z_{k+1}. A one-dimensional search is then performed in the direction d_k to obtain the optimal step size. [Pg.104]

To apply the procedure, the nonlinear constraints are replaced by their Taylor series expansions, and an optimization problem is solved to find the direction d that minimizes a quadratic objective function subject to linear constraints. The QP subproblem is formulated as follows ... [Pg.104]
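The formulation is elided in the excerpt; in standard SQP notation (which may differ from the book's exact statement, with B_k an approximation of the Hessian of the Lagrangian at z_k), the QP subproblem reads:

```latex
% QP subproblem in standard SQP notation; B_k approximates the Hessian
% of the Lagrangian at the current iterate z_k.
\begin{aligned}
\min_{d}\;\; & \nabla f(z_k)^{\top} d + \tfrac{1}{2}\, d^{\top} B_k\, d \\
\text{s.t.}\;\; & h(z_k) + \nabla h(z_k)^{\top} d = 0, \\
& g(z_k) + \nabla g(z_k)^{\top} d \le 0 .
\end{aligned}
```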

The incorporation and relaxation of dummy elements for the simultaneous solution of parameter optimization problems can be embedded easily within the subproblems solved by the SQP algorithm. Again, both the model and optimization problem need only be solved once. [Pg.226]

The modeling system GAMS (Brooke et al., 1996) is used for setting up the optimization models. The computational tests were carried out on a 2.13 GHz Pentium M processor. The models were solved with DICOPT (Viswanathan and Grossmann, 1990). The NLP subproblems were solved with CONOPT2 (Drud, 1994), while the MILP master problems were solved with CPLEX (CPLEX Optimization Inc., 1993). [Pg.148]

Let (CS) be a candidate subproblem in solving (P). We would like to determine whether the feasible region of (CS), F(CS), contains an optimal solution of (P) and find it if it does. [Pg.100]

A number of proposed branch and bound algorithms solve the relaxation subproblems (RCS) to optimality first and subsequently apply the aforementioned fathoming tests. There exist, however, algorithms that either do not solve (RCS) to optimality, instead applying sufficient conditions for the fathoming criterion (e.g., using good suboptimal solutions of the dual), or that solve (RCS) to optimality and additionally apply a post-optimality test aimed at improving the lower bounds obtained by the relaxation. [Pg.101]

The basic ideas in a branch and bound algorithm are outlined in the following. First we make a reasonable effort to solve the original problem (e.g., by considering a relaxation of it). If the relaxation does not result in a 0-1 solution for the y-variables, then we separate the root node into two or more candidate subproblems at level 1 and create a list of candidate subproblems. We select one of the candidate subproblems at level 1 and attempt to solve it; if its solution is integral, we return to the list of candidate subproblems and select a new one. Otherwise, we separate the candidate subproblem into two or more subproblems at level 2 and add its child nodes to the list of candidate subproblems. We continue this procedure until the candidate list is exhausted and report the current incumbent as the optimal solution. Note that finite termination of such a procedure is attained if the set of feasible solutions of the original problem (P), denoted FS(P), is finite. [Pg.101]
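A minimal sketch of this candidate-list procedure on a toy 0-1 knapsack (illustrative, not the chapter's problem (P)); the greedy fractional fill used for bounding is the optimum of the knapsack's LP relaxation:

```python
# Branch and bound on a toy 0-1 knapsack (maximization). The fractional
# greedy fill plays the role of the relaxed candidate subproblem (RCS),
# and the incumbent is the best feasible solution found so far.
values  = [10.0, 13.0, 7.0, 8.0]
weights = [ 4.0,  6.0, 3.0, 5.0]
CAP = 10.0

def relaxation_bound(fixed):
    """Upper bound: fix decided items, fill the rest greedily, fractions allowed."""
    cap = CAP - sum(w for w, f in zip(weights, fixed) if f == 1)
    if cap < 0:
        return float("-inf")                        # infeasible node
    bound = sum(v for v, f in zip(values, fixed) if f == 1)
    free = sorted((i for i, f in enumerate(fixed) if f is None),
                  key=lambda i: -values[i] / weights[i])
    for i in free:
        take = min(1.0, cap / weights[i])
        bound += take * values[i]
        cap -= take * weights[i]
        if cap <= 0:
            break
    return bound

incumbent, best = None, float("-inf")
candidates = [[None] * len(values)]                 # root node: nothing fixed
while candidates:                                   # exhaust the candidate list
    fixed = candidates.pop()
    bound = relaxation_bound(fixed)
    if bound <= best:
        continue                                    # fathomed by the bound test
    if None not in fixed:                           # leaf: integral solution
        incumbent, best = fixed, bound              # new incumbent
        continue
    i = fixed.index(None)                           # branch on a free variable
    for val in (0, 1):
        child = fixed.copy()
        child[i] = val
        candidates.append(child)

print(incumbent, best)                              # -> [1, 1, 0, 0] 23.0
```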

