Big Chemical Encyclopedia


Solutions optimal bounding

Appendix A: Two-Stage Stochastic Programming, 183. Appendix B: Chance Constrained Programming, 185. Appendix C: SAA Optimal Solution Bounding, 187 ... [Pg.1]

W. J. Lee. Determining order quantity and selling price by geometric programming: optimal solution, bounds and sensitivity. Decision Sciences, 24:76-87, 1993. [Pg.388]

The disadvantage of the lower-bound elimination criterion is that it cannot eliminate partial solutions whose lower bounds are better (i.e., smaller) than the objective function value of the optimal solution, even when those partial solutions lead to no improvement. [Pg.281]

In addition to the elimination of partial solutions on the basis of their lower-bound values, we can provide two mechanisms that operate directly on pairs of partial solutions. These two mechanisms are based on dominance and equivalence conditions. The utility of these conditions comes from the fact that we need not have found a feasible solution to use them, and that the lower-bound values of the eliminated solutions do not have to be higher than the objective function value of the optimal solution. This is particularly important in scheduling problems where one may have a large number of equivalent schedules due to the use of equipment with identical processing characteristics, and many batches with equivalent demands on the available resources. [Pg.282]
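Equivalence elimination of the kind described above can be sketched by mapping each partial schedule to a canonical key, so that schedules differing only by a permutation of identical units collapse to one representative. This is a minimal illustration, assuming a partial schedule is simply a list of batch sets, one per identical unit; the representation and function names are ours, not from the source:

```python
def canonical_key(partial_schedule):
    # One sorted tuple per unit, with the units themselves sorted:
    # identical units make any permutation of them equivalent.
    return tuple(sorted(tuple(sorted(unit)) for unit in partial_schedule))

def prune_equivalents(partial_schedules):
    # Keep one representative per equivalence class of partial schedules.
    seen, kept = set(), []
    for schedule in partial_schedules:
        key = canonical_key(schedule)
        if key not in seen:
            seen.add(key)
            kept.append(schedule)
    return kept
```

Note that no feasible solution is needed to apply this test, which is exactly the utility the text points out.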

In an earlier section, we had alluded to the need to stop the reasoning process at some point. The operationality criterion is the formal statement of that need. In most problems we have some understanding of what properties are easy to determine. For example, a property such as the processing time of a batch is normally given to us and hence is determined by a simple database lookup. The optimal solution to a nonlinear program, on the other hand, is not a simple property, and hence we might look for a simpler explanation of why two solutions have equal objective function values. In the case of our branch-and-bound problem, the operationality criterion imposes two requirements ... [Pg.318]

The corresponding mathematical formulation entails 5534 constraints, 1217 continuous and 280 binary variables. An average of 4000 nodes were explored in the branch and bound search tree. The solution required three major iterations and took 309.41 CPU seconds to obtain the optimal solution of 1285.50 kg. This corresponds to 45.53% reduction in freshwater demand. A water reuse/recycle network that corresponds to this solution is shown in Fig. 4.11. [Pg.91]

The LP solutions in the nodes control the sequence in which the nodes are visited and provide conservative lower bounds (in the case of minimization problems) with respect to the objective on the subsequent subproblems. If this lower bound is higher than the objective of the best feasible solution found so far, the subsequent nodes can be excluded from the search without excluding the optimal solution. Each feasible solution corresponds to a leaf node and provides a conservative upper bound on the optimal solution. This combination of branching and bounding or cutting steps leads to the implicit enumeration of all integer solutions without having to visit all leaf nodes. [Pg.157]
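The pruning logic above can be sketched as a generic best-first branch-and-bound loop for a minimization problem. The four callbacks (`lower_bound`, `branch`, `is_leaf`, `value`) are hypothetical problem-specific stand-ins, not a fixed solver API; in practice `lower_bound` would be an LP relaxation:

```python
import heapq

def branch_and_bound(root, lower_bound, branch, is_leaf, value):
    """Best-first branch and bound (minimization).
    lower_bound(node) -> conservative lower bound on the subtree;
    branch(node)      -> child subproblems;
    is_leaf(node)     -> True if node is a complete solution;
    value(node)       -> objective of a leaf (inf if infeasible)."""
    best_val, best = float("inf"), None
    heap = [(lower_bound(root), 0, root)]
    counter = 1  # tie-breaker so nodes themselves are never compared
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_val:
            continue  # bounding: exclude without losing the optimum
        if is_leaf(node):
            v = value(node)
            if v < best_val:
                best_val, best = v, node  # leaf = new upper bound/incumbent
        else:
            for child in branch(node):  # branching
                child_bound = lower_bound(child)
                if child_bound < best_val:
                    heapq.heappush(heap, (child_bound, counter, child))
                    counter += 1
    return best, best_val
```

Only the bound comparison against the incumbent is essential; depth-first or other node orderings fit the same skeleton.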

Plugging the first-stage solution of the EV problem xEV into the stochastic program (2S-MILP) gives the expected result of using the EV solution (the EEV problem). The solution of the EEV problem is not necessarily optimal for the original 2S-MILP. Consequently, the optimal objective value of the EEV problem is always greater than or equal to the optimal objective value of the 2S-MILP, so the objective of EEV is an upper bound for the optimal solution of the 2S-MILP ... [Pg.198]
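The EEV-versus-stochastic-optimum relationship can be checked numerically on a toy minimization problem. All data below (a single first-stage order quantity, two demand scenarios, a linear shortage penalty) are purely illustrative and not from the source:

```python
# Two equally likely demand scenarios; ordering x costs 1/unit and
# unmet demand is penalized at 3/unit (illustrative numbers).
scenarios = [(0.5, 2), (0.5, 8)]

def expected_cost(x):
    return x + sum(p * 3 * max(d - x, 0) for p, d in scenarios)

# Here-and-now (stochastic) optimum over a discrete grid of decisions.
x_rp = min(range(11), key=expected_cost)
rp = expected_cost(x_rp)

# EV problem: replace the random demand by its mean; then evaluate that
# first-stage decision against the true scenarios (this value is the EEV).
mean_d = sum(p * d for p, d in scenarios)
x_ev = min(range(11), key=lambda x: x + 3 * max(mean_d - x, 0))
eev = expected_cost(x_ev)

assert eev >= rp  # EEV is an upper bound on the stochastic optimum
```

Here the EV decision hedges against the average scenario only, so its true expected cost (the EEV) exceeds the stochastic optimum, as the text states.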

Results 1 and 2 imply that, in searching for an optimal solution, we need only consider vertices, hence only basic feasible solutions. Because a basic feasible solution has m basic variables, an upper bound to the number of basic feasible solutions is the number of ways m variables can be selected from a group of n variables, which is... [Pg.229]
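The count referred to above is the binomial coefficient "n choose m", n!/(m!(n-m)!), which can be evaluated directly; the function name below is ours:

```python
from math import comb

def bfs_upper_bound(n, m):
    # Upper bound on the number of basic feasible solutions of an LP with
    # n variables and m constraints: the number of ways to pick which m
    # variables are basic. Not all such choices are feasible, hence "bound".
    return comb(n, m)
```

For example, an LP with 10 variables and 4 constraints has at most C(10, 4) = 210 basic solutions to consider.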

Currently, a good LP solver running on a fast (>500 MHz) PC with substantial memory solves a small LP in less than a second, a medium-size LP in minutes to tens of minutes, and a large LP in an hour or so. These codes hardly ever fail, even if the LP is badly formulated or scaled. They include preprocessing procedures that detect and remove redundant constraints, fixed variables, variables that must be at bounds in any optimal solution, and so on. Preprocessors produce an equivalent LP, usually of reduced size. A postprocessor then determines values of any removed variables and Lagrange multipliers for removed constraints. Automatic scaling of variables and constraints is also an option. Armed with such tools, an analyst can solve virtually any LP that can be formulated. [Pg.244]

This transportation problem is an example of an important class of LPs called network flow problems: find a set of values for the flow of a single commodity on the arcs of a graph (or network) that satisfies both flow conservation constraints at each node (i.e., flow in equals flow out) and upper and lower limits on each flow, while maximizing or minimizing a linear objective (say, total cost). There are specified supplies of the commodity at some nodes and demands at others. Such problems have the important special property that, if all supplies, demands, and flow bounds are integers, then an optimal solution exists in which all flows are integers. In addition, special versions of the simplex method have been developed to solve network flow problems with hundreds of thousands of nodes and arcs very quickly, at least ten times faster than a general LP of comparable size. See Glover et al. (1992) for further information. [Pg.252]
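The integrality property can be seen on a tiny instance. In a 2x2 transportation problem, flow conservation leaves a single free flow t, so the whole feasible set can be scanned; the supplies, demands, and costs below are illustrative, not from the source:

```python
# 2x2 transportation instance with integer supplies and demands.
supply = [3, 4]
demand = [2, 5]
cost = [[4, 6], [5, 3]]

def total_cost(t):
    # Flow conservation: fixing t = x[0][0] determines every other flow.
    x = [[t, supply[0] - t],
         [demand[0] - t, supply[1] - (demand[0] - t)]]
    if any(f < 0 for row in x for f in row):
        return float("inf"), None  # infeasible (negative flow)
    return sum(cost[i][j] * x[i][j] for i in range(2) for j in range(2)), x

# The cost is linear in t, so the continuous (LP) optimum sits at an
# endpoint of the feasible interval for t; with integer data that endpoint
# is integral, so scanning integer t values finds the LP optimum.
best_t = min(range(demand[0] + 1), key=lambda t: total_cost(t)[0])
opt_cost, flows = total_cost(best_t)
```

All optimal flows come out integral, illustrating (not proving) the integrality property stated above.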

Figure 8.7 also shows these LP constraints. Its optimal solution is at (Δx, Δy) = (1, 1), which gives (x_n, y_n) = (3, 3). This point is determined entirely by the step bounds. This is an improved point, as can be seen by evaluating the original functions, so we set x_c = x_n and repeat these steps to get the next LP. [Pg.296]

In this chapter, we discuss solution approaches for MILP and MINLP that are capable of finding an optimal solution and verify that they have done so. Specifically, we consider branch-and-bound (BB) and outer linearization (OL) methods. BB can be applied to both linear and nonlinear problems, but OL is used for nonlinear problems by solving a sequence of MILPs. Chapter 10 further considers branch-and-bound methods, and also describes heuristic methods, which often find very good solutions but are unable to verify optimality. [Pg.354]

Branch and bound (BB) is a class of methods for linear and nonlinear mixed-integer programming. If carried to completion, it is guaranteed to find an optimal solution to linear and convex nonlinear problems. It is the most popular approach and is currently used in virtually all commercial MILP software (see Chapter 7). [Pg.354]

Node 1. The first step is to set up and solve the relaxation of the binary IP via LP. The optimal solution has one fractional (noninteger) variable (y2) and an objective function value of 129.1. Because the feasible region of the relaxed problem includes the feasible region of the initial IP problem, 129.1 is an upper bound on the value of the objective function of the IP. If we knew a feasible binary solution, its objective value would be a lower bound on the value of the objective function, but none is assumed here, so the lower bound is set to −∞. There is as yet no incumbent, which is the best feasible integer solution found thus far. [Pg.355]

A BB tree for this problem is in Figure E9.2b. The numbers to the left of each node are the current upper and lower bounds on the objective function, and the values to the right are the (y1, y2) values in the optimal solution to the LP relaxation at the node. The solution at node 1 has y1 fractional, so we branch on y1, leading to nodes 2 and 3. If node 2 is evaluated first, its solution is integer, so the node is fathomed, and (2, 5) becomes the incumbent solution. This solution is optimal, but we do not... [Pg.358]
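The same pattern, with a relaxation supplying the bound at each node, can be sketched on a 0-1 knapsack, where the fractional relaxation plays the role of the LP bound for a maximization problem. This is an illustrative sketch with our own names, not the text's Example E9.2; items are assumed pre-sorted by value/weight ratio:

```python
def knapsack_bb(values, weights, cap):
    """Depth-first branch and bound for a 0-1 knapsack (maximization).
    The fractional relaxation gives an upper bound at each node, and any
    complete selection gives a lower bound (a candidate incumbent)."""
    n = len(values)
    best = [0.0, None]  # [incumbent value, incumbent selection]

    def relax_bound(i, val, room):
        # Fractional-knapsack bound over the remaining items i..n-1.
        for j in range(i, n):
            if weights[j] <= room:
                room -= weights[j]
                val += values[j]
            else:
                return val + values[j] * room / weights[j]
        return val

    def visit(i, val, room, chosen):
        # Fathom the node if its relaxation bound cannot beat the incumbent.
        if best[1] is not None and relax_bound(i, val, room) <= best[0]:
            return
        if i == n:  # leaf: complete feasible selection
            if best[1] is None or val > best[0]:
                best[0], best[1] = val, chosen
            return
        if weights[i] <= room:                         # branch: take item i
            visit(i + 1, val + values[i], room - weights[i], chosen + (1,))
        visit(i + 1, val, room, chosen + (0,))         # branch: skip item i

    visit(0, 0.0, cap, ())
    return best[0], best[1]
```

As in the text's tree, a node whose relaxation bound does not exceed the incumbent value is fathomed without exploring its subtree.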

We redefined the sense of the optimization to be maximization. The optimal objective value of this problem is a lower bound on the MINLP optimal value. The MILP subproblem involves both the x and y variables. At iteration k, it is formed by linearizing all nonlinear functions about the optimal solutions of each of the subproblems NLP(y^l), l = 1, ..., k, and keeping all of these linearizations. If x^l solves NLP(y^l), the MILP subproblem at iteration k is... [Pg.369]
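Under convexity assumptions (for the maximization form: a concave nonlinear objective term f and convex constraint functions g), the accumulated linearizations give a master problem of the following generic shape. The notation here is ours, not the text's exact formulation: eta is the objective surrogate, c^T y the linear part of the objective, and x^l the solution of NLP(y^l):

```latex
\begin{align*}
\max_{x,\,y,\,\eta}\quad & \eta \\
\text{s.t.}\quad
  & \eta \le c^{\top} y + f(x^{l}) + \nabla f(x^{l})^{\top}(x - x^{l}),
    && l = 1,\dots,k, \\
  & g(x^{l}) + \nabla g(x^{l})^{\top}(x - x^{l}) \le 0,
    && l = 1,\dots,k, \\
  & x \in X,\quad y \in \{0,1\}^{m}.
\end{align*}
```

Because each linearization overestimates a concave f and relaxes a convex g, the master problem is a relaxation of the MINLP, which is why its optimal value bounds the MINLP optimum from above in the convex case.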

Table 9.1 shows how outer approximation, as implemented in the DICOPT software, performs when applied to the process selection model in Example 9.3. Note that this model does not satisfy the convexity assumptions because its equality constraints are nonlinear. Still, DICOPT does find the optimal solution at iteration 3. Note, however, that the optimal MILP objective value at iteration 3 is 1.446, which is not an upper bound on the optimal MINLP value of 1.923 because the convexity conditions are violated. Hence the normal termination condition, that the difference between upper and lower bounds be less than some tolerance, cannot be used, and DICOPT may fail to find an optimal solution. Computational experience on nonconvex problems has shown that retaining the best feasible solution found thus far, and stopping when the objective value of the NLP subproblem fails to improve, often leads to an optimal solution. DICOPT stopped in this example because the NLP solution at iteration 4 is worse (lower) than that at iteration 3. [Pg.370]

From the optimal solution, it can be seen that a long reactor is preferred. The feed gas temperature is at its lower bound. These conditions result in an... [Pg.231]

Therefore, for large optimal control problems, the efficient exploitation of the structure (to obtain O(NE) algorithms) still remains an unsolved problem. As seen above, the structure of the problem can be complicated greatly by general inequality constraints. Moreover, the number of these constraints will also grow linearly with the number of elements. One can, in fact, formulate an infinite number of constraints for these problems to keep the profiles bounded. Of course, only a small number will be active at the optimal solution; thus, adaptive constraint-addition algorithms can be constructed for selecting active constraints. [Pg.249]

Here we present a proof of the SAA bound on the true optimal solution of the stochastic problem. The proof is rather intuitive for the upper bound since it starts from a feasible solution. However, the lower bound proof is more involved and is as... [Pg.188]

Remark 3 The selection of a relaxation among a number of alternatives is based on the trade-off between two competing criteria. The first criterion corresponds to the capability of solving the relaxed problem easily. The second criterion is associated with the type and quality of lower bound that the relaxed problem produces for problem (P). In general, the easier the relaxed problem (RP) is to solve, the greater the gap is between the optimal solution of (P) and the lower bound provided by (RP). [Pg.99]

The basic ideas in a branch and bound algorithm are outlined in the following. First we make a reasonable effort to solve the original problem (e.g., considering a relaxation of it). If the relaxation does not result in a 0-1 solution for the y-variables, then we separate the root node into two or more candidate subproblems at level 1 and create a list of candidate subproblems. We select one of the candidate subproblems of level 1 and attempt to solve it; if its solution is integral, then we return to the candidate list of subproblems and select a new candidate subproblem. Otherwise, we separate the candidate subproblem into two or more subproblems at level 2 and add its child nodes to the list of candidate subproblems. We continue this procedure until the candidate list is exhausted and report the current incumbent as the optimal solution. Note that finite termination of such a procedure is attained if the set of feasible solutions of the original problem (P), denoted FS(P), is finite. [Pg.101]

Note that Z(RCS)^0 is a lower bound on the optimal solution of the MILP model of (1). Also note that if the solution of (RCS)^0 turns out to have all the y-variables at 0-1 values, then we can terminate, since the LP relaxation satisfies the integrality conditions. [Pg.104]

