Big Chemical Encyclopedia


Optimization upper bounds

In equation (9.131), sup is short for supremum, which means the final result is the least upper bound. Thus the H∞-optimal controller minimizes the maximum magnitude of the weighted sensitivity function over the frequency range ω, or in mathematical terms, minimizes the ∞-norm of the sensitivity function weighted by W(jω). [Pg.306]
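The supremum in this norm can be approximated numerically by a dense frequency sweep. The sketch below is illustrative only, with an assumed first-order plant, proportional controller, and weighting function (none of these appear in the source text):

```python
import numpy as np

# Assumed example system (not from the source): plant G(s) = 1/(s+1),
# proportional controller K = 2, weight W(s) = (s/2 + 1)/(s + 0.1).
w = np.logspace(-3, 3, 10000)        # frequency grid, rad/s
s = 1j * w
G = 1.0 / (s + 1.0)                  # plant frequency response
K = 2.0                              # controller gain
S = 1.0 / (1.0 + G * K)              # sensitivity function S = 1/(1 + GK)
W = (s / 2.0 + 1.0) / (s + 0.1)      # sensitivity weight
h_inf = np.max(np.abs(W * S))        # sup over the grid approximates ||W S||_inf
print(f"||W S||_inf ≈ {h_inf:.3f}")
```

An H∞-optimal design would then search over controllers K to make this peak value as small as possible.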

Conversely, the optimization is now constrained to a fixed (optimized) temperature, while the chlorine addition profile is optimized. Both the feed addition profile and the total chlorine feed are optimized. The optimum temperature reaches its upper bound of 150°C. Chlorine addition is 75.0 kmol and the batch cycle is 1.35 h. The resulting fractional yield of MBA from BA now reaches 97.4%. [Pg.296]

The LP solutions in the nodes control the sequence in which the nodes are visited and provide conservative lower bounds (in the case of minimization problems) with respect to the objective on the subsequent subproblems. If this lower bound is higher than the objective of the best feasible solution found so far, the subsequent nodes can be excluded from the search without excluding the optimal solution. Each feasible solution corresponds to a leaf node and provides a conservative upper bound on the optimal solution. This combination of branching and bounding or cutting steps leads to the implicit enumeration of all integer solutions without having to visit all leaf nodes. [Pg.157]

Plugging the first-stage solution of the EV problem xEV into the stochastic program (2S-MILP) gives the expected result of using the EV solution (EEV problem). The solution of the EEV problem is not necessarily optimal for the original 2S-MILP. Consequently, the optimal objective value of the EEV problem is always greater than or equal to the optimal objective value of the 2S-MILP, such that the objective of EEV is an upper bound for the optimal solution of the 2S-MILP ... [Pg.198]
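The EEV ≥ optimum relation can be checked on a one-variable toy problem (all data below are assumptions made for this sketch, not from the source): minimize x + E[3·max(d − x, 0)] over integer x, with equiprobable demand scenarios d = 2 and d = 8.

```python
# Assumed toy two-stage problem: first-stage cost x, recourse penalty
# 3 per unit of unmet demand, scenarios d = 2 and d = 8 with prob 0.5.
scenarios = [2, 8]
prob = 0.5

def expected_cost(x):
    """True two-stage expected cost of first-stage decision x."""
    return x + sum(prob * 3 * max(d - x, 0) for d in scenarios)

# EV problem: replace the random demand by its mean (= 5) and optimize.
mean_d = sum(prob * d for d in scenarios)
x_ev = min(range(11), key=lambda x: x + 3 * max(mean_d - x, 0))

eev = expected_cost(x_ev)                       # expected result of EV solution
opt = min(expected_cost(x) for x in range(11))  # true two-stage optimum
print(x_ev, eev, opt)  # -> 5 9.5 8.0; note eev >= opt, as the text states
```

Here the EV solution (x = 5) costs 9.5 in expectation, while the true stochastic optimum (x = 8) costs 8.0, so EEV is indeed an upper bound for the minimization problem.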

Results 1 and 2 imply that, in searching for an optimal solution, we need only consider vertices, hence only basic feasible solutions. Because a basic feasible solution has m basic variables, an upper bound to the number of basic feasible solutions is the number of ways m variables can be selected from a group of n variables, which is the binomial coefficient n!/[m!(n − m)!]. [Pg.229]
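This count is easy to evaluate; for instance, with n = 6 variables and m = 3 basic variables (illustrative numbers, not from the source):

```python
from math import comb

# Upper bound on the number of basic feasible solutions: the number of
# ways to choose m basic variables out of n, i.e. the binomial coefficient.
n, m = 6, 3
print(comb(n, m))  # -> 20 candidate bases
```

The bound is usually loose, since many of these choices are infeasible or singular.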

Node 1. The first step is to set up and solve the relaxation of the binary IP via LP. The optimal solution has one fractional (noninteger) variable (y2) and an objective function value of 129.1. Because the feasible region of the relaxed problem includes the feasible region of the initial IP problem, 129.1 is an upper bound on the value of the objective function of the IP. If we knew a feasible binary solution, its objective value would be a lower bound on the value of the objective function, but none is assumed here, so the lower bound is set to −∞. There is as yet no incumbent, which is the best feasible integer solution found thus far. [Pg.355]

Table 9.1 shows how outer approximation, as implemented in the DICOPT software, performs when applied to the process selection model in Example 9.3. Note that this model does not satisfy the convexity assumptions because its equality constraints are nonlinear. Still, DICOPT does find the optimal solution at iteration 3. Note, however, that the optimal MILP objective value at iteration 3 is 1.446, which is not an upper bound on the optimal MINLP value of 1.923 because the convexity conditions are violated. Hence the normal termination condition that the difference between upper and lower bounds be less than some tolerance cannot be used, and DICOPT may fail to find an optimal solution. Computational experience on nonconvex problems has shown that retaining the best feasible solution found thus far, and stopping when the objective value of the NLP subproblem fails to improve, often leads to an optimal solution. DICOPT stopped in this example because the NLP solution at iteration 4 is worse (lower) than that at iteration 3. [Pg.370]
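The heuristic stopping rule described above can be sketched as a simple loop over major iterations. The objective history below is invented to mimic the run in Table 9.1 (only the value 1.923 at iteration 3 comes from the text; the rest are assumptions):

```python
# Assumed NLP subproblem objective history (maximization); only 1.923
# at iteration 3 is taken from the source, the other values are invented.
nlp_objectives = [1.251, 1.702, 1.923, 1.810]

best_value, best_iter = float("-inf"), None
for it, obj in enumerate(nlp_objectives, start=1):
    if obj > best_value:
        best_value, best_iter = obj, it   # retain the best feasible solution
    else:
        break  # NLP subproblem failed to improve: terminate heuristically

print(best_iter, best_value)  # -> 3 1.923
```

Because no valid bound gap exists on a nonconvex problem, this retained incumbent is simply the best feasible point seen, with no optimality certificate.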

As the search descends the decision tree, the solution at each node becomes more and more constrained, until a node r is reached at which the upper bound and the lower bound for the number of compressors in each pipeline branch are the same. The solution at node r is feasible for the general problem but not necessarily optimal. Nevertheless, the important point is that the solution at node r is an upper bound on the solution of the general problem. [Pg.475]

The optimal energy for intra-orbit variation is attained at the optimal density ρopt(r). Because the functional N-representability condition is fulfilled, this value is an upper bound to the optimal energy within the Hohenberg-Kohn orbit. [Pg.207]

Here we present a proof of the SAA bound on the true optimal solution of the stochastic problem. The proof is rather intuitive for the upper bound since it starts from a feasible solution. However, the lower bound proof is more involved and is as... [Pg.188]

In the other cases discussed above, the optimal catalyst is relatively close to the narrow region of dissociative chemisorption energies from −2 to −1 eV. It does, however, appear that the models developed so far could also have a problem describing why some high temperature and very exothermic reactions (with corresponding small approaches to equilibrium) also lie within the narrow window of chemisorption energies. To remove these discrepancies we shall relax the assumption of one rate-determining step, but retain an analytic model, by use of a least upper bound approach. [Pg.304]

Remark 2 Any feasible solution of the primal problem (P) represents an upper bound on the optimal value of the dual problem (D). [Pg.82]

Remark 3 This lower-upper bound feature between the dual and primal problems is very important in establishing termination criteria in computational algorithms. In particular applications, if at some iteration feasible solutions exist for both the primal and the dual problems and are close to each other in value, then they can be considered as being practically optimal for the problem under consideration. [Pg.82]

Remark 4 This important lower-upper bound result for the dual-primal problems, which is provided by the weak duality theorem, is not based on any convexity assumption. Hence, it is of great use for nonconvex optimization problems as long as the dual problem can be solved efficiently. [Pg.83]
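Weak duality is easy to verify numerically on a small LP (the problem below is chosen for this sketch, not taken from the source): any primal-feasible point bounds the dual optimum from above, and any dual-feasible point bounds the primal optimum from below.

```python
# Assumed example LP.
# Primal: min 3*x1 + 2*x2  s.t.  x1 + x2 >= 4,  x1 >= 1,  x >= 0.
# Dual:   max 4*y1 + y2    s.t.  y1 + y2 <= 3,  y1 <= 2,  y >= 0.
x = (2.0, 3.0)  # a feasible (not optimal) primal point
y = (2.0, 0.0)  # a feasible (not optimal) dual point

assert x[0] + x[1] >= 4 and x[0] >= 1  # primal feasibility
assert y[0] + y[1] <= 3 and y[0] <= 2  # dual feasibility

primal_obj = 3 * x[0] + 2 * x[1]   # 12.0
dual_obj = 4 * y[0] + y[1]         # 8.0
print(dual_obj, "<=", primal_obj)  # weak duality holds: 8.0 <= 12.0
```

The gap of 4.0 between the two feasible points is exactly the kind of quantity an algorithm monitors as a termination criterion: when the best primal and dual values meet (here both optima equal 9), the solutions are certified optimal.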

At level 1, the lower and upper bounds for the depth-first search are (−6.667, +∞), while for the breadth-first search they are (−6.667, −5). At level 2, the lower and upper bounds for the depth-first search are (−6.667, +∞), while for the breadth-first search they are (−6.667, −6). At level 3, the lower and upper bounds for the depth-first search are (−6.5, −5), while the breadth-first search has already terminated at −6, since there are no other candidate subproblems in the list. When backtracking begins for the depth-first search, we find at node 5 the upper bound of −6; we subsequently check node 6 and terminate with the least upper bound of −6 as the optimal solution. [Pg.105]

If the primal problem at iteration k is feasible, then its solution provides information on x^k and f(x^k, y^k), which is the upper bound, and the optimal multiplier vectors λ^k and μ^k for the equality and inequality constraints. Subsequently, using this information we can formulate the Lagrange function as... [Pg.116]

