Big Chemical Encyclopedia


Value Objective Function

Clearly, since g(y) ≥ f(x), the solution of Y(y) cannot lead to a better objective function value. [Pg.281]

The disadvantage of the lower-bound elimination criterion is that it cannot eliminate solutions whose lower bounds are better than the objective function value of the optimal solution. [Pg.281]

The LP objective function value approximates the best integer solution value that could be generated by solutions emanating from that node. [Pg.282]

In addition to the elimination of partial solutions on the basis of their lower-bound values, we can provide two mechanisms that operate directly on pairs of partial solutions. These two mechanisms are based on dominance and equivalence conditions. The utility of these conditions comes from the fact that we need not have found a feasible solution to use them, and that the lower-bound values of the eliminated solutions do not have to be higher than the objective function value of the optimal solution. This is particularly important in scheduling problems where one may have a large number of equivalent schedules due to the use of equipment with identical processing characteristics, and many batches with equivalent demands on the available resources. [Pg.282]
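The payoff from equivalence elimination can be sketched in miniature. In the single-machine scheduling fragment below, two partial schedules covering the same set of batches finish at the same time, so only the cheaper one needs to be extended; the rest are discarded without ever computing a bound. The function name and the cost model (total weighted completion time) are illustrative assumptions, not taken from the text.

```python
def best_schedule(jobs):
    """jobs: list of (processing_time, weight) tuples.

    Enumerate partial schedules level by level. Two partial schedules over
    the same SET of jobs complete at the same time, so the one with the
    larger accumulated weighted-completion cost is dominated and dropped --
    an equivalence/dominance elimination, no feasible solution needed yet.
    """
    best = {frozenset(): 0.0}          # job set -> cheapest partial cost
    n = len(jobs)
    for _ in range(n):
        nxt = {}
        for done, cost in best.items():
            t = sum(jobs[i][0] for i in done)   # completion time so far
            for i in range(n):
                if i in done:
                    continue
                p, w = jobs[i]
                key = done | {i}
                c = cost + w * (t + p)          # add weighted completion
                if key not in nxt or c < nxt[key]:
                    nxt[key] = c                # keep only the dominant one
        best = nxt
    return min(best.values())
```

With identical batches this pruning collapses whole families of interchangeable schedules into a single representative, which is exactly why it matters in the scheduling setting described above.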

The intuitive notion behind a dominance condition, D, is that by comparing certain properties of partial solutions x and y, we will be able to determine that for every solution to the problem Y(y) we will be able to find a solution to Y(x) which has a better objective function value (Ibaraki, 1977). In the flowshop scheduling problem several dominance conditions, sometimes called elimination criteria, have been developed (Baker, 1975; Szwarc, 1971). We will state only the simplest ... [Pg.282]

Note that the set X* is defined such that it contains all solutions that have not been dominated by any other solution. Property 4 guarantees that we will not miss an element of the optimal set, since the objective function value of x is lower than that of y. [Pg.283]

The final relationship between solution subsets, which we can use to curtail the enumeration of one subset, is an equivalence condition, EQ. Intuitively, equivalence between subsets means that for every solution in one subset, y, we can find a solution in the subset x which has the same objective function value and plays a similar role in the execution of the algorithm. [Pg.283]

First, let us concentrate on the meaning of Condition-a. Essentially, we are required to ensure that all the solutions that lie below nodes x and y have the same objective function value, and that x and y engender the same set of solutions. A problem involving a finite set of objects, each of which must be included in the final solution, is termed a finite-set problem. For finite-set problems, the first step in identifying candidates is to find two solutions x, y ∈ S that have equal objective function values. We then try to identify their ancestors x, y, such that the sets of alphabetic symbols contained in x and y are equal. If in the process of finding this common stem we find that the stem is the same for both x and y, then we have the situation depicted in Fig. 4, and x = y. This is of no value: a solution is trivially equivalent to itself. We term this ancestral equality. So far, all that has been established for x and y is that there exist two... [Pg.295]

Two solutions can have different tails which lead to solutions with equal objective function values but which cannot produce equivalence relations because they have the same ancestor. [Pg.295]

To make the above discussion more concrete, consider the example branching structure of Fig. 3. In this structure, we have identified an x and y which have the same objective function values, and their ancestors (x, y), which are characterized by the same set of symbols (i.e., batches). Furthermore, we can see that the children of x, y do indeed satisfy the requirements of Condition-a and Condition-b, and hence (x, y) would be considered as candidates to develop a new equivalence relationship. If we examine the partial schedules (x, y) as depicted in Fig. 6, our knowledge... [Pg.296]

In an earlier section, we alluded to the need to stop the reasoning process at some point. The operationality criterion is the formal statement of that need. In most problems we have some understanding of which properties are easy to determine. For example, a property such as the processing time of a batch is normally given to us and hence is determined by a simple database lookup. The optimal solution to a nonlinear program, on the other hand, is not a simple property, and hence we might look for a simpler explanation of why two solutions have equal objective function values. In the case of our branch-and-bound problem, the operationality criterion imposes two requirements ... [Pg.318]


The solution obtained from the exact MINLP is not globally optimal. This is due to the fact that the value of the objective function found in the exact solution is not equal to that of the relaxed MILP. The objective function value in the relaxed solution was 1.8602 × 10⁶ c.u., a slight improvement over that found in the exact model. [Pg.137]

The optimal number of time points used in this example is 8. The objective function value of the MILP, from the first step, was 769.3 t, which is not the same as that of the exact model. This means that the solution found is only locally optimal. [Pg.141]

The schedule shown in Fig. 7.2 produces 461.7 t of effluent, which results in an objective function value of 2.65 × 10⁶ c.u. If wastewater recycle/reuse had not been considered, the resulting effluent would have been 34% more for the same amount of product. The solution given above cannot be seen as a globally optimal solution since the model is an MINLP. One would notice in Fig. 7.2 that storage vessel one only receives water from unit 1 and storage vessel two only receives... [Pg.166]

The resulting number of binary variables for both the MILP and MINLP was 360 with 6 time points. The optimal value of the objective function for both the MILP and MINLP was 3620.52 kg of water, which means that the solution obtained is globally optimal (Gouws et al., 2008). This is a lower objective function value than that achieved in the previous case. In this case an improved savings of 15.3% is achieved. The final solution was attained in 896.37 CPU seconds using the same processor as in the first illustrative example. [Pg.216]

Banga et al. [in State of the Art in Global Optimization, C. Floudas and P. Pardalos (eds.), Kluwer, Dordrecht, p. 563 (1996)]. All these methods require only objective function values for unconstrained minimization. Associated with these methods are numerous studies on a wide range of process problems. Moreover, many of these methods include heuristics that prevent premature termination (e.g., directional flexibility in the complex search as well as random restarts and direction generation). Figure 3-58 illustrates the performance of a pattern search method and a random search method on an unconstrained problem. [Pg.65]
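A minimal direct-search sketch in the spirit of the methods surveyed above: it consults only objective function values, never derivatives. The step-shrinking rule, the parameter values, and the test function are illustrative assumptions, not the mechanics of any of the cited algorithms.

```python
import random

def random_search(f, x0, iters=2000, step=0.5, shrink=0.999, seed=0):
    """Improve x by accepting random perturbations that lower f(x).
    Only objective function values are used (derivative-free search)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:                 # keep only improving points
            x, fx = cand, fc
        step *= shrink              # slowly contract the search step
    return x, fx

# Unconstrained test function with minimum 0 at (1, -2)
x_best, f_best = random_search(
    lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

Without the step contraction (a simple heuristic of the kind mentioned above), a fixed-step version stalls near the optimum because improving moves become rare.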

Due to the campaign structure, the existing decomposition techniques in the SNP optimizer, such as time decomposition and product decomposition, are not applicable. For problems with this structure it is possible to use resource decomposition, provided a good sequence for planning the campaign resources can be derived. However, in our case, problem instances could be solved without decomposition on a 2-GHz Pentium IV within one hour, to a solution whose objective value deviates by at most one percent from the optimal objective function value. [Pg.258]

The last entry in Table 1.1 involves checking the candidate solution to determine that it is indeed optimal. In some problems you can check that the sufficient conditions for an optimum are satisfied. More often, an optimal solution may exist, yet you cannot demonstrate that the sufficient conditions are satisfied. All you can do is show by repetitive numerical calculations that the value of the objective function is superior to all known alternatives. A second consideration is the sensitivity of the optimum to changes in parameters in the problem statement. A sensitivity analysis for the objective function value is important and is illustrated as part of the next example. [Pg.20]

Observe that the objective function value for 20 ≤ D < 60 does not vary significantly. However, not all functions behave like C in Equation (d)—some exhibit sharp changes in the objective function near the optimum. [Pg.24]

Many real problems do not satisfy these convexity assumptions. In chemical engineering applications, equality constraints often consist of input-output relations of process units that are often nonlinear. Convexity of the feasible region can only be guaranteed if these constraints are all linear. Also, it is often difficult to tell if an inequality constraint or objective function is convex or not. Hence it is often uncertain if a point satisfying the KTC is a local or global optimum, or even a saddle point. For problems with a few variables we can sometimes find all KTC solutions analytically and pick the one with the best objective function value. Otherwise, most numerical algorithms terminate when the KTC are satisfied to within some tolerance. The user usually specifies two separate tolerances: a feasibility tolerance ε_f and an optimality tolerance ε_o. A point x is feasible to within ε_f if... [Pg.281]
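A schematic version of such a two-tolerance termination test, for equality constraints only. The function, its signature, and the Euclidean-norm form of the stationarity residual are assumptions for illustration; they are not the exact test implemented by any particular solver.

```python
import math

def kt_satisfied(x, grad_f, eq_cons, eq_grads, lam, eps_f=1e-6, eps_o=1e-6):
    """Check KTC to within a feasibility and an optimality tolerance.

    eq_cons:  equality constraints h_i(x) = 0 (callables)
    eq_grads: their gradients (callables returning tuples)
    lam:      Lagrange multiplier estimates, one per constraint
    """
    # feasibility: every constraint violated by no more than eps_f
    feasible = all(abs(h(x)) <= eps_f for h in eq_cons)
    # stationarity residual: grad f(x) + sum_i lam_i * grad h_i(x)
    resid = [grad_f(x)[k] + sum(l * gh(x)[k] for l, gh in zip(lam, eq_grads))
             for k in range(len(x))]
    stationary = math.sqrt(sum(r * r for r in resid)) <= eps_o
    return feasible and stationary
```

For instance, minimizing x² + y² subject to x + y = 4 gives a stationarity residual of zero at (2, 2) with multiplier −4, while a feasible but non-optimal point such as (3, 1) fails the optimality test.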

The geometry of this problem is shown in Figure 8.11. The linear equality constraint is a straight line, and the contours of constant objective function values are circles centered at the origin. From a geometric point of view, the problem is to find the point on the line that is closest to the origin at x = 0, y = 0. The solution to the problem is at x = 2, y = 2, where the objective function value is 8. [Pg.307]
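The geometric argument above can be checked in a couple of lines. The projection formula used here is standard analytic geometry, not taken from the text: the point on a·x + b·y = c nearest the origin is (a, b) scaled by c / (a² + b²).

```python
def closest_point_on_line(a, b, c):
    """Point on the line a*x + b*y = c nearest the origin
    (orthogonal projection of the origin onto the line)."""
    s = c / (a * a + b * b)
    return a * s, b * s

# The example's constraint x + y = 4, objective x^2 + y^2
x, y = closest_point_on_line(1.0, 1.0, 4.0)
obj = x * x + y * y      # squared distance from the origin
```

This recovers the solution quoted above: x = 2, y = 2, with objective function value 8.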

Node 1. The first step is to set up and solve the relaxation of the binary IP via LP. The optimal solution has one fractional (noninteger) variable (y2) and an objective function value of 129.1. Because the feasible region of the relaxed problem includes the feasible region of the initial IP problem, 129.1 is an upper bound on the value of the objective function of the IP. If we knew a feasible binary solution, its objective value would be a lower bound on the value of the objective function, but none is assumed here, so the lower bound is set to −∞. There is as yet no incumbent, which is the best feasible integer solution found thus far. [Pg.355]

Node 4. Node 4 has an integer solution, with an objective function value of 44, which is smaller than that of the incumbent obtained previously. The incumbent is unchanged, and this node is fathomed. [Pg.357]

Node 5. Node 5 has a fractional solution with an objective function value of 113.81, which is smaller than the lower bound of 126.0. Any successors of this node have objective values less than or equal to 113.81 because their LP relaxations are formed by adding constraints to the current one. Hence we can never find an integer solution with objective value higher than 126.0 by further branching from node 5, so node 5 is fathomed. Because there are no dangling nodes, the problem is solved, with the optimum corresponding to node 2. [Pg.357]
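The fathoming logic of this walkthrough — an LP-relaxation upper bound tested against an incumbent lower bound — can be condensed into a short sketch. A 0-1 knapsack stands in for the binary IP here; the problem, the data in the usage line, and the greedy fractional bound are illustrative assumptions.

```python
def knapsack_bb(values, weights, cap):
    """Branch and bound for a 0-1 knapsack (maximization).

    The LP relaxation of the subproblem at each node is solved greedily
    (items in value/weight order, last item fractional); a node is fathomed
    when this upper bound cannot beat the incumbent's objective value.
    """
    n = len(values)
    idx = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in idx]
    w = [weights[i] for i in idx]
    best = [0.0]                      # incumbent objective (lower bound)

    def bound(k, cap_left, val):
        # LP-relaxation value of the node: fill greedily from item k on
        for j in range(k, n):
            if w[j] <= cap_left:
                cap_left -= w[j]
                val += v[j]
            else:
                return val + v[j] * cap_left / w[j]   # fractional item
        return val

    def node(k, cap_left, val):
        if val > best[0]:
            best[0] = val             # new incumbent found
        if k == n:
            return
        if bound(k, cap_left, val) <= best[0]:
            return                    # fathomed: bound cannot beat incumbent
        if w[k] <= cap_left:
            node(k + 1, cap_left - w[k], val + v[k])  # branch: take item k
        node(k + 1, cap_left, val)                    # branch: skip item k

    node(0, cap, 0.0)
    return best[0]

opt = knapsack_bb([60, 100, 120], [10, 20, 30], 50)   # hypothetical data
```

As in node 5 above, adding constraints (fixing an item in or out) can only lower the relaxation value, which is what justifies the fathoming test.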

Table 10.8 shows the result of applying the Standard GRG Solver to a two-variable, one-constraint problem called the Branin problem that has three local optima and a global optimum with objective function value of 0.397. The objective function is constructed in three steps ... [Pg.405]

Table 10.10 shows the performance of the evolutionary solver on this problem in eight runs, starting from an initial point of zero. The first seven runs used the iteration limits shown, but the eighth stopped when the default time limit of 100 seconds was reached. For the same number of iterations, different final objective function values are obtained in each run because of the random mechanisms used in the mutation and crossover operations and the randomly chosen initial population. The best value of 811.21 is not obtained in the run that uses the most iterations or computing time, but in the run that was stopped after 10,000 iterations. This final value differs from the true optimal value of 839.11 by 3.32%, a significant difference, and the final values of the decision variables are quite different from the optimal values shown in Table 10.9. [Pg.407]

Table E14.1B lists the optimal solution of this problem obtained using the Excel Solver (case 1). Note that the maximum amount of ethylene is produced. As the ethylene production constraint is relaxed, the objective function value increases. Once the constraint is raised above 90,909 lb/h, the objective function remains constant.
As was indicated in Section 7.2, the vector of measurement adjustments, e, has a multivariate normal distribution with zero mean and covariance matrix V. Thus, the objective function value of the least squares estimation problem (7.21), ofv = eᵀ V⁻¹ e, has a central chi-square distribution with a number of degrees of freedom equal to the rank of A. [Pg.144]
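For the special case of a diagonal covariance matrix V = diag(σᵢ²), the objective value collapses to a sum of squared standardized adjustments, which makes the chi-square comparison easy to sketch. The adjustment data below are hypothetical; 7.815 is the standard 95% chi-square critical value for 3 degrees of freedom.

```python
def ofv_diagonal(e, sigma):
    """ofv = e^T V^{-1} e for V = diag(sigma_i^2):
    a sum of squared standardized adjustments."""
    return sum((ei / si) ** 2 for ei, si in zip(e, sigma))

CHI2_95_DF3 = 7.815                  # chi-square 95% point, 3 dof

adjustments = [0.8, -1.1, 0.5]       # hypothetical measurement adjustments
sigmas = [1.0, 1.0, 1.0]

ofv = ofv_diagonal(adjustments, sigmas)
gross_error = ofv > CHI2_95_DF3      # flag if ofv exceeds the critical value
```

Here ofv = 2.10 falls well below the critical value, so no gross error would be declared for this set of adjustments.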

The preceding results are applied to develop a strategy that allows us to isolate the source of gross errors from a set of constraints and measurements. Different least squares estimation problems are solved by adding one equation at a time to the set of process constraints. After each incorporation, the least squares objective function value is calculated and compared with the critical value. [Pg.145]

Then the values of the test statistic for all combinations are compared with the critical value. The presence of gross errors corresponds to the combinations with low objective function values (ofv). Detailed algorithms for Stages 1 and 2 are included in Appendix B. [Pg.146]

The objective function value is equal for different combinations of gross errors... [Pg.148]

