Big Chemical Encyclopedia


Feasible regions

As the goal is to minimize the objective function, moving off the active constraint into the interior of the feasible region must not decrease the objective function. Using the shadow price argument above, it is evident that the multiplier must be nonnegative (Ref. 177). [Pg.485]
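The multiplier-sign argument can be sketched in first-order terms (a standard derivation, assuming the convention that the feasible region is g(x) ≤ 0 and that the constraint is active at the minimum):

```latex
% At a constrained minimum x^* with active constraint g(x^*) = 0,
% stationarity of the Lagrangian L = f + \lambda g gives
\nabla f(x^*) + \lambda \, \nabla g(x^*) = 0 .
% A direction d pointing into the feasible region satisfies
% \nabla g(x^*) \cdot d \le 0, so the first-order change in f is
\nabla f(x^*) \cdot d
  \;=\; -\lambda \, \nabla g(x^*) \cdot d
  \;=\; \lambda \bigl( -\nabla g(x^*) \cdot d \bigr) .
% Since -\nabla g(x^*) \cdot d \ge 0, requiring \nabla f(x^*) \cdot d \ge 0
% (no feasible descent direction at a minimum) forces \lambda \ge 0.
```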

The foregoing provides background material for the basic ideas used in the sequel. The essential observation is that with each iteration the value of the objective function improves towards the optimum value or, at worst, remains the same, for iterations within the region that satisfies all the constraints. This region is called the feasible region. [Pg.293]

Now consider the influence of the inequality constraints on the optimization problem. The effect of inequality constraints is to reduce the size of the solution space that must be searched. However, the way in which the constraints bound the feasible region is important. Figure 3.10 illustrates the concept of convex and nonconvex regions. [Pg.42]

It is also worth noting that the stochastic optimization methods described previously are readily adapted to the inclusion of constraints. For example, in simulated annealing, if a move suggested at random takes the solution outside of the feasible region, then the algorithm can be constrained to prevent this by simply setting the probability of that move to 0. [Pg.43]
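A minimal sketch of this rejection strategy, assuming a generic simulated-annealing loop (the objective, constraint, and tuning parameters below are illustrative, not from the source): any candidate move that lands outside the feasible region is simply discarded, i.e. its acceptance probability is 0.

```python
import math
import random

def simulated_annealing(f, feasible, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    """Minimize f over points where feasible(x) is True.
    Moves that leave the feasible region are rejected outright
    (their acceptance probability is set to 0)."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = tuple(xi + random.uniform(-step, step) for xi in x)
        if feasible(cand):  # infeasible candidates: acceptance probability 0
            fc = f(cand)
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy problem: minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2.
# The constrained minimum is at (0.5, 1.5), where f = 0.5.
random.seed(0)
xopt, fopt = simulated_annealing(
    lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
    lambda p: p[0] + p[1] <= 2.0,
    (0.0, 0.0),
)
```

Since only feasible candidates are ever accepted, the reported best point is guaranteed to satisfy the constraint.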

Whilst Example 3.1 is an extremely simple example, it illustrates a number of important points. If the optimization problem is completely linear, the solution space is convex and a global optimum solution can be generated. The optimum always occurs at an extreme point, as illustrated in Figure 3.12. The optimum cannot occur inside the feasible region; it must lie on the boundary. For linear functions, moving along the gradient always increases the objective function until a boundary of the feasible region is hit. [Pg.44]
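A brute-force illustration of the extreme-point property (the instance below is hypothetical, not the book's Example 3.1): for a small linear program it suffices to enumerate the pairwise intersections of the constraint boundaries, keep the feasible ones, and evaluate the linear objective at each vertex.

```python
from itertools import combinations

# Maximize f(x, y) = 3x + 2y over the polytope
#   x + y <= 4,  x <= 3,  y <= 3,  x >= 0,  y >= 0.
# Because the objective is linear, the optimum lies at a vertex
# (an extreme point), so vertex enumeration finds it.
constraints = [  # each row (a, b, c) encodes a*x + b*y <= c
    (1, 1, 4), (1, 0, 3), (0, 1, 3), (-1, 0, 0), (0, -1, 0),
]

def intersect(c1, c2):
    """Intersection of the two boundary lines, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
# best is the vertex (3, 1), with objective value 11
```

Enumerating all vertices scales combinatorially; the simplex method walks between extreme points far more efficiently, but the principle is the same.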

Computationally, the use of pseudocomponents improves the conditioning of the numerical procedures in fitting the mixture model. Graphically, the expansion of the feasible region and the rescaling of the plot axes allow a better visualization of the response contours. [Pg.60]

The perceptional advantages of response contours in illustrating nonlinear blending behavior and the additional information of the experimental boundary locations were incorporated into a generalized algorithm which determines the feasible region on a tricoordinate plot for a normal or pseudocomponent mixture having any number of constrained components. [Pg.60]

Given the components to be plotted and the fixed values of the remaining components, the algorithm first tests the existence of a feasible region by ... [Pg.60]

Example: Feasible Region Determination and Rescaling. McLean and Anderson (9) described a mixture experiment in which magnesium (X1), sodium nitrate (X2), strontium nitrate (X3), and binder (X4) were combined and ignited to produce flares varying in intensity. The four components had the following ranges ... [Pg.60]

Initialize algorithm: calculate upper and lower bounds over the entire (relaxed) feasible region. [Pg.66]

In general, branch-and-bound [5] is an enumerative search space exploration technique that successively constructs a decision tree. In each node, the feasible region is divided into two or more disjoint subsets, which are then assigned to child nodes. During the search space exploration for minimization problems, a lower bound of the objective function is computed in each node and compared against the lowest upper bound found so far. If the lower bound is greater than the upper bound, the corresponding branch is said to be fathomed and is not explored any further. The exploration terminates when a specified gap between the upper and lower bounds is reached or when all possible subsets have been enumerated. [Pg.198]
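The passage describes the minimization form (node lower bounds compared against the incumbent upper bound); the sketch below uses the mirror-image maximization form on a tiny, hypothetical 0/1 knapsack instance. The fractional relaxation supplies the optimistic bound in each node, and a branch is fathomed as soon as its bound cannot beat the incumbent.

```python
# Branch-and-bound on a tiny 0/1 knapsack (maximization).
# Items are pre-sorted by value/weight ratio so the fractional
# relaxation below is a valid optimistic bound.
values  = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def bound(i, value, room):
    """Optimistic bound for a node: fill the remaining room
    fractionally with the remaining items (LP relaxation)."""
    b = value
    for j in range(i, len(values)):
        if weights[j] <= room:
            room -= weights[j]
            b += values[j]
        else:
            b += values[j] * room / weights[j]
            break
    return b

best = 0  # incumbent: best feasible value found so far

def branch(i, value, room):
    """Explore the node deciding item i; fathom when the bound
    cannot improve on the incumbent."""
    global best
    if value > best:
        best = value
    if i == len(values) or bound(i, value, room) <= best:
        return  # node fathomed (or all items decided)
    if weights[i] <= room:                      # child 1: take item i
        branch(i + 1, value + values[i], room - weights[i])
    branch(i + 1, value, room)                  # child 2: skip item i

branch(0, 0, capacity)
# best == 220 (items 2 and 3, total weight 50)
```

Here the tree is pruned whenever the relaxation bound is no better than the incumbent, which is exactly the fathoming rule of the text (with the inequality directions flipped for maximization).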

The parameter f_max denotes a conservative upper bound of f(x). For the 2S-MILP it is easily calculated by maximizing the integer relaxation of (DEP). A positive penalty term p(x) is used to measure the amount of infeasibility. This steers the search in infeasible regions towards the feasible region. The penalty for the violation of the first-stage constraints is provided by ... [Pg.205]
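The (DEP) formulation itself is not reproduced in this excerpt, so the sketch below only illustrates the general shape of such a penalized objective (all names and numbers are hypothetical): feasible points score f(x), while infeasible points score the conservative bound f_max plus a weighted violation, so any feasible point always beats any infeasible one.

```python
# Generic penalized merit function for constraints written as g(x) <= 0.
def penalized(f, constraints, f_max, weight=10.0):
    """Return a merit function: f(x) on the feasible region,
    f_max + weight * (total violation) outside it, which steers
    a search in infeasible regions back toward feasibility."""
    def merit(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        if violation == 0.0:
            return f(x)
        return f_max + weight * violation  # worse than any feasible point
    return merit

# Toy example: minimize x^2 subject to x >= 1 (i.e. 1 - x <= 0),
# with an assumed conservative bound f_max = 100.
merit = penalized(lambda x: x * x, [lambda x: 1.0 - x], f_max=100.0)
# merit(2.0) == 4.0   (feasible: plain objective)
# merit(0.5) == 105.0 (infeasible: bound plus penalty)
```

Because f_max bounds f(x) from above over the feasible region, the penalized value of an infeasible point can never undercut a feasible one.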

Feasible region for an optimization problem involving two independent variables. The dashed lines mark the infeasible side of each inequality constraint in the plane; the heavy line outlines the feasible region. [Pg.15]

A vector x is feasible if it satisfies all the constraints. The set of all feasible points is called the feasible region F. If F is empty, the problem is infeasible; if feasible points exist at which the objective f is arbitrarily large in a max problem or arbitrarily small in a min problem, the problem is unbounded. A point (vector) x is termed a local extremum (minimum) if... [Pg.118]
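These definitions can be made concrete with a one-line feasibility test (the constraints below are toy examples chosen for illustration): consistent bounds give a nonempty F, while contradictory bounds give an empty F, i.e. an infeasible problem.

```python
# A point is feasible iff it satisfies every constraint g(x) <= 0;
# the feasible region F is the set of all such points.
def is_feasible(x, constraints):
    return all(g(x) <= 0 for g in constraints)

cons_ok  = [lambda x: -x, lambda x: x - 1]   # 0 <= x <= 1: F nonempty
cons_bad = [lambda x: x, lambda x: 1 - x]    # x <= 0 and x >= 1: contradictory

samples = [i / 100 for i in range(-100, 201)]          # grid on [-1, 2]
F_ok  = [x for x in samples if is_feasible(x, cons_ok)]
F_bad = [x for x in samples if is_feasible(x, cons_bad)]
# F_ok contains the grid points in [0, 1]; F_bad is empty,
# so the second problem is infeasible in the sense defined above.
```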

A typical feasible region for a problem with two variables and the constraints... [Pg.118]

This problem is shown in Figure 4.5. The feasible region is defined by linear constraints with a finite number of corner points. The objective function, being nonlinear, has contours (the concentric circles, or level sets) of constant value that are not parallel lines, as would occur if it were linear. The minimum value of f corresponds to the contour of lowest value having at least one point in common with the feasible region, that is, at x1 = 2, x2 = 3. This is not an extreme point of the feasible set, although it is a boundary point. For linear programs the minimum is always at an extreme point, as shown in Chapter 7. [Pg.119]
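One toy problem consistent with this description (an assumption; the figure's actual constraint set is not reproduced here): minimize the quadratic f(x1, x2) = (x1 - 3)^2 + (x2 - 4)^2 subject to x1 + x2 <= 5 and x1, x2 >= 0. The circular contours are centered at the infeasible point (3, 4), so the minimum falls at the boundary point (2, 3), which is not a corner of the feasible region. A projected-gradient sketch:

```python
# Projected gradient descent on f(x1, x2) = (x1-3)^2 + (x2-4)^2
# over the region {x1 + x2 <= 5, x1 >= 0, x2 >= 0}.
def grad(x1, x2):
    return (2 * (x1 - 3), 2 * (x2 - 4))

def project(x1, x2):
    """Clamp to the nonnegative quadrant, then project onto the
    halfplane x1 + x2 <= 5 (adequate here: iterates stay nonnegative)."""
    x1, x2 = max(x1, 0.0), max(x2, 0.0)
    if x1 + x2 > 5:                      # move onto the line x1 + x2 = 5
        s = (x1 + x2 - 5) / 2
        x1, x2 = x1 - s, x2 - s
    return max(x1, 0.0), max(x2, 0.0)

x = (0.0, 0.0)
for _ in range(500):
    g = grad(*x)
    x = project(x[0] - 0.1 * g[0], x[1] - 0.1 * g[1])
# x converges to approximately (2.0, 3.0): a boundary point of the
# feasible region, but not one of its corners (0,0), (5,0), (0,5).
```

The iterates are repeatedly pulled toward the infeasible unconstrained minimum (3, 4) and projected back, settling where the lowest contour touches the boundary.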

Neither of the problems illustrated in Figures 4.5 and 4.6 had more than one optimum. It is easy, however, to construct nonlinear programs in which local optima occur. For example, if the objective function f had two minima and at least one was interior to the feasible region, then the constrained problem would have two local minima. Contours of such a function are shown in Figure 4.7. Note that the minimum at the boundary point x1 = 3, x2 = 2 is the global minimum, at f = 3; the feasible local minimum in the interior of the constraints is at f = 4. [Pg.120]

Although the examples thus far have involved linear constraints, the chief nonlinearity of an optimization problem often appears in the constraints. The feasible region then has curved boundaries. A problem with nonlinear constraints may have local optima, even if the objective function has only one unconstrained optimum. Consider a problem with a quadratic objective function and the feasible region shown in Figure 4.8. The problem has local optima at the two points a and b because no point of the feasible region in the immediate vicinity of either point yields a smaller value of f. [Pg.120]

In summary, the optimum of a nonlinear programming problem is, in general, not at an extreme point of the feasible region and may not even be on the boundary. Also, the problem may have local optima distinct from the global optimum. These properties are direct consequences of nonlinearity. A class of nonlinear problems can be defined, however, that are guaranteed to be free of distinct local optima. They are called convex programming problems and are considered in the following section. [Pg.121]







© 2024 chempedia.info