Big Chemical Encyclopedia


Nonconvex optimization problem

Remark 4 This important lower-upper bound result for the dual-primal pair of problems, provided by the weak duality theorem, is not based on any convexity assumption. Hence, it is of great use for nonconvex optimization problems, as long as the dual problem can be solved efficiently. [Pg.83]
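The weak duality bound is easy to check numerically. The sketch below uses a toy one-dimensional problem of my own construction (not one from the text) and evaluates the Lagrangian dual function by brute force over a grid, confirming that every dual value lies below the primal optimum even though the objective is nonconvex:

```python
import numpy as np

# Toy nonconvex primal (illustrative, not from the text):
#   minimize f(x) = x^4 - 8x^2   subject to   g(x) = 2.5 - x <= 0
f = lambda x: x**4 - 8.0*x**2
g = lambda x: 2.5 - x

x = np.linspace(-4.0, 4.0, 40001)
p_star = f(x[g(x) <= 0]).min()                 # primal optimum (attained at x = 2.5)

# Dual function q(lam) = min_x [f(x) + lam*g(x)] for lam >= 0,
# evaluated by brute-force minimization over the x grid.
lams = np.linspace(0.0, 30.0, 3001)
q = np.array([(f(x) + lam*g(x)).min() for lam in lams])
d_star = q.max()                               # best dual lower bound

# Weak duality: every dual value bounds the primal optimum from below,
# with no convexity assumption on f.
assert np.all(q <= p_star + 1e-6)
```

Here the dual happens to close the gap; in general a nonconvex primal may leave a strictly positive duality gap, but the lower-bound property always holds.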

While these optimization-based approaches have yielded very useful results for reactor networks, they have a number of limitations. First, proper problem definition for reactor networks is difficult, given the uncertainties in the process and the need to consider the interaction of other process subsystems. Second, all of the above-mentioned studies formulated nonconvex optimization problems for the optimal network structure and relied on local optimization tools to solve them. As a result, only locally optimal solutions could be guaranteed. Given the likelihood of extreme nonlinear behavior, such as bifurcations and multiple steady states, even locally optimal solutions can be quite poor. In addition, superstructure approaches are usually plagued by the question of completeness of the network, as well as the possibility that a better network may have been overlooked by a limited superstructure. This problem is exacerbated by reaction systems with many networks that have identical performance characteristics. (For instance, a single PFR can be approximated by a large train of CSTRs.) In most cases, the simpler network is clearly more desirable. [Pg.250]
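The parenthetical observation about a CSTR train approximating a PFR can be verified numerically: for a first-order reaction with total residence time tau, the conversion of N equal CSTRs in series approaches the plug-flow conversion as N grows. The rate constant and residence time below are arbitrary illustrative values:

```python
import math

# First-order reaction A -> B; k and tau are arbitrary illustrative values.
k, tau = 1.0, 2.0

x_pfr = 1.0 - math.exp(-k*tau)            # plug-flow (PFR) conversion

def x_cstr_train(n):
    """Conversion of n equal-volume CSTRs in series, total residence time tau."""
    return 1.0 - (1.0 + k*tau/n)**(-n)

# The train's conversion increases monotonically toward the PFR limit.
for n in (1, 2, 5, 20, 100):
    print(f"{n:4d} CSTRs: X = {x_cstr_train(n):.4f}")
print(f" PFR limit: X = {x_pfr:.4f}")
```

This is why many structurally different networks can have nearly identical performance, which complicates superstructure search.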

Local Solution of Nonconvex Problem. The nonconvex optimization problem (1) is solved locally within the current variable bounds [x^L, x^U]. If the solution is ε-feasible, the upper bound is updated as follows ... [Pg.277]
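A minimal sketch of this upper-bounding step, with made-up problem data and SciPy's SLSQP standing in as the local NLP solver (the real implementation's solver, problem, and tolerances will differ):

```python
import numpy as np
from scipy.optimize import minimize

# Nonconvex NLP solved locally inside the current box [xL, xU]; an
# eps-feasible local solution yields a valid upper bound on the global minimum.
f = lambda x: x[0]**4 - 3.0*x[0]**2 + x[1]**2      # nonconvex objective (toy)
g = lambda x: x[0] + x[1] - 1.0                    # constraint g(x) <= 0 (toy)

eps = 1e-6
xL, xU = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
ub = np.inf                                        # incumbent upper bound

res = minimize(f, x0=np.array([0.5, 0.5]), bounds=list(zip(xL, xU)),
               constraints=[{"type": "ineq", "fun": lambda x: -g(x)}],
               method="SLSQP")
if res.success and g(res.x) <= eps:                # eps-feasible local solution?
    ub = min(ub, res.fun)                          # update the upper bound
```

In a branch-and-bound scheme this upper bound is then compared against lower bounds from a convex relaxation to prune boxes.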

Both experimental and theoretical methods exist for the prediction of protein structures. In both cases, additional restraints on the molecular system can be derived and used to formulate a nonconvex optimization problem. Here, the traditional unconstrained problem was recast as a constrained global optimization problem and was applied to protein structure prediction using NMR data. Both the formulation and solution approach of this method differ from traditional techniques, which generally rely on the optimization of a penalty-type target function using SA/MD protocols. [Pg.359]

Now consider the influence of the inequality constraints on the optimization problem. The effect of inequality constraints is to reduce the size of the solution space that must be searched. However, the way in which the constraints bound the feasible region is important. Figure 3.10 illustrates the concept of convex and nonconvex regions. [Pg.42]

As shown in Fig. 3-53, optimization problems that arise in chemical engineering can be classified in terms of continuous and discrete variables. For the former, nonlinear programming (NLP) problems form the most general case, and widely applied specializations include linear programming (LP) and quadratic programming (QP). An important distinction for NLP is whether the optimization problem is convex or nonconvex. The latter NLP problem may have multiple local optima, and an important question is whether a global solution is required for the NLP. Another important distinction is whether the problem is assumed to be differentiable or not. [Pg.60]
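The practical consequence of nonconvexity, that a local NLP solver's answer depends on its starting point, can be seen with a tilted double-well objective (a made-up toy function, not from the text):

```python
from scipy.optimize import minimize

# Tilted double well: two local minima, near x = -1.03 (global) and x = +0.96.
f = lambda x: (x[0]**2 - 1.0)**2 + 0.3*x[0]

# The same local solver started from two different points returns
# two different local optima with different objective values.
sols = [minimize(f, [x0], method="L-BFGS-B").x[0] for x0 in (-1.5, 1.5)]
vals = [f([s]) for s in sols]
```

Only a global method (or a convexity guarantee) can certify which of the returned solutions is the true optimum.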

V. Visweswaran. Global optimization of nonconvex, nonlinear problems. PhD thesis, Princeton University, 1995. [Pg.450]

The on-line optimization problem is nonconvex; therefore, guarantees for reaching the global optimum may be hard to obtain. [Pg.184]

Extensive reviews on global optimization can be found in Horst (1990) and Horst and Tuy (1990). In this section we present a summary of a global optimization method that has been developed by Quesada and Grossmann for solving nonconvex NLP problems which have the special structure that they involve linear fractional and bilinear terms. It should be noted that global optimization has clearly become one of the new trends in optimization and synthesis, and active workers involved in this area include Floudas and Visweswaran (1990), Swaney (1990), Manousiouthakis and Sourlas (1992), and Sahinidis (1993). [Pg.221]
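For the bilinear terms mentioned above, the standard convex relaxation is the McCormick envelope: four linear inequalities that under- and over-estimate the product x*y over a box. A quick numerical check (with an arbitrary box chosen for illustration) confirms that the estimators sandwich the true product:

```python
import random

# McCormick envelope of the bilinear term w = x*y over the box
# xL <= x <= xU, yL <= y <= yU (box values are arbitrary for illustration).
xL, xU, yL, yU = 0.0, 2.0, 1.0, 3.0

def mccormick(x, y):
    lo = max(xL*y + x*yL - xL*yL,      # w >= the two under-estimators
             xU*y + x*yU - xU*yU)
    hi = min(xU*y + x*yL - xU*yL,      # w <= the two over-estimators
             xL*y + x*yU - xL*yU)
    return lo, hi

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(xL, xU), random.uniform(yL, yU)
    lo, hi = mccormick(x, y)
    assert lo - 1e-9 <= x*y <= hi + 1e-9   # envelope sandwiches the product
```

Replacing each bilinear term with its envelope yields a convex (indeed linear) relaxation whose optimum is a valid lower bound for the nonconvex NLP.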

Consider a nonconvex multi-objective optimization problem: minimize f1(x), f2(x) ... [Pg.187]

With nonlinear DAEs, the problem given in Equations 14.1 through 14.11 is nonconvex and may have multiple optima, so any solution we determine to the dynamic optimization problem is only guaranteed to be locally optimal. [Pg.545]

C.D. Maranas, I.P. Androulakis and C.A. Floudas, A deterministic global optimization approach for the protein folding problem, pp. 133-150 in Global Minimization of Nonconvex Energy Functions: Molecular Conformation and Protein Folding (P. M. Pardalos et al., eds.), Amer. Math. Soc., Providence, RI, 1996. [Pg.223]

Visweswaran, V. and Floudas, C. A. (1990). A global optimization procedure for certain classes of nonconvex NLPs - II. Application of theory and test problems. Comput. Chem. Eng., 14(2), 1419-1434. [Pg.15]

The addition of inequality constraints complicates the optimization. These inequality constraints can form convex or nonconvex regions. If the region is nonconvex, the search can be attracted to a local optimum, even if the objective function is convex in the case of a minimization problem or concave in the case of a maximization problem. If the set of inequality constraints is linear, the resulting feasible region is always convex. [Pg.54]
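The linear case can be checked numerically: midpoints of feasible points of A x <= b never leave the region, while a simple nonconvex region fails the same midpoint test. The particular A, b, and nonconvex region below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear inequalities A x <= b always bound a convex region:
# A(t*x1 + (1-t)*x2) = t*A@x1 + (1-t)*A@x2 <= b whenever A@x1 <= b, A@x2 <= b.
A = np.array([[1.0, 1.0], [-1.0, 2.0], [0.0, -1.0]])   # arbitrary example
b = np.array([2.0, 1.0, 0.5])
feas = lambda p: bool(np.all(A @ p <= b + 1e-12))

inside = [p for p in rng.uniform(-3, 3, size=(5000, 2)) if feas(p)]
for _ in range(1000):
    i, j = rng.integers(len(inside), size=2)
    assert feas((inside[i] + inside[j]) / 2)           # midpoint stays feasible

# A nonconvex region such as ||p|| >= 1 fails the midpoint test:
ring = lambda p: np.linalg.norm(p) >= 1.0
p1, p2 = np.array([1.5, 0.0]), np.array([-1.5, 0.0])
assert ring(p1) and ring(p2) and not ring((p1 + p2) / 2)
```

The convexity proof in the first comment is exact; the sampling merely illustrates it.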

Table 9.1 shows how outer approximation, as implemented in the DICOPT software, performs when applied to the process selection model in Example 9.3. Note that this model does not satisfy the convexity assumptions because its equality constraints are nonlinear. Still, DICOPT does find the optimal solution at iteration 3. Note, however, that the optimal MILP objective value at iteration 3 is 1.446, which is not an upper bound on the optimal MINLP value of 1.923 because the convexity conditions are violated. Hence the normal termination condition, that the difference between upper and lower bounds be less than some tolerance, cannot be used, and DICOPT may fail to find an optimal solution. Computational experience on nonconvex problems has shown that retaining the best feasible solution found thus far, and stopping when the objective value of the NLP subproblem fails to improve, often leads to an optimal solution. DICOPT stopped in this example because the NLP solution at iteration 4 is worse (lower) than that at iteration 3. [Pg.370]

The methods mentioned earlier are general-purpose procedures, applicable to almost any problem. Many specialized global optimization procedures exist for specific classes of nonconvex problems. See Pinter (1996a) for a brief review and further references. Typical problems are... [Pg.383]

This criterion requires a search through a nonconvex multidimensional conformation space that contains an immense number of minima. Optimization techniques that have been applied to the problem include Monte Carlo methods, simulated annealing, genetic methods, and stochastic search, among others. For reviews of the application of various optimization methods refer to Pardalos et al. (1996), Vasquez et al. (1994), or Schlick et al. (1999). [Pg.496]
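As a small illustration of the stochastic-search idea, the sketch below runs SciPy's dual_annealing (a simulated-annealing variant) on the Rastrigin function, a standard rugged test surface with an immense number of local minima used here as a toy stand-in for a conformational energy landscape, not a real force field:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Rastrigin function: rugged surface with many local minima;
# global minimum is 0 at the origin (toy stand-in for a conformation landscape).
def energy(x):
    return 10.0*len(x) + np.sum(x**2 - 10.0*np.cos(2.0*np.pi*x))

bounds = [(-5.12, 5.12)] * 4           # 4 "degrees of freedom"
res = dual_annealing(energy, bounds, seed=0, maxiter=500)
# res.x should land near the origin, escaping the surrounding local minima.
```

A purely local gradient descent from a random start would almost always be trapped in one of the nearby wells; the annealing schedule is what allows uphill moves early in the search.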

