Big Chemical Encyclopedia


Optimization concavity

The addition of inequality constraints complicates the optimization. These inequality constraints can form convex or nonconvex feasible regions. If the region is nonconvex, the search can be attracted to a local optimum even when the objective function is convex (in the case of a minimization problem) or concave (in the case of a maximization problem). If the set of inequality constraints is linear, the resulting region is always convex. [Pg.54]
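A minimal sketch of this effect, under assumed data: a convex objective minimized over a nonconvex feasible set (here, the union of two disjoint intervals, chosen purely for illustration). A local search started in the left interval is trapped at a local optimum, while the same search started in the right interval finds the global optimum.

```python
def f(x):
    # Convex objective: unique unconstrained minimizer at x = 0
    return x * x

# Nonconvex feasible region: the union of two disjoint intervals
intervals = [(-3.0, -2.0), (1.0, 2.0)]

def local_min(start):
    # A local search never leaves the interval it starts in; within
    # that interval the convex objective is minimized by clipping the
    # unconstrained minimizer (x = 0) into the interval bounds.
    lo, hi = next((a, b) for a, b in intervals if a <= start <= b)
    return min(max(0.0, lo), hi)

x_left = local_min(-2.5)   # trapped at the local optimum x = -2 (f = 4)
x_right = local_min(1.5)   # reaches the global optimum x = 1 (f = 1)
```

With a convex feasible region (a single interval) this pathology disappears: every local optimum is global.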

Since both spot price and quantity are modeled as variables, the resulting turnover-maximization problem is quadratic. In the following, we show how a linear approximation of the turnover function can be obtained (see also Habla 2006). The approach exploits the concavity of the turnover function and the limited sales-quantity flexibility region to be considered. Approximation parameters are determined in a preprocessing phase from the sales input and control data. The preprocessing is structured in two phases, as shown in table 25 ... [Pg.162]
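One common way to linearize such a problem, sketched here with hypothetical inverse-demand parameters (`a`, `b`, and the sampling grid are illustrative assumptions, not values from the source): because a concave function equals the pointwise minimum of its tangent lines, tangents precomputed at a few sample quantities give a piecewise-linear model that is exact at the sample points and overestimates nowhere below the tangent envelope.

```python
# Hypothetical linear inverse demand: price p(q) = a - b*q
a, b = 100.0, 0.5
T = lambda q: (a - b * q) * q   # concave quadratic turnover T(q)

# Preprocessing: tangent lines at sample quantities. For a concave
# function, min over its tangents is a piecewise-linear upper model
# that coincides with T at each sample point.
grid = [0.0, 25.0, 50.0, 75.0, 100.0]
tangents = []
for q0 in grid:
    slope = a - 2 * b * q0                       # T'(q0)
    tangents.append((slope, T(q0) - slope * q0)) # (slope, intercept)

def T_approx(q):
    return min(s * q + c for s, c in tangents)
```

Maximizing `T_approx` instead of `T` turns the quadratic objective into a set of linear constraints of the form `t <= s*q + c`, which is the standard LP reformulation for concave maximization.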

This chapter discusses the elements of convex analysis which are very important in the study of optimization problems. In section 2.1 the fundamentals of convex sets are discussed. In section 2.2 the subject of convex and concave functions is presented, while in section 2.3 generalizations of convex and concave functions are outlined. [Pg.17]

Part 1, comprising three chapters, focuses on the fundamentals of convex analysis and nonlinear optimization. Chapter 2 discusses the key elements of convex analysis (i.e., convex sets, convex and concave functions, and generalizations of convex and concave functions), which are very important in the study of nonlinear optimization problems. Chapter 3 presents the first- and second-order optimality conditions for unconstrained and constrained nonlinear optimization. Chapter 4 introduces the basics of duality theory (i.e., the primal problem, the perturbation function, and the dual problem) and presents the weak and strong duality theorems along with the duality gap. Part 1 outlines the basic notions of nonlinear optimization and prepares the reader for Part 2. [Pg.466]

Unconstrained optimization deals with situations where the constraints can be eliminated from the problem by direct substitution into the objective function. Many optimization techniques rely on the solution of unconstrained subproblems. This subsection introduces the concepts of convexity and concavity, discusses unimodal versus multimodal functions, and examines single-variable and multi-variable optimization techniques. [Pg.135]
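For a twice-differentiable function of one variable, convexity and concavity can be checked from the sign of the second derivative. A small sketch using a central finite-difference estimate (the test functions are illustrative, not from the source):

```python
def second_derivative(f, x, h=1e-4):
    # Central finite-difference estimate of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

convex_f = lambda x: x ** 2      # f''(x) = 2 > 0 everywhere: convex
concave_f = lambda x: -x ** 4    # f''(x) = -12x^2 <= 0: concave

d2_convex = second_derivative(convex_f, 1.0)    # positive
d2_concave = second_derivative(concave_f, 1.0)  # negative
```

In several variables the analogous test is positive (negative) semidefiniteness of the Hessian, which is what the first- and second-order optimality conditions mentioned above build on.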

In eqns. (6.1) and (6.1a) a, b, c and d are constants. Because ...

... concave gradient (figure 6.2d) is optimal for LSC. [Pg.263]

The pattern of the variation of retention with composition in LC is affected by the choice of both the stationary and the mobile phase. The optimum shape of the gradient for unknown wide range samples is dictated by the phase system. Linear or slightly convex gradients are optimal for RPLC. Concave gradients are optimal for LSC. [Pg.266]

The combination of these two factors determines the required shape of an LSS gradient. Linear gradients were shown to result for RPLC in section 5.4, whereas a concave gradient was found to be optimal for LSC in section 6.2.2. [Pg.279]

A strategy for the optimization of gradient programs based on the actual retention behaviour of some sample components has been described by Jandera and Churáček [623, 624]. This approach relies on the ability to calculate retention and resolution under gradient conditions from known retention vs. composition relationships and plate numbers. Both typical RPLC (eqn. 3.45) and LSC (eqn. 3.74) relationships can be accommodated in the calculations, and linear, convex and concave gradients are all possible because a flexible equation is used to describe the gradient function. This equation reads... [Pg.281]
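The flexible equation itself is truncated here, so as a hedged illustration only: one common power-law gradient shape, phi(t) = phi0 + (phi_f - phi0) * (t / t_G)**kappa, yields linear, convex or concave profiles depending on the shape parameter kappa. This is a hypothetical stand-in for the referenced equation, not necessarily its exact form.

```python
def gradient_profile(t, t_g, phi0, phi_f, kappa):
    # Modifier fraction at time t for a power-law gradient shape:
    # kappa = 1 gives a linear gradient, kappa > 1 convex, kappa < 1 concave.
    return phi0 + (phi_f - phi0) * (t / t_g) ** kappa

# At the gradient midpoint, a convex profile lags the linear one
# and a concave profile runs ahead of it.
mid_linear = gradient_profile(5.0, 10.0, 0.0, 1.0, 1.0)   # 0.5
mid_convex = gradient_profile(5.0, 10.0, 0.0, 1.0, 2.0)   # 0.25
mid_concave = gradient_profile(5.0, 10.0, 0.0, 1.0, 0.5)  # ~0.707
```

A single parameter thus spans the linear, convex and concave gradient families the text refers to.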

The method of coordinatewise optimization was proposed for the simultaneous choice of flow rates and pressure losses on closed redundant schemes (Merenkov and Khasilev, 1985; Merenkov et al., 1992; Sumarokov, 1976). According to this method, motion to the minimum point of the economic functional F(x, Pbr) is performed alternately along the concave (F(x)) and convex (F(Pbr)) directions. The convex problem is solved by the dynamic programming method, and the concave one reduces to calculation of flow distribution. The pressure losses in this case are optimized on the tree obtained as a result of assumed flow shutoff at the end points of some branches. The concave problem is solved on the basis of entropy... [Pg.45]

An optimal flow profile has recently been achieved for capillary electrophoresis [76], where mobile-phase migration is driven by electroosmosis; this is the situation utilised for electrochromatography. For planar chromatography, the optimum linear flow velocity is approached when the convex shape of a forced-flow profile chiefly counterbalances the concave profile of the advancing meniscus, making it possible to reach optimal efficiency as a function of linear flow velocity [67]. This is demonstrated in Fig. 10.6. At the optimum of efficiency, the microflow profile is nearly linear, as the convex and concave forms of laminar flow and the concave form of the advancing meniscus counterbalance each other (Fig. 10.7). [Pg.472]

The profit changes significantly as the values of the optimization variables are changed. Naturally, no optimization is justified if the values of the optimization variables do not significantly influence profit. For the boiler example discussed above, similar boilers could have essentially the same efficiency relationship with the rate of steam production. If these efficiency curves were concave (as they usually are), optimum profit would be achieved at equal production by every boiler. [Pg.2587]
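The equal-production result can be checked numerically. In this sketch the per-boiler curve `g` and the total demand `Q` are hypothetical stand-ins (the source gives no specific curve): for two identical boilers with a strictly concave profit curve, scanning all splits of the total load confirms the maximum sits at the equal split.

```python
import math

# Hypothetical concave per-boiler profit curve (illustrative only)
g = lambda q: math.sqrt(q)

Q = 10.0  # total steam production to split between two identical boilers

def total_profit(q1):
    return g(q1) + g(Q - q1)

# Scan candidate splits; for identical strictly concave curves the
# maximum total profit occurs at the equal split q1 = Q/2.
splits = [Q * i / 100 for i in range(1, 100)]
best = max(splits, key=total_profit)
```

The same argument (equal marginal profit across units) extends to any number of identical boilers.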

Haines et al. (47) suggested including the Bayesian D-optimality criterion, which maximizes a concave function of the information matrix; in essence, this minimizes the generalized variance of the maximum likelihood estimators of the two parameters of the logistic regression. The authors underline that toxicity is recorded as an ordinal variable rather than a simple binary variable, and that the present design needs to be extended to proportional odds models. [Pg.792]

Problem ZDT2 has a concave Pareto optimal front. The problem is presented in Eq. (5.13). [Pg.143]
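Eq. (5.13) is not reproduced here, so as a sketch using the standard ZDT2 formulation (assumed, but the ZDT2 test problem is well established): f1(x) = x1, g(x) = 1 + 9 * sum(x2..xn)/(n-1), f2(x) = g * (1 - (f1/g)^2). The Pareto-optimal front occurs at g = 1, giving f2 = 1 - f1^2, which is concave.

```python
def zdt2(x):
    # Standard ZDT2 test problem; decision variables x_i in [0, 1].
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - (f1 / g) ** 2)
    return f1, f2

# Pareto-optimal solutions have x_2 = ... = x_n = 0, so g = 1 and
# the front is the concave curve f2 = 1 - f1**2.
f1_opt, f2_opt = zdt2([0.5] + [0.0] * 29)
```

The concavity of this front is what makes ZDT2 a useful stress test: weighted-sum scalarization cannot recover points on the interior of a concave Pareto front.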

