Big Chemical Encyclopedia


Subgradient

The weighted mean curvature is the interface divergence of the Cahn-Hoffman ξ-vector evaluated on the unit sphere. The interface divergence is defined within the interface, and if the interface is not differentiable, subgradients must be used. The convex portion of the ξ-plot is equivalent to the Wulff shape, so the interface divergence operates from one interface onto another. This form can become very complicated. [Pg.611]

This section presents (i) the definitions and properties of convex and concave functions, (ii) the definitions of continuity, semicontinuity and subgradients, (iii) the definitions and properties of differentiable convex and concave functions, and (iv) the definitions and properties of local and global extremum points. [Pg.24]

Remark 1 The right-hand side of the above inequality (2.1) is a linear function in x and represents the first-order Taylor expansion of f(x) around x° using the vector d instead of the gradient vector of f(x) at x°. Hence, d is a subgradient of f(x) at x° if and only if the first-order Taylor approximation always provides an underestimation of f(x) for all x. [Pg.30]
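The underestimation property in Remark 1 can be checked numerically. The sketch below (the function f(x) = |x| and the test grid are illustrative choices, not from the text) verifies that every d in [-1, 1], the subdifferential of |x| at x° = 0, gives a linear function lying below f everywhere, while slopes outside that interval do not:

```python
# Numerical check of Remark 1 for f(x) = |x| at x0 = 0, where the
# subdifferential is the whole interval [-1, 1] (illustrative example).
def f(x):
    return abs(x)

def taylor_with_subgradient(x, x0, d):
    # First-order expansion with a subgradient d in place of the gradient
    return f(x0) + d * (x - x0)

x0 = 0.0
grid = [i / 10.0 for i in range(-50, 51)]
for d in (-1.0, -0.5, 0.0, 0.7, 1.0):       # valid subgradients at 0
    assert all(f(x) >= taylor_with_subgradient(x, x0, d) for x in grid)
for d in (-1.5, 2.0):                        # not subgradients at 0
    assert any(f(x) < taylor_with_subgradient(x, x0, d) for x in grid)
print("underestimation holds exactly for d in [-1, 1]")
```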

Remark 5 This property provides a sufficient condition for a subgradient. [Pg.79]

Remark 6 The geometrical interpretation of the primal and dual problems clarifies the weak and strong duality theorems. More specifically, in the vicinity of y = 0, the perturbation function v(y) becomes the z3-ordinate of the image set I when z1 and z2 equal y. In Figure 4.1, this ordinate does not decrease infinitely steeply as y deviates from zero. The slope of the supporting hyperplane to the image set I at the point P, (-p1, -p2), corresponds to the subgradient of the perturbation function v(y) at y = 0. [Pg.84]

Remark 7 An instance of an unstable problem (P) is shown in Figure 4.2. The image set I is tangent to the z3-ordinate at the point P. In this case, the supporting hyperplane is vertical, and the value of the perturbation function v(y) decreases infinitely steeply as y begins to increase above zero. Hence, there does not exist a subgradient at y = 0. In this case, the strong duality theorem does not hold, while the weak duality theorem holds. [Pg.84]
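The unstable case of Remark 7 can be reproduced with a one-dimensional toy perturbation function of my choosing (not taken from the text): v(y) = -√y for y ≥ 0 is convex but drops infinitely steeply at y = 0, so no candidate slope d can satisfy the subgradient inequality v(y) ≥ v(0) + d·y for all y:

```python
# For v(y) = -sqrt(y), y >= 0, every candidate slope d -- no matter how
# steeply negative -- is violated by some sufficiently small y > 0,
# so no subgradient exists at y = 0 (strong duality fails there).
import math

def v(y):
    return -math.sqrt(y)

for d in (0.0, -1.0, -100.0, -1e6):
    small_ys = [10.0 ** -e for e in range(1, 16)]
    # The subgradient inequality v(y) >= v(0) + d*y fails somewhere:
    assert any(v(y) < v(0.0) + d * y for y in small_ys)
print("no subgradient exists at y = 0")
```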

If the solution can be improved and the limit on the number of Lagrangian iterations is not reached, compute new costs using subgradient optimization or some other methodology and go to step 2. [Pg.809]

A Lagrangian relaxation heuristic algorithm that solves the Lagrangian dual problem (35)-(37) is presented next. It uses subgradient optimization to compute the Lagrange multipliers λ. [Pg.811]

Compute new values of the Lagrange multipliers λ using subgradient optimization and go to... [Pg.811]

A subgradient optimization algorithm (Ahuja et al. 1993; Crowder 1976) is used in step 5 to compute an improved Lagrange multiplier vector and is described below. [Pg.811]

Given an initial Lagrange multiplier vector λ^0, the subgradient optimization algorithm generates a sequence of vectors λ^1, λ^2, λ^3, ... If λ^k is the Lagrange multiplier already obtained, λ^(k+1) is generated by the rule... [Pg.811]
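As a hedged sketch of such an update rule (the problem data and the diminishing step length θ_k = 1/k are illustrative assumptions, not from the text), the loop below dualizes the single ≤ constraint of a tiny 0-1 problem and applies λ^(k+1) = max(0, λ^k + θ_k g^k), where g^k is the violation of the dualized constraint at the relaxation's minimizer:

```python
# Subgradient update for a dualized constraint (illustrative data):
# min c.x  subject to  a.x <= b,  x in {0,1}^3.
c = [-4.0, -3.0, -5.0]
a = [2.0, 1.0, 3.0]
b = 4.0

def solve_relaxation(lam):
    # For fixed lambda the Lagrangian minimum separates by component:
    # set x_j = 1 exactly when its reduced cost c_j + lambda * a_j is negative.
    return [1 if cj + lam * aj < 0 else 0 for cj, aj in zip(c, a)]

lam = 0.0                                            # lambda^0
for k in range(1, 101):
    x = solve_relaxation(lam)
    g = sum(ai * xi for ai, xi in zip(a, x)) - b     # subgradient of the dual
    lam = max(0.0, lam + (1.0 / k) * g)              # projected step, theta_k = 1/k
print(round(lam, 2))   # oscillates toward the dual optimum 5/3 for this data
```

The projection max(0, ·) keeps the multiplier of a dualized inequality nonnegative; a multiplier of a dualized equality would be left unprojected.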

Crowder, H. (1976), Computational Improvements for Subgradient Optimization, Symposia Mathematica, Vol. 19, pp. 357-372. [Pg.823]

Nonsmooth or nondifferentiable optimization plays an important role in large-scale programming and addresses mathematical programming problems in which the functions involved have discontinuous first derivatives. Thus, classical methods that rely on gradient information fail to solve these problems, and alternative nonstandard approaches must be used. These alternative methods include subgradient methods and bundle methods. The interested reader is referred to Shor (1985), Zowe (1985), and Fletcher (1987, pp. 357-414). [Pg.2562]
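A minimal subgradient method on a function with a discontinuous first derivative, f(x) = |x - 3| (an example of my choosing): gradient-based reasoning fails at the kink x = 3, but stepping along any subgradient with diminishing step lengths still drives the best value found toward the minimum:

```python
# Subgradient descent on the nondifferentiable f(x) = |x - 3|.
# The minimizer is the kink x = 3, where no gradient exists but any
# g in [-1, 1] is a subgradient; we pick the sign of (x - 3).
def f(x):
    return abs(x - 3.0)

def a_subgradient(x):
    if x > 3.0:
        return 1.0
    if x < 3.0:
        return -1.0
    return 0.0              # any value in [-1, 1] is valid at the kink

x, best = 5.0, float("inf")
for k in range(1, 1001):
    best = min(best, f(x))
    x -= (1.0 / k) * a_subgradient(x)   # diminishing steps: sum 1/k diverges
print("best value below 0.01:", best < 1e-2)
```

Unlike gradient descent, the objective need not decrease at every step, so the best value seen so far is tracked explicitly; this "best-iterate" bookkeeping is standard for subgradient methods.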

The simplest, subgradient search, is effective in many settings. Assume the given discrete problem has been expressed as a minimize (ILP) with all inequalities ≤, I_= the collection of dualized equality row numbers, I_≤ the collection of dualized (≤) inequalities, and u_i, i ∈ I_= ∪ I_≤, the Lagrange multipliers. Then subgradient search updates multipliers ... [Pg.2589]

Finding the p that solves min_{p ≥ 0} V_p(N) can be accomplished using the subgradient algorithm. Suppose the value of the Lagrange multiplier p at iteration... [Pg.273]

To avoid this exposure problem we could, as is described by de Vries et al. [25], apply the subgradient algorithm to a stronger formulation, say, CAPS. We relax the constraints y(S, j) ≤ ... Let p_j(S) ≥ 0 be the corresponding... [Pg.275]

Not always. Movement along a subgradient might require that a price be decreased. The price decrease step is sometimes omitted or appears indirectly when bidders are allowed to withdraw bids. [Pg.287]

The variables π(x, t) are the dual prices associated with the nonoverlapping constraints (for the position (x, t) in space). With fixed π(x, t), the inner optimization problem is trivial. We use a version of the subgradient algorithm, known as the volume algorithm in the literature (cf. Barahona and Anbil (2000)), to update the dual prices and to solve the above problem. To make the problem manageable so that the lower bound can be obtained within reasonable time, the space-time network is also discretized to moderate sizes. Note that the discretized version of the problem will still provide us with a lower bound for the static berth planning problem. [Pg.93]
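A hedged sketch of the volume-algorithm idea (all problem data below is illustrative, and the real algorithm of Barahona and Anbil includes step-control logic omitted here): alongside a projected subgradient step on the dual price, a running convex combination of the relaxation's primal solutions is maintained, and this average tends toward an approximately feasible primal point:

```python
# Volume-style sketch: subgradient dual update plus primal averaging.
# Toy problem: min -x1 - x2  s.t.  x1 + x2 <= 1,  x in {0,1}^2.
c, a, b = [-1.0, -1.0], [1.0, 1.0], 1.0
alpha = 0.1                                  # averaging weight (assumed)

def solve_relaxation(lam):
    # Component-wise Lagrangian minimum: include j when c_j + lam*a_j < 0.
    return [1 if cj + lam * aj < 0 else 0 for cj, aj in zip(c, a)]

lam, x_bar = 0.0, [0.0, 0.0]
for k in range(1, 201):
    x = solve_relaxation(lam)
    # Running convex combination of primal solutions (the "volume" idea):
    x_bar = [alpha * xi + (1 - alpha) * xbi for xi, xbi in zip(x, x_bar)]
    g = sum(ai * xi for ai, xi in zip(a, x)) - b   # constraint violation
    lam = max(0.0, lam + (1.0 / k) * g)            # projected dual step
print(round(lam, 2), round(sum(x_bar), 2))
```

Here the raw relaxation solutions jump between the infeasible [1, 1] and the trivial [0, 0], while their running average settles near the constraint boundary x1 + x2 = 1, which is the practical payoff of the primal averaging.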

F. Barahona and R. Anbil (2000). The volume algorithm: producing primal solutions with a subgradient method. Mathematical Programming, 87, 385-399. [Pg.103]

