Big Chemical Encyclopedia


Gradients reduced gradient

This method of optimization is known as the generalized reduced-gradient (GRG) method. The objective function and the constraints are linearized in a piecewise fashion, so that a series of straight-line segments is used to approximate them. Many computer codes are available for these methods. Two widely used ones are the GRGA code (49) and the GRG2 code (50). [Pg.79]

The following strategies are all examples of Generalized Reduced Gradient (GRG) methods. [Pg.485]

The calculations begin with given values for the independent variables u and exit with the (constrained) derivatives of the objective function with respect to them. Use the routine described above for the unconstrained problem, in which a succession of quadratic fits is used to move toward the optimal point. This approach is a form of the generalized reduced gradient (GRG) approach to optimization, one of the better ways to carry out optimization numerically. [Pg.486]
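As a rough illustration of the "succession of quadratic fits" idea, the sketch below shows one step of successive parabolic interpolation; this is a generic derivative-free line-search step, not the routine referenced above, and the function and trial points are invented for the example.

```python
def quadratic_fit_step(a, fa, b, fb, c, fc):
    """Minimizer of the parabola through (a, fa), (b, fb), (c, fc).

    One step of successive parabolic interpolation; a simple line search
    repeats this, keeping the best three points seen so far."""
    num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    return b - 0.5 * num / den   # caller should guard against den == 0

# f is exactly quadratic here, so a single fit lands on the minimum x = 2.
f = lambda x: (x - 2.0) ** 2
print(quadratic_fit_step(0.0, f(0.0), 1.0, f(1.0), 3.0, f(3.0)))   # -> 2.0
```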

The relationship between the film thickness of hexadecane with the addition of cholesteryl LCs and the rolling speed under different pressures is shown in Fig. 25 [50], where the straight line is the theoretical film thickness calculated from the Hamrock-Dowson formula based on the bulk viscosity under a pressure of 0.174 GPa. It can be seen that for all lubricants, when the speed is high, lubrication is in the EHL regime and a speed index φ of about 0.67 is produced. When the rolling speed decreases and the film thickness falls to about 30 nm, the static adsorption film and the ordered fluid film are no longer negligible, the gradient falls below 0.67, and the transition from EHL to TFL occurs. For pure hexadecane, owing to the weak interaction between hexadecane molecules and metal surfaces, the static and ordered films are very thin. EHL... [Pg.45]

Generalized reduced gradient with branch and bound Bickel et al. (B7)... [Pg.172]

A generalization of this method, known as the generalized reduced gradient (GRG) method, is treated by Himmelblau (H4) and discussed in Section IV,B,3. [Pg.175]

The minimal cost design problem formulated above was solved by Bickel et al. (B7) using the generalized reduced gradient (GRG) method of Abadie and Guigou (H4). If x and u are vectors of state and decision (independent) variables and f(x, u) is the objective function in a minimization subject to constraints [Eq. (90)], then the reduced gradient df/du is given by... [Pg.183]
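The excerpt breaks off before the formula itself; for reference, the standard reduced-gradient expression for this partition of the variables is given below (a textbook reconstruction with f the objective and h the equality constraints, not necessarily the notation of the original source):

$$
\frac{\mathrm{d}f}{\mathrm{d}\mathbf{u}}
  = \frac{\partial f}{\partial \mathbf{u}}
  - \left(\frac{\partial \mathbf{h}}{\partial \mathbf{u}}\right)^{T}
    \left(\frac{\partial \mathbf{h}}{\partial \mathbf{x}}\right)^{-T}
    \frac{\partial f}{\partial \mathbf{x}}
$$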

The initial search is in the direction of steepest descent given by the reduced gradient, z0 say. Subsequent search directions sk+1 are generated by a conjugate direction formula (F6), ... [Pg.183]
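The conjugate-direction formula itself is not reproduced in the excerpt; the usual Fletcher-Reeves form, written in the notation above (a standard formula that may differ in detail from (F6)), is:

$$
s_{k+1} = -z_{k+1} + \frac{\lVert z_{k+1}\rVert^{2}}{\lVert z_{k}\rVert^{2}}\, s_{k},
\qquad s_{0} = -z_{0}
$$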

MINOS (Murtagh and Saunders, Technical Report SOL 83-20R, Stanford University, 1987) is a well-implemented package that offers a variation on reduced gradient strategies. At iteration k, Eq. (3-105d) is replaced by its linearization... [Pg.64]

CONOPT (Drud, 1994) Reduced gradient Line search Exact and quasi-Newton... [Pg.65]

MINOS Reduced gradient, augmented Lagrangian Line search Quasi-Newton [Pg.65]

The MINLP problems were implemented in GAMS [7, 8] and solved by the outer approximation/equality relaxation/augmented penalty method [9] as implemented in DICOPT. The algorithm generates a series of NLP and MILP subproblems, which were solved by the generalized reduced gradient method [10] as implemented in CONOPT and by the integrality-relaxation-based branch and cut method as... [Pg.155]

In many cases the equality constraints may be used to eliminate some of the variables, leaving a problem with only inequality constraints and fewer variables. Even if the equalities are difficult to solve analytically, it may still be worthwhile solving them numerically. This is the approach taken by the generalized reduced gradient method, which is described in Section 8.7. [Pg.126]
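A minimal, hypothetical illustration of this elimination idea (the problem below is invented and is not the example treated in the source): solve the equality constraint for one variable, then minimize the resulting unconstrained reduced function.

```python
from scipy.optimize import minimize_scalar

# Problem: minimize f(x1, x2) = x1**2 + 2*x2**2  subject to  x1 + x2 = 3.
# Eliminating x1 = 3 - x2 gives the reduced objective F(x2) = (3 - x2)**2 + 2*x2**2.

def reduced_objective(x2):
    x1 = 3.0 - x2                      # dependent variable from the constraint
    return x1 ** 2 + 2.0 * x2 ** 2

res = minimize_scalar(reduced_objective)
x2_opt = res.x
x1_opt = 3.0 - x2_opt
print(x1_opt, x2_opt)                  # analytically: x1 = 2, x2 = 1
```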

The constraint in the original problem has now been eliminated, and f(x2) is an unconstrained function with 1 degree of freedom (one independent variable). Using constraints to eliminate variables is the main idea of the generalized reduced gradient method, as discussed in Section 8.7. [Pg.265]

If dof(x) = n - act(x) = d > 0, then there are more problem variables than active constraints at x, so the n - d active constraints can be solved for n - d dependent or basic variables, each of which depends on the remaining d independent or nonbasic variables. Generalized reduced gradient (GRG) algorithms use the active constraints at a point to solve for an equal number of dependent or basic variables in terms of the remaining independent ones, as does the simplex method for LPs. [Pg.295]
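The linear-algebra step behind this partition can be sketched as follows. The function name and the assumption that the basic Jacobian block is square and nonsingular are illustrative only; a real GRG code adds bound handling, active-set updates, and a Newton phase that restores feasibility after each move.

```python
import numpy as np

def reduced_gradient(grad_f_x, grad_f_y, J_x, J_y):
    """Reduced gradient df/dy for: min f(x, y) subject to h(x, y) = 0,
    where x holds the basic (dependent) variables and y the nonbasic ones.
    J_x = dh/dx (square, nonsingular) and J_y = dh/dy are Jacobian blocks.
    Formula: df/dy = grad_f_y - J_y^T (J_x^T)^{-1} grad_f_x."""
    lam = np.linalg.solve(J_x.T, grad_f_x)   # multiplier-like vector from the basic block
    return grad_f_y - J_y.T @ lam

# Check against the elimination example above: f = x**2 + 2*y**2, h = x + y - 3 = 0.
# At the feasible point (x, y) = (2, 1) the reduced gradient is zero, i.e. the optimum.
g = reduced_gradient(np.array([4.0]), np.array([4.0]),
                     np.array([[1.0]]), np.array([[1.0]]))
print(g)   # -> [0.]
```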

The generalized reduced gradient (GRG) algorithm was first developed in the late 1960s by Jean Abadie (Abadie and Carpentier, 1969) and has since been refined by several other researchers. In this section we discuss the fundamental concepts of GRG and describe the version of GRG that is implemented in GRG2, the most widely available nonlinear optimizer (Lasdon et al., 1978; Lasdon and Waren, 1978; Smith and Lasdon, 1992). [Pg.306]

Because the reduced problem is unconstrained and quite simple, it can be solved either analytically or by the iterative descent algorithm described earlier. First, let us solve the problem analytically. We set the gradient of F(y), called the reduced gradient, to zero, giving ... [Pg.308]

To solve this first reduced problem, follow the steps of the descent algorithm outlined at the start of this section with some straightforward modifications that account for the bounds on x and y. When a nonbasic variable is at a bound, we must decide whether it should be allowed to leave the bound or be forced to remain at that bound for the next iteration. Those nonbasic variables that will not be kept at their bounds are called superbasic variables [this term was coined by Murtagh and Saunders (1982)]. In step 1 the reduced gradient of f(x, y) is... [Pg.310]
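The bound-release decision described here can be stated compactly; the sketch below expresses the rule for a minimization and is illustrative only, not GRG2's actual logic.

```python
def release_from_bound(reduced_grad_j, at_lower_bound):
    """Should a nonbasic variable sitting at a bound become superbasic?

    For a minimization, release it only if the one feasible move (up from a
    lower bound, down from an upper bound) would decrease the objective."""
    if at_lower_bound:
        return reduced_grad_j < 0.0   # moving up lowers the objective
    return reduced_grad_j > 0.0       # at upper bound: moving down lowers it
```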

The variable y becomes superbasic. Because s is at its lower bound of zero, consider whether s should be allowed to leave its bound, that is, become a superbasic variable. Because its reduced gradient term is positive, increasing s (which is the only feasible change for s) increases the objective value. Because we are minimizing F, we fix s at zero; this corresponds to staying on the line x = y. The search direction d and new values for y are generated from... [Pg.311]

First, the element of the reduced gradient corresponding to the superbasic variable y is zero. Second, because the reduced-gradient component for s (the derivative with respect to s) is 1, increasing s (the only feasible change to s) causes an increase in the objective value. These are the two necessary conditions for optimality for this reduced problem, and the algorithm terminates at (1.5, 1.5) with an objective value of 2.0. [Pg.312]
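Written out, the optimality test applied here consists of the standard first-order conditions for a bound-constrained reduced problem (stated generically, with g_j the j-th reduced-gradient component):

$$
g_j = 0 \ \text{(superbasic variables)}, \qquad
g_j \ge 0 \ \text{(nonbasics at lower bounds)}, \qquad
g_j \le 0 \ \text{(nonbasics at upper bounds)}.
$$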

GRG determines a new value for y as before, by choosing a search direction d and then a step size α. Because this is the first iteration for the current reduced problem, the direction d is the negative reduced gradient. The line search subroutine in GRG chooses an initial value for α. At (0.697, 1.517), d = 1.508 and the initial value for α is 0.050. Thus the first new value for y, say y, is... [Pg.313]
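The update itself is cut off in the excerpt; the step GRG takes has the usual line-search form (shown generically, without the source's numerical values):

$$
y_{\text{new}} = y_{\text{old}} + \alpha\, d
$$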

At the initial point, the preceding nonlinear constraint is inactive, the reduced objective is just sinf xy y), and the reduced gradient is... [Pg.317]

The initial search direction is, as usual, the negative reduced gradient direction so d = [0 2] and we move from (0.75,0) straight up toward the line x + y = 1. The output from GRG is shown in the following box. [Pg.317]

GRG2. This code is presently the most widely distributed implementation of the generalized reduced gradient method, and its operation is explained in Section 8.7. In addition to its use as a stand-alone system, it is the optimizer employed by the Solver optimization options within the spreadsheet programs Microsoft Excel, Novell's Quattro Pro, and Lotus 1-2-3, and by the GINO interactive solver. [Pg.320]

Abadie, J. and J. Carpentier. Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints. In Optimization, R. Fletcher, ed. Academic Press, New York (1969), pp. 37-47. [Pg.328]

