Big Chemical Encyclopedia


Generalized reduced gradient

This method of optimization is known as the generalized reduced-gradient (GRG) method. The objective function and the constraints are linearized in a piecewise fashion so that a series of straight-line segments is used to approximate them. Many computer codes are available for these methods. Two widely used ones are the GRGA code (49) and the GRG2 code (50). [Pg.79]
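The piecewise linearization mentioned above amounts to replacing a nonlinear function by its tangent line at the current point. A minimal sketch, using a hypothetical toy function f(x) = x² (not from the source), shows the first-order Taylor approximation that produces each straight-line segment:

```python
def f(x):
    return x ** 2

def fprime(x):
    return 2.0 * x

def linearize(x0):
    """Tangent-line (first-order Taylor) approximation of f at x0:
    f(x) ~ f(x0) + f'(x0) * (x - x0)."""
    return lambda x: f(x0) + fprime(x0) * (x - x0)

seg = linearize(1.0)   # tangent at x0 = 1 is the line 2x - 1
print(seg(1.5))        # approximation at x = 1.5 (true f(1.5) is 2.25)
```

Repeating this at a sequence of base points x0 yields the piecewise-linear approximation the text describes.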

Generalized reduced gradient with branch and bound, Bickel et al. (B7)... [Pg.172]

A generalization of this method, known as the generalized reduced gradient (GRG) method, is treated by Himmelblau (H4) and discussed in Section IV,B,3. [Pg.175]

The minimal cost design problem formulated above was solved by Bickel et al. (B7) using the generalized reduced gradient (GRG) method of Abadie and Guigou (H4). If x and u are vectors of state and decision (independent) variables and f(x, u) is the objective function in a minimization subject to constraints [Eq. (90)], then the reduced gradient df/du is given by... [Pg.183]
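The reduced gradient combines the explicit dependence of the objective on u with the implicit dependence through the state variables x, eliminated via the constraints: df/du = ∂f/∂u − (∂f/∂x)(∂h/∂x)⁻¹(∂h/∂u). A minimal numerical sketch, on a hypothetical toy problem (minimize f = x² + u² subject to h = x + u − 1 = 0, not from the source):

```python
# Toy problem (hypothetical, for illustration only):
#   minimize f(x, u) = x**2 + u**2   subject to   h(x, u) = x + u - 1 = 0
# Reduced gradient: df/du = df/du - (df/dx) * (dh/dx)^(-1) * (dh/du)

def reduced_gradient(x, u):
    dfdx = 2.0 * x    # partial of f w.r.t. state variable x
    dfdu = 2.0 * u    # partial of f w.r.t. decision variable u
    dhdx = 1.0        # partial of constraint h w.r.t. x
    dhdu = 1.0        # partial of constraint h w.r.t. u
    return dfdu - dfdx * (1.0 / dhdx) * dhdu

# At the feasible point u = 0.2, x = 1 - u = 0.8 the reduced gradient
# equals 4u - 2 = -1.2, matching direct differentiation of f(1-u, u).
print(reduced_gradient(0.8, 0.2))
```

At the constrained optimum (u = 0.5, x = 0.5) the reduced gradient vanishes, which is the stationarity condition GRG drives toward.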

The MINLP-problems were implemented in GAMS [7, 8] and solved by the outer approximation/equality relaxation/augmented penalty-method [9] as implemented in DICOPT. The algorithm generates a series of NLP and MILP subproblems, which were solved by the generalized reduced gradient method [10] as implemented in CONOPT and the integrality relaxation based branch and cut method as... [Pg.155]

In many cases the equality constraints may be used to eliminate some of the variables, leaving a problem with only inequality constraints and fewer variables. Even if the equalities are difficult to solve analytically, it may still be worthwhile solving them numerically. This is the approach taken by the generalized reduced gradient method, which is described in Section 8.7. [Pg.126]

The constraint in the original problem has now been eliminated, and fix2) is an unconstrained function with 1 degree of freedom (one independent variable). Using constraints to eliminate variables is the main idea of the generalized reduced gradient method, as discussed in Section 8.7. [Pg.265]
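The elimination idea can be sketched on a small hypothetical problem (not the one from the source): minimize f(x1, x2) = x1² + 2·x2² subject to x1 + x2 = 3. Solving the equality for x1 leaves an unconstrained function of x2 alone, with one degree of freedom:

```python
# Hypothetical example: minimize x1**2 + 2*x2**2 subject to x1 + x2 = 3.
# The equality constraint gives x1 = 3 - x2, eliminating one variable.

def f_reduced(x2):
    x1 = 3.0 - x2          # dependent variable from the equality constraint
    return x1 ** 2 + 2.0 * x2 ** 2

# Setting d f_reduced / d x2 = -2*(3 - x2) + 4*x2 = 6*x2 - 6 = 0
# gives the unconstrained minimum x2 = 1 (so x1 = 2, f = 6).
print(f_reduced(1.0))
```

The original two-variable constrained problem has become a one-variable unconstrained one, which is exactly the reduction GRG performs at each iteration.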

If dof(x) = n − act(x) = d > 0, then there are more problem variables than active constraints at x, so the n − d active constraints can be solved for n − d dependent or basic variables, each of which depends on the remaining d independent or nonbasic variables. Generalized reduced gradient (GRG) algorithms use the active constraints at a point to solve for an equal number of dependent or basic variables in terms of the remaining independent ones, as does the simplex method for LPs. [Pg.295]
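For linear active constraints the basic/nonbasic partition is a direct linear solve. A sketch with hypothetical data (n = 3 variables, 2 active constraints, so d = 1 degree of freedom): partition A x = b into basic columns B and nonbasic columns N, then xB = B⁻¹(b − N·xN):

```python
import numpy as np

# Hypothetical active constraints A @ x = b with n = 3 variables and
# 2 active constraints, leaving d = 1 independent (nonbasic) variable.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([6.0, 5.0])

B, N = A[:, :2], A[:, 2:]    # basic columns (x1, x2), nonbasic column (x3)

def basic_vars(xN):
    """Solve the active constraints for the basic variables given xN."""
    return np.linalg.solve(B, b - N @ xN)

xN = np.array([1.0])
xB = basic_vars(xN)          # xB = [2, 3]; the full point [2, 3, 1] is feasible
print(xB)
```

As the nonbasic variable moves, re-solving for xB keeps the point on the active constraints, which is how GRG maintains feasibility while searching in the reduced space.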

The generalized reduced gradient (GRG) algorithm was first developed in the late 1960s by Jean Abadie (Abadie and Carpentier, 1969) and has since been refined by several other researchers. In this section we discuss the fundamental concepts of GRG and describe the version of GRG that is implemented in GRG2, the most widely available nonlinear optimizer (Lasdon et al., 1978; Lasdon and Waren, 1978; Smith and Lasdon, 1992). [Pg.306]

GRG2. This code is presently the most widely distributed implementation of the generalized reduced gradient method, and its operation is explained in Section 8.7. In addition to its use as a stand-alone system, it is the optimizer employed by the Solver optimization options within the spreadsheet programs Microsoft Excel, Novell's Quattro Pro, and Lotus 1-2-3, and by the GINO interactive solver. [Pg.320]

Lasdon, L. S. and A. D. Waren. Generalized Reduced Gradient Software for Linearly and Nonlinearly Constrained Problems. Design and Implementation of Optimization Software, H. J. Greenberg, ed., Sijthoff and Noordhoff, Holland (1978), pp. 363-397. [Pg.328]

Solve the following problems by the generalized reduced-gradient method. Also, count the number of function evaluations, gradient evaluations, constraint evaluations, and evaluations of the gradient of the constraints. [Pg.336]
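The bookkeeping the exercise asks for (counting function and gradient evaluations) is easiest with counted wrappers around the problem functions. A minimal sketch with a hypothetical one-variable objective and a crude steepest-descent loop standing in for the optimizer (GRG itself is not shown):

```python
class Counted:
    """Wrap a callable so every evaluation is tallied."""
    def __init__(self, fn):
        self.fn = fn
        self.calls = 0
    def __call__(self, *args):
        self.calls += 1
        return self.fn(*args)

f      = Counted(lambda x: (x - 2.0) ** 2)     # hypothetical objective
grad_f = Counted(lambda x: 2.0 * (x - 2.0))    # its gradient

# Crude fixed-step steepest descent, just to exercise the counters;
# constraint and constraint-gradient wrappers would be added the same way.
x = 0.0
for _ in range(50):
    x -= 0.4 * grad_f(x)

print(f"x* = {x:.6f}, f(x*) = {f(x):.2e}, "
      f"f evals = {f.calls}, grad evals = {grad_f.calls}")
```

Any solver driven through such wrappers reports its evaluation counts for free, which is what the comparison in the exercise requires.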

The nonlinear programming problem based on objective function (f), model equations (b)-(g), and the inequality constraints was solved using the generalized reduced gradient method presented in Chapter 8. See Setalvad and coworkers (1989) for details on the parameter values used in the optimization calculations, the results of which are presented here. [Pg.504]

The solution listed in Table E15.2B was obtained from several nonfeasible starting points, one of which is shown in Table E15.2C, by the generalized reduced gradient method. [Pg.535]

Lasdon, L. S., and Waren, A. D. (1978). Generalized reduced gradient software for linearly and nonlinearly constrained problems. In Design and Implementation of Optimisation Software (H. Greenberg, ed.), p. 335. Sijthoff, Holland. [Pg.110]

The following strategies are all examples of Generalized Reduced Gradient (GRG) methods. [Pg.312]

The calculations begin with given values for the independent variables u and exit with the (constrained) derivatives of the objective function with respect to them. The routine described above for the unconstrained problem, in which a succession of quadratic fits is used to move toward the optimal point, can then be applied. This approach is a form of the generalized reduced gradient (GRG) approach to optimization, one of the better ways to carry out optimization numerically. [Pg.313]
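The succession of quadratic fits mentioned above can be sketched as follows: fit a parabola through three points, jump to its vertex, and keep the three best points. This is a minimal hypothetical helper, not the routine from the source, shown on a toy function:

```python
def quad_fit_min(f, pts, iters=30):
    """Repeatedly fit a parabola through three points and move to its vertex."""
    pts = sorted(pts)
    for _ in range(iters):
        a, b, c = pts
        fa, fb, fc = f(a), f(b), f(c)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if abs(den) < 1e-14:          # degenerate fit: points have collapsed
            break
        # vertex of the interpolating parabola through (a,fa), (b,fb), (c,fc)
        x = b - 0.5 * ((b - a) ** 2 * (fb - fc)
                       - (b - c) ** 2 * (fb - fa)) / den
        pts = sorted(sorted(pts + [x], key=f)[:3])   # keep the 3 best points
    return min(pts, key=f)

xstar = quad_fit_min(lambda x: (x - 1.5) ** 2, [0.0, 1.0, 3.0])
print(xstar)   # for a quadratic the first fit is already exact: 1.5
```

For a quadratic objective a single fit recovers the minimum exactly; for general smooth functions the fits converge rapidly near the optimum, which is why this scheme is used as the inner one-dimensional search.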

In this section, the important aspects of the mathematical basis for optimization methods are described. This will provide the necessary background to understand the most widely used method, LP. Then descriptions of two more effective NLP methods are outlined: the generalized reduced gradient method and the successive LP method. Then methods for mixed-integer and multicriteria optimization problems are summarized. [Pg.2442]

There are essentially six types of procedures to solve constrained nonlinear optimization problems. The three methods considered most successful are successive LP, successive quadratic programming, and the generalized reduced gradient method. These methods use different strategies but the same information to move from a starting point to the optimum: the first partial derivatives of the economic model and of the constraints, evaluated at the current point. Successive LP is used in a number of solvers including MINOS. Successive quadratic programming is the method of... [Pg.2445]



