
Optimization reduced gradient algorithm

The generalized reduced gradient (GRG) algorithm was first developed in the late 1960s by Jean Abadie (Abadie and Carpentier, 1969) and has since been refined by several other researchers. In this section we discuss the fundamental concepts of GRG and describe the version of GRG that is implemented in GRG2, the most widely available nonlinear optimizer [Lasdon et al., 1978; Lasdon and Waren, 1978; Smith and Lasdon, 1992]. [Pg.306]

First, the element of the reduced gradient corresponding to the superbasic variable y is zero. Second, because the reduced gradient with respect to s (the derivative of the objective with respect to s) is 1, increasing s (the only feasible change to s) increases the objective value. These are the two necessary conditions for optimality of this reduced problem, and the algorithm terminates at (1.5, 1.5) with an objective value of 2.0. [Pg.312]
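These two termination conditions can be stated compactly for a linearly constrained problem, minimize f(x) subject to Ax = b, with the variables split into basic and nonbasic/superbasic sets. The following Python sketch is only an illustration of the idea; the function names, the tolerance, and the boolean bound mask are assumptions, not part of the text.

```python
import numpy as np

def reduced_gradient(grad, A, basic, nonbasic):
    """Reduced gradient of f for min f(x) s.t. A x = b.

    grad     : full gradient of f at the current point
    A        : constraint matrix (m x n)
    basic    : indices of the m basic variables (B = A[:, basic] nonsingular)
    nonbasic : indices of the remaining superbasic/nonbasic variables
    """
    B = A[:, basic]
    N = A[:, nonbasic]
    # Multiplier estimate pi from B^T pi = grad_B, then r = grad_N - N^T pi.
    pi = np.linalg.solve(B.T, grad[basic])
    return grad[nonbasic] - N.T @ pi

def satisfies_reduced_optimality(r, at_lower_bound, tol=1e-8):
    # Superbasic components (free to move) must have zero reduced gradient;
    # components at a lower bound must be nonnegative, i.e. the only feasible
    # change (an increase) would raise the objective.
    free_ok = np.all(np.abs(r[~at_lower_bound]) <= tol)
    bound_ok = np.all(r[at_lower_bound] >= -tol)
    return free_ok and bound_ok
```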

Subsequent reduction of J is achieved as follows. Based on the improved controls and final time (namely, u_next and tf,next), Steps 1-3 of Section 7.1.2 (p. 186) are repeated, and ∇J is recalculated and utilized to repeat the improvements and reduce J. This iterative procedure is continued until the reduction in J becomes insignificant or the norm of ∇J becomes negligible. This minimization procedure is known as the gradient algorithm. It affords a simple and effective way to solve a wide range of optimal control problems. [Pg.191]
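A minimal sketch of this loop, assuming the gradient ∇J of the (discretized) objective with respect to the control is available as a callable (for example, from the adjoint equations of Steps 1-3); the step size, tolerances, and names below are illustrative assumptions, and the separate final-time update is not reproduced here.

```python
import numpy as np

def gradient_algorithm(J, grad_J, u0, step=0.1, tol=1e-6, max_iter=200):
    """Sketch of the gradient algorithm: steepest-descent improvement of the
    control u until the decrease in J or the norm of grad J is negligible."""
    u = np.asarray(u0, dtype=float)
    J_old = J(u)
    for _ in range(max_iter):
        g = grad_J(u)
        if np.linalg.norm(g) < tol:      # norm of grad J negligible
            break
        u = u - step * g                 # improved control u_next
        J_new = J(u)
        if abs(J_old - J_new) < tol:     # reduction in J insignificant
            J_old = J_new
            break
        J_old = J_new
    return u, J_old
```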

First, we define the objective function (see Figure 8.22b), such as energy consumption, the number of trays of a column, or the conversion of a reactor. Up to three optimization algorithms are available for this case: SQP, generalized reduced gradient, and simultaneous modular SQP. The details of these methods can be found elsewhere [44,45]. We then define the independent variables and the constraints on variables from the streams or the equipment simply by selecting the unit or stream, the variable, its range of operating values, and an initial value; a sketch of the equivalent setup follows. [Pg.333]
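Outside the simulator's dialogs, the same ingredients (an objective, independent variables with operating ranges and initial values, and constraints) can be sketched with a general-purpose SQP solver. The objective model, variable names, and all numbers below are purely hypothetical assumptions for illustration, not values from the reference.

```python
from scipy.optimize import minimize

# Hypothetical objective: energy consumption as a function of two
# independent variables (say, a reflux ratio and a reactor temperature).
def energy_consumption(x):
    reflux, temp = x
    return 1.5 * reflux**2 + 0.01 * (temp - 350.0)**2   # illustrative model only

x0 = [2.0, 360.0]                      # initial values of the independent variables
bounds = [(1.0, 5.0), (300.0, 400.0)]  # operating range of each variable

# A constraint on a calculated quantity, written as g(x) >= 0 for the solver.
constraints = [{"type": "ineq", "fun": lambda x: 0.95 - 0.1 * x[0]}]

result = minimize(energy_consumption, x0, method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```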

For such applications of classical optimization theory, the energy and gradient data are so computationally expensive that only the most efficient optimization methods can be considered, no matter how elaborate. The number of quantum chemical wave function calculations must be kept to an absolute minimum for overall efficiency; by comparison, the computational cost of an update algorithm is always negligible in this context. Data from successive iterative steps should therefore be saved and used to reduce the total number of steps, and any algorithm that depends on line searches in the parameter hyperspace should be avoided. [Pg.30]
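One common realization of such an update, named here only as an illustration since the text does not commit to a specific formula, is the BFGS update of an approximate Hessian built from the saved step and gradient-change vectors of successive iterations. Because it reuses gradients already computed for accepted steps, it adds curvature information without extra wave function evaluations.

```python
import numpy as np

def bfgs_update(H, s, y):
    """One BFGS update of an approximate Hessian H (illustrative sketch).

    s = x_new - x_old   (step taken in the last iteration)
    y = g_new - g_old   (corresponding change in the gradient)
    The update itself costs only a few dense matrix operations, which is
    negligible next to the quantum chemical energy/gradient evaluations.
    """
    Hs = H @ s
    return (H
            - np.outer(Hs, Hs) / (s @ Hs)
            + np.outer(y, y) / (y @ s))
```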

