
Reduced gradient methods

The advantage of the NR method is that the convergence is second-order near a stationary point. If the function only contains terms up to second order, the NR step will reach the stationary point in a single iteration. In general the function contains higher-order terms, but the second-order approximation becomes better and better as the stationary point is approached. Sufficiently close to the stationary point, the gradient is reduced quadratically. This means that if the gradient norm is reduced by a factor of 10 between two iterations, it will go down by a factor of 100 in the next iteration, and a factor of 10 000 in the next ... [Pg.319]
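As an illustration of this quadratic reduction of the gradient norm, here is a minimal Python sketch of Newton-Raphson iterations on an assumed one-dimensional test function, f(x) = x^4 + x^2 (chosen only for illustration, not taken from the source):

```python
# Minimal sketch of Newton-Raphson iterations on an assumed 1-D test function
# f(x) = x**4 + x**2 (illustrative only; not from the source text).
def grad(x):
    return 4.0 * x**3 + 2.0 * x      # f'(x)

def hess(x):
    return 12.0 * x**2 + 2.0         # f''(x)

x = 1.0                              # starting point
for it in range(6):
    x -= grad(x) / hess(x)           # NR step: x_new = x - f'(x) / f''(x)
    print(f"iteration {it}: |gradient| = {abs(grad(x)):.3e}")
# Close to the stationary point x* = 0 the printed gradient norm drops
# roughly quadratically from one iteration to the next.
```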

Sparse matrices are those in which the majority of the elements are zero. If the structure of the matrix is exploited, the solution time on a computer is greatly reduced. See Duff, I. S., J. K. Reid, and A. M. Erisman (eds.), Direct Methods for Sparse Matrices, Clarendon Press, Oxford (1986); Saad, Y., Iterative Methods for Sparse Linear Systems, 2d ed., Society for Industrial and Applied Mathematics, Philadelphia (2003). The conjugate gradient method is well suited to sparse matrix problems because it requires only multiplication of the matrix by a vector, so the sparsity is easy to exploit. The conjugate gradient method is an iterative method that, in exact arithmetic, converges in at most n iterations for an n x n matrix. [Pg.42]
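To show how little is needed beyond matrix-vector products, here is a hedged Python sketch using SciPy's sparse conjugate gradient solver on an assumed tridiagonal test system (the matrix is a stand-in, not a system from the source):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Sketch: solve A x = b with the conjugate gradient method for a sparse,
# symmetric positive-definite matrix. Only matrix-vector products A @ v are
# needed, so the sparsity is exploited automatically. The tridiagonal test
# matrix below is an assumed stand-in for a real sparse system.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                   # info == 0 signals successful convergence
print("converged:", info == 0, "residual norm:", np.linalg.norm(A @ x - b))
```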

The MINLP-problems were implemented in GAMS [7, 8] and solved by the outer approximation/equality relaxation/augmented penalty-method [9] as implemented in DICOPT. The algorithm generates a series of NLP and MILP subproblems, which were solved by the generalized reduced gradient method [10] as implemented in CONOPT and the integrality relaxation based branch and cut method as... [Pg.155]

In many cases the equality constraints may be used to eliminate some of the variables, leaving a problem with only inequality constraints and fewer variables. Even if the equalities are difficult to solve analytically, it may still be worthwhile solving them numerically. This is the approach taken by the generalized reduced gradient method, which is described in Section 8.7. [Pg.126]

The constraint in the original problem has now been eliminated, and f(x2) is an unconstrained function with 1 degree of freedom (one independent variable). Using constraints to eliminate variables is the main idea of the generalized reduced gradient method, as discussed in Section 8.7. [Pg.265]
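A minimal sketch of this elimination idea, using an assumed toy problem (minimize x1^2 + 2*x2^2 subject to x1 + x2 = 1, not taken from the text): the equality constraint is solved for x2, leaving an unconstrained problem in a single variable.

```python
from scipy.optimize import minimize_scalar

# Toy problem (assumed for illustration, not from the text):
#   minimize f(x1, x2) = x1**2 + 2*x2**2   subject to   x1 + x2 = 1.
# Solving the equality constraint for x2 = 1 - x1 eliminates one variable and
# leaves an unconstrained function of one independent variable -- the idea
# behind the generalized reduced gradient approach.
def f_reduced(x1):
    x2 = 1.0 - x1                    # dependent variable from the constraint
    return x1**2 + 2.0 * x2**2

res = minimize_scalar(f_reduced)
x1_opt = res.x
print("x1 =", x1_opt, " x2 =", 1.0 - x1_opt)   # analytic optimum: x1 = 2/3, x2 = 1/3
```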

Abadie, J. and J. Carpentier. Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints. In Optimization, R. Fletcher, ed. Academic Press, New York (1969), pp. 37-47. [Pg.328]

Solve the following problems by the generalized reduced-gradient method. Also, count the number of function evaluations, gradient evaluations, constraint evaluations, and evaluations of the gradient of the constraints. [Pg.336]
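One possible way to do this bookkeeping is to wrap each callback in a counter, as in the hedged sketch below. SciPy's SLSQP solver is used only as a stand-in (SciPy does not ship a GRG implementation), and the small equality-constrained test problem is assumed for illustration; the counting pattern is the point.

```python
import numpy as np
from scipy.optimize import minimize

# Count function, gradient, constraint, and constraint-gradient evaluations
# by incrementing a shared dictionary inside each callback.
counts = {"f": 0, "grad f": 0, "g": 0, "grad g": 0}

def f(x):
    counts["f"] += 1
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def grad_f(x):
    counts["grad f"] += 1
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def g(x):                            # equality constraint g(x) = 0
    counts["g"] += 1
    return x[0] + x[1] - 2.0

def grad_g(x):
    counts["grad g"] += 1
    return np.array([1.0, 1.0])

res = minimize(f, x0=[0.0, 0.0], jac=grad_f, method="SLSQP",
               constraints=[{"type": "eq", "fun": g, "jac": grad_g}])
print("solution:", res.x)
print("evaluation counts:", counts)
```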

Contours (the heavy lines) for the objective function of the extraction process. Points 1, 2, 3, and 4 indicate the progress of the reduced-gradient method toward the optimum (point 4). [Pg.450]

The nonlinear programming problem based on objective function (f), model equations (b)-(g), and the inequality constraints was solved using the generalized reduced gradient method presented in Chapter 8. See Setalvad and coworkers (1989) for details on the parameter values used in the optimization calculations, the results of which are presented here. [Pg.504]

The solution listed in Table E15.2B was obtained from several nonfeasible starting points, one of which is shown in Table E15.2C, by the generalized reduced gradient method. [Pg.535]

Reduced gradient method. This technique is based on solving a sequence of optimization subproblems in a reduced space of variables. The process constraints are used to solve for a set of variables (z_d), called basic or dependent, in terms of the others, known as nonbasic or independent (z_i). Using this categorization of variables, problem (5.3) is transformed into another one of fewer dimensions ... [Pg.104]
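For the special case of linear constraints A z = b, the reduced gradient has a closed form. The sketch below (an assumed toy setup, not from the source) partitions A into dependent and independent columns and evaluates dF/dz_i = grad_i f - (A_d^{-1} A_i)^T grad_d f:

```python
import numpy as np

# Sketch of the reduced gradient for linear constraints A z = b (assumed toy
# setup, not from the source). With the partition A = [A_d | A_i] into
# dependent (basic) and independent (nonbasic) columns, the constraint gives
# z_d = A_d^{-1} (b - A_i z_i), and the reduced objective F(z_i) = f(z_d(z_i), z_i)
# has gradient  dF/dz_i = grad_i f - (A_d^{-1} A_i)^T grad_d f.
def reduced_gradient(grad_d, grad_i, A_d, A_i):
    return grad_i - np.linalg.solve(A_d, A_i).T @ grad_d

# Example: f(z) = 0.5 * ||z||^2 with the single constraint z1 + z2 + z3 = 3,
# basic variable z_d = (z1,), nonbasic z_i = (z2, z3), evaluated at z = (1, 1, 1).
A_d = np.array([[1.0]])
A_i = np.array([[1.0, 1.0]])
grad_d = np.array([1.0])             # df/dz1 at z = (1, 1, 1)
grad_i = np.array([1.0, 1.0])        # df/dz2, df/dz3 at z = (1, 1, 1)
print(reduced_gradient(grad_d, grad_i, A_d, A_i))   # [0. 0.]: z = (1, 1, 1) is optimal
```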

Wolfe, P. (1962). The Reduced-Gradient Method. RAND Corporation (unpublished). [Pg.270]

Each increment should be less than the diameter of the charge to maximize the effect of this method. Although pressure and density gradients are reduced, they are not completely eliminated by this method and are proportional to the number and size of the increments used. However, interfaces between the increments have been found to cause initiation problems in some cases. [Pg.167]

A solution of one part of kanamycin B-3'-phosphate, 10 parts by volume of bis(trimethylsilyl)acetamide, 2 parts by volume of trimethylchlorosilane and 0.4 part of triphenylphosphine is heated at 115°C for 30 h. After cooling, the reaction mixture is concentrated under reduced pressure, 100 parts by volume of methanol and 50 parts by volume of water are added to the concentrate, and the mixture is stirred for 1 h. Methanol is removed by distillation, and the ethyl acetate-soluble portion is removed. The water layer is run onto a column of 60 parts by volume of cation-exchange resin [Amberlite CG-50, NH4+ form]. The column is washed with 200 parts by volume of water and fractionated by a linear gradient method with 600 parts by volume of water and 600 parts by volume of 0.5 N aqueous ammonia, each fraction being 10... [Pg.3259]

