Algorithms nonlinear optimization

Figure 16. Root-mean-squared error progression plot for the Fletcher nonlinear optimization and back-propagation algorithms during training.
Zhang, X., 1995. Algorithms for optimal scheduling using nonlinear models. PhD thesis, University of London, London. [Pg.40]

The generalized reduced gradient (GRG) algorithm was first developed in the late 1960s by Jean Abadie (Abadie and Carpentier, 1969) and has since been refined by several other researchers. In this section we discuss the fundamental concepts of GRG and describe the version of GRG that is implemented in GRG2, the most widely available nonlinear optimizer [Lasdon et al., 1978; Lasdon and Waren, 1978; Smith and Lasdon, 1992]. [Pg.306]
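To make the reduced-gradient idea concrete, the following is a minimal sketch, not GRG2 itself: the two-variable problem, the fixed choice of basic/nonbasic variables, and the constant step length are illustrative assumptions, and a real GRG code would also handle inequalities, bounds, and basis changes.

```python
import numpy as np

# Minimal reduced-gradient sketch (illustrative, not GRG2):
# minimize f(x) subject to one equality constraint h(x) = 0.
# The variables are split into a "nonbasic" part (moved along the
# reduced gradient) and a "basic" part (adjusted to restore feasibility).

def f(x):            # objective: x0^2 + 2*x1^2
    return x[0]**2 + 2.0 * x[1]**2

def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])

def h(x):            # equality constraint: x0 + x1 - 1 = 0
    return x[0] + x[1] - 1.0

def grad_h(x):
    return np.array([1.0, 1.0])

x = np.array([0.8, 0.2])          # feasible starting point
step = 0.2                        # fixed step length, for the sketch only
for it in range(50):
    g, J = grad_f(x), grad_h(x)
    # nonbasic variable: x0, basic variable: x1
    # reduced gradient r = df/dx0 - (dh/dx0 / dh/dx1) * df/dx1
    r = g[0] - (J[0] / J[1]) * g[1]
    if abs(r) < 1e-8:
        break
    x[0] -= step * r              # move the nonbasic variable
    # restore feasibility by solving h(x) = 0 for the basic variable
    # (Newton step; h is linear here, so one step suffices)
    x[1] -= h(x) / grad_h(x)[1]

print(x, f(x))                    # converges near (2/3, 1/3), f = 2/3
```

The key point the sketch illustrates is that the search is carried out in the space of the nonbasic variables only, with the basic variables implicitly eliminated through the constraints.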

Figure E13.4d compares the optimal gas transmission network with the original network. From an infeasible starting configuration with 10-mile-long pipeline segments, the nonlinear optimization algorithm reduced the objective function from the first feasible state of 1.399 × 10^7 dollars/year to 7.289 × 10^6 dollars/year, a savings of close to $7 million per year. Of the ten possible compressor stations, only four remained in the final optimal network. Table E13.4a lists the final state of the network. Note that because the suction and discharge pressures for the pipeline segments in branch 2 are identical, compressors 4, 5, 6, and 7 do not exist in the optimal configuration, nor do 9 and 10 in branch 3.
In general, linear functions, and correspondingly linear optimization methods, can be distinguished from nonlinear optimization problems. The former constitute the broad field of linear programming, for which the simplex algorithm is the predominant routine solution method [75]; that field is excluded here. [Pg.69]
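For contrast with the nonlinear methods that follow, a linear program of this kind can be stated and solved in a few lines. The sketch below uses SciPy's linprog; the data and the choice of the HiGHS backend are illustrative assumptions, not taken from the text.

```python
from scipy.optimize import linprog

# Tiny linear program:
#   maximize 3*x1 + 2*x2
#   subject to x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x1, x2 >= 0
c = [-3.0, -2.0]                    # linprog minimizes, so negate the objective
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)              # optimum at x = (4, 0), objective value 12
```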

This alternative is denoted the (55/50) algorithm; it provides the optimum distribution of the ratios Hz(estimate)/Hz such that the number of estimates of Hz that fall below 0.9 Hz is minimal. That is, the (55/50) algorithm best avoids underestimating Hz by more than 10 percent, but at the cost of sometimes allowing larger overestimates of Hz than the (70/30) algorithm. A commercial nonlinear optimization method (Quattro Pro) was utilized to obtain... [Pg.23]

The optimality conditions discussed in the previous sections formed the theoretical basis for the development of several algorithms for unconstrained and constrained nonlinear optimization problems. In this section, we will provide a brief outline of the different classes of nonlinear multivariable optimization algorithms. [Pg.68]

This chapter introduces the fundamentals of mixed-integer nonlinear optimization. Section 6.1 presents the motivation and the application areas of MINLP models. Section 6.2 presents the mathematical description of MINLP problems, discusses the challenges and computational complexity of MINLP models, and provides an overview of the existing MINLP algorithms. [Pg.211]

MINOPT (Mixed Integer Nonlinear OPTimizer) is written entirely in C and solves MINLP problems by a variety of algorithms that include (i) the Generalized Benders Decomposition GBD, (ii) the Outer Approximation with Equality Relaxation OA/ER, (iii) the Outer Approximation with Equality Relaxation and Augmented Penalty OA/ER/AP, and (iv) the Generalized Cross Decomposition GCD. [Pg.257]
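MINOPT's own input format is not reproduced in the source, but the model/solve pattern that such decomposition codes operate on can be sketched in a modern algebraic modeling layer. The example below is a hypothetical small MINLP written in Pyomo, and it assumes an MINLP solver such as BONMIN is installed; none of these names come from the text.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, Binary, minimize, SolverFactory)

# Hypothetical small MINLP: a continuous variable x, a binary "unit exists"
# variable y, a nonlinear objective, and a linking (big-M) constraint.
m = ConcreteModel()
m.x = Var(domain=NonNegativeReals, bounds=(0, 10))
m.y = Var(domain=Binary)

# nonlinear objective; the data are illustrative only
m.obj = Objective(expr=(m.x - 3) ** 2 + 5 * m.y, sense=minimize)

# x may be nonzero only if the binary variable is switched on
m.link = Constraint(expr=m.x <= 10 * m.y)

# Any MINLP solver available locally could be used here (e.g. BONMIN or
# Couenne); decomposition schemes such as GBD or OA/ER work on the same
# separation between the model statement and the solve call.
SolverFactory("bonmin").solve(m)
print(m.x.value, m.y.value)        # expected: x = 3, y = 1, objective = 5
```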

The objective function is nonlinear and nonconvex; hence, despite the linear set of constraints, the solution of the resulting optimization model is a local optimum. Note that the resulting model is of the MINLP type and can be solved with the algorithms described in the chapter on mixed-integer nonlinear optimization. Yee and Grossmann (1990) used the OA/ER/AP method to solve the model first, and then applied an NLP suboptimization problem for the fixed structure to determine the optimal flowrates of the split streams, where such splits occur. [Pg.371]

Most of the optimization techniques in use today have been developed since the end of World War II. Considerable advances in computer architecture and optimization algorithms have enabled the complexity of problems that are solvable via optimization to increase steadily. Initial work in the field centered on linear optimization problems (linear programming, or LP), which is still used widely today in business planning. Nonlinear optimization problems (nonlinear programming, or NLP) have become increasingly important, particularly for steady-state processes. [Pg.134]

Sargent, R.W.H., and D.J. Sebastian, "Numerical Experience with Algorithms for Unconstrained Minimization," in F.A. Lootsma (Ed.), Numerical Methods for Nonlinear Optimization, Academic Press, 1972, pp. 45-68. [Pg.53]

Given specifications other than temperature and pressure, in principle, nonlinear programming algorithms can optimize the appropriate thermodynamic function. When H and P are specified, entropy is maximized, but with compositions as iteration variables the relation S = S(H, P, n) is needed. Similarly, given S and P, enthalpy is minimized, but with compositions as iteration variables H = H(S, P, n) is needed. Since these relations are usually not available, entropy can be maximized using Equation (3) and enthalpy minimized using Equation (5), with temperature and compositions as iteration variables. [Pg.130]
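The formulation described above can be sketched schematically with a general-purpose NLP solver: maximize S with temperature and compositions as iteration variables, subject to the specified enthalpy. The entropy and enthalpy models below are placeholders (they are not Equations (3) and (5) of the source), and the specified enthalpy, bounds, and starting point are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

H_spec = 5000.0                       # specified enthalpy (arbitrary units)

def S(z):                             # placeholder entropy model S(T, n)
    T, n = z[0], z[1:]
    return np.sum(n) * np.log(T) - np.sum(n * np.log(n))

def H(z):                             # placeholder enthalpy model H(T, n)
    T, n = z[0], z[1:]
    return np.sum(n) * 30.0 * T

z0 = np.array([300.0, 0.5, 0.5])      # initial temperature and mole numbers
res = minimize(lambda z: -S(z), z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda z: H(z) - H_spec}],
               bounds=[(200.0, 2000.0)] + [(1e-6, None)] * 2)
print(res.x, -res.fun)                # T and n maximizing S at the given H, P
```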

Another introductory note may provide further incentive for novice optimizers to read on. Despite extensive developments in optimization methods in the last decade, large-scale nonlinear optimization still remains an art that requires considerable computing experience, algorithm familiarity, and intuition. In general, black box minimization implementations, even those using state-of-the-art algorithms, are only partially successful. [Pg.2]

Table 1 presents the results of kinetic calculations performed with the two-exponential kinetic model within the nonlinear optimization algorithm. The differential absorption spectra of two K2 triplet components (Figure 2), attributed to different types of dye-DNA complexes, were obtained from the kinetic analysis of the experimental data. The two-exponential character of the triplet-state decay kinetics for thiacarbocyanine dyes bound to the biopolymer may be explained by the formation of two different types of complexes (superficial binding and intercalation) [11, 12]. In these two types of dye-DNA complexes the triplet states of the bound dye molecules should possess different spectral and kinetic characteristics, which is reflected in the two-exponential character of the decay. [Pg.68]
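The two-exponential fit itself can be reproduced in outline with a standard nonlinear least-squares routine. The sketch below uses synthetic data and illustrative parameter names (amplitudes and lifetimes), not the experimental transients or the specific optimization code of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-exponential decay model: A1*exp(-t/tau1) + A2*exp(-t/tau2)
def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic "transient" with noise (arbitrary units)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 200)
y = biexp(t, 1.0, 2.0, 0.4, 15.0) + 0.01 * rng.standard_normal(t.size)

# Nonlinear least-squares fit; the initial guess and bounds keep the
# lifetimes positive and the two components distinguishable.
p0 = [0.8, 1.0, 0.3, 10.0]
popt, pcov = curve_fit(biexp, t, y, p0=p0,
                       bounds=([0, 0.1, 0, 0.1], [10, 100, 10, 100]))
print("A1, tau1, A2, tau2 =", popt)
```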

Linear Models. Variable selection approaches can be applied in combination with both linear and nonlinear optimization algorithms. Exhaustive analysis of all possible combinations of descriptor subsets to find a specific subset of variables that affords the best correlation with the target property is practically impossible because of the combinatorial nature of this problem. Thus, stochastic sampling approaches such as genetic or evolutionary algorithms (GA or EA) or simulated annealing (SA) are employed. To illustrate one such application we shall consider the GA-PLS method, which was implemented as follows (136). [Pg.61]
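As an illustration of the general idea only, and deliberately simplified rather than the specific GA-PLS implementation of ref. (136), a GA-style search over binary descriptor masks with a cross-validated PLS fitness might look like the sketch below; the synthetic data, population size, crossover scheme, and mutation rate are all arbitrary choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Chromosome = binary mask over descriptors; fitness = cross-validated R^2
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 30))                           # synthetic descriptors
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.standard_normal(60)   # synthetic property

def fitness(mask):
    if mask.sum() < 2:                      # need at least 2 descriptors for PLS
        return -np.inf
    model = PLSRegression(n_components=2)
    return cross_val_score(model, X[:, mask.astype(bool)], y,
                           cv=5, scoring="r2").mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))   # random initial population
for gen in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # keep the better half
    children = []
    for _ in range(10):                           # single-point crossover + mutation
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected descriptors:", np.flatnonzero(best))
```

The same selection loop works with any regression method in place of PLS; only the fitness function changes.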

The solution of the nonlinear optimization problem (P10) gives us a lower bound on the objective function for the flowsheet. However, the cross-flow model may not be sufficient for the network, and we need to check for reactor extensions that improve our objective function beyond those available from the cross-flow reactor. We have already considered nonisothermal systems in the previous section. However, for simultaneous reactor energy synthesis, the dimensionality of the problem increases with each iteration of the algorithm in Fig. 8 because the heat effects in the reactor affect the heat integration of the process streams. Here, we check for CSTR extensions from the convex hull of the cross-flow reactor model, in much the same spirit as the illustration in Fig. 5, except that all the flowsheet constraints are included in each iteration. A CSTR extension to the convex hull of the cross-flow reactor constitutes the addition of the following terms to (P10) in order to maximize (2) instead of... [Pg.279]

