Big Chemical Encyclopedia


Single Variable Optimizations

Unconstrained Optimization Unconstrained optimization refers to the case where no inequality constraints are present and all equality constraints can be eliminated by solving for selected dependent variables followed by substitution for them in the objective function. Very few realistic problems in process optimization are unconstrained. However, it is desirable to have efficient unconstrained optimization techniques available, since these techniques must be applied in real time and iterative calculations cost computer time. The two classes of unconstrained techniques are single-variable optimization and multivariable optimization. [Pg.744]

Some plants have been using computer control for 20 years. Control systems in industrial use typically consist of individual feedback and feedforward loops. Horst and Enochs [Engineering & Mining Journal, 181(6), 69-171 (1980)] reported that installation of single-variable automatic controls improved performance of 20 mineral processing plants by 2 to 10 percent. But interactions among the processes make it difficult for independent controllers to control the circuit optimally. [Pg.1839]

The golden section search is the optimization analog of a binary search. It is used for functions of a single variable, F(a). It is faster than a random search, but the difference in computing time will be trivial unless the objective function is extremely hard to evaluate. [Pg.207]
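As a minimal sketch of the region-elimination idea (the function, interval, and tolerance below are illustrative assumptions, not taken from the text):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by golden-section region elimination."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # ~0.618, reciprocal of the golden ratio
    x1 = b - invphi * (b - a)
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                      # minimum must lie in [a, x2]
            b, x2, f2 = x2, x1, f1       # old x1 becomes the new x2
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                            # minimum must lie in [x1, b]
            a, x1, f1 = x1, x2, f2       # old x2 becomes the new x1
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

# Example: the minimum of (x - 2)^2 on [0, 5] is at x = 2
x_min = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Because the golden ratio satisfies r² = 1 − r, one interior point is reused at every step, so each iteration costs only a single new function evaluation.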

Figure 3.5 Region elimination for the optimization of a single variable.
A more sophisticated method for optimization of a single variable is Newton's method, which exploits first and second derivatives of the objective function. Newton's method starts by supposing that the following equation needs to be solved ... [Pg.38]
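A hedged sketch of the iteration, assuming the equation to be solved is f′(x) = 0 and that the caller supplies both derivatives (the example objective is an illustrative assumption):

```python
def newton_1d(df, d2f, x0, tol=1e-8, max_iter=50):
    """Newton's method for one variable: iterate x <- x - f'(x)/f''(x)
    to solve f'(x) = 0, using first and second derivatives of the objective."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x - ln(x) has f'(x) = 1 - 1/x, f''(x) = 1/x^2, minimum at x = 1
x_star = newton_1d(lambda x: 1.0 - 1.0 / x, lambda x: 1.0 / x ** 2, x0=0.5)
```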

The method of steepest descent uses only first-order derivatives to determine the search direction. Alternatively, Newton's method for single-variable optimization can be adapted to carry out multivariable optimization, taking advantage of both first- and second-order derivatives to obtain better search directions. However, second-order derivatives must be evaluated, either analytically or numerically, and multimodal functions can make the method unstable. Therefore, while this method is potentially very powerful, it also has some practical difficulties. [Pg.40]
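A minimal illustration of the multivariable Newton step (the quadratic test function and its analytically derived gradient and Hessian are assumptions for the example, not from the text):

```python
import numpy as np

def newton_multivariable(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method in several variables: solve H(x) s = g(x), then step x <- x - s.
    Uses both first-order (gradient) and second-order (Hessian) information."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(hess(x), grad(x))
        x = x - s
        if np.linalg.norm(s) < tol:
            break
    return x

# Example: f(x, y) = (x - 1)^2 + 2(y + 1)^2 + x*y
grad = lambda v: np.array([2.0 * (v[0] - 1.0) + v[1], 4.0 * (v[1] + 1.0) + v[0]])
hess = lambda v: np.array([[2.0, 1.0], [1.0, 4.0]])
x_opt = newton_multivariable(grad, hess, [0.0, 0.0])
```

For this quadratic the Hessian is constant and positive definite, so a single Newton step lands on the minimizer (12/7, −10/7); for multimodal functions, as the text notes, the raw iteration can diverge or converge to a saddle point.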

One method of optimization for a function of a single variable is to set up as fine a grid as you wish for the values of x and calculate the function value for every point on the grid. An approximation to the optimum is the best value of f(x). Although this is not a very efficient method for finding the optimum, it can yield acceptable results. On the other hand, if we were to utilize this approach in optimizing a multivariable function of more than, say, five variables, the computer time is quite likely to become prohibitive, and the accuracy is usually not satisfactory. [Pg.155]
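A sketch of this brute-force grid approach for one variable (the grid size and test function are chosen purely for illustration):

```python
def grid_search(f, lo, hi, n=1000):
    """Evaluate f on an (n+1)-point uniform grid over [lo, hi]; keep the best point."""
    best_x, best_f = lo, f(lo)
    for i in range(1, n + 1):
        x = lo + (hi - lo) * i / n
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: minimum of (x - 0.3)^2 on [0, 1], accurate only to the grid spacing
x_best, f_best = grid_search(lambda x: (x - 0.3) ** 2, 0.0, 1.0)
```

The accuracy is limited to the spacing (hi − lo)/n, and the number of evaluations grows as nᵏ for k variables, which is why the approach becomes prohibitive beyond a handful of variables.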

In optimization of a function of a single variable, we recognize (as for general multivariable problems) that there is no substitute for a good first guess for the starting point in the search. Insight into the problem as well as previous experience... [Pg.156]

Optimisation may be used, for example, to minimise the cost of reactor operation or to maximise conversion. Having set up a mathematical model of a reactor system, it is only necessary to define a cost or profit function and then to minimise or maximise this by variation of the operational parameters, such as temperature, feed flow rate or coolant flow rate. The extremum can then be found either manually by trial and error or by the use of numerical optimisation algorithms. The first method is easily applied with MADONNA, or with any other simulation software, if only one operational parameter is allowed to vary at any one time. If two or more parameters are to be optimised this method becomes extremely cumbersome. To handle such problems, MADONNA has a built-in optimisation algorithm for the minimisation of a user-defined objective function. This can be activated by the OPTIMIZE command from the Parameter menu. In MADONNA the use of parametric plots for a single variable optimisation is easy and straightforward. It often suffices to identify optimal conditions, as shown in Case A below. [Pg.79]

Simultaneous Optimization of Density and Temperature. Although near-baseline resolution was achieved for all eight sample components via the optimization of a single variable (density), as illustrated in Figure 1, a better (or in rare cases, equal) result will always be obtained if all variables of interest are optimized. The window diagram method is now considered for the simultaneous optimization of density and temperature for the separation of the eight-component sample of Table VI, to provide a comparison with the SFC separation obtained with the density-only optimization (Figure 6). [Pg.332]

Because introducing the reader to actual optimization techniques is beyond the scope of this book, let us only indicate here that with the model obtained the analyst or technician is able to find the optimum value of y by partially differentiating the regression function. Setting each differential to zero, he or she may find the optimum values of the single variables. Substitution of these values into the model equation will yield the optimum value of y. [Pg.85]
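For a one-variable quadratic regression model y = b0 + b1·x + b2·x², setting the derivative to zero gives the optimum in closed form; a minimal sketch (the coefficient values are invented for illustration):

```python
def quadratic_model_optimum(b0, b1, b2):
    """Set dy/dx = b1 + 2*b2*x to zero for the fitted model y = b0 + b1*x + b2*x**2,
    then substitute the optimum x back into the model to get the optimum y."""
    x_opt = -b1 / (2.0 * b2)
    y_opt = b0 + b1 * x_opt + b2 * x_opt ** 2
    return x_opt, y_opt

# Example: y = 1 + 4x - 2x^2 (b2 < 0, so a maximum) gives x_opt = 1, y_opt = 3
x_opt, y_opt = quadratic_model_optimum(1.0, 4.0, -2.0)
```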

Efficient single-variable numerical techniques for optimization are important beyond their implementation for one-dimensional problems because they form the basis of most multivariable techniques. Three classes of tech-... [Pg.136]

The dynamic optimisation problem P2 now results in a single-variable algebraic optimisation problem. The only variable to be optimised is the batch time t. The solution of the problem no longer requires full integration of the model equations. This method will solve the maximum profit problem very cheaply under frequently changing market prices of (CD/, CB0, C ) and will thus determine the new optimum batch time for the plant. The optimal values of C, Dh r, QR, etc. can now be determined using the functions represented by Equations 9.2-9.5. [Pg.286]

Eqn (29) may be included into the Newton-Raphson iteration as the n-th equation to determine all the intermediate as well as the final pressures. This, however, requires subsequent derivation of the extra row in the Jacobian by differentiation of eqn (29) with respect to the vector x of all pressures. This leads to a fairly involved algebraic expression, so the quickest and safest method of calculating the choked flow conditions in the line segment is a simple single-variable optimization of y from eqn (27) with respect to the final pressure. The vector x is computed from eqn (24) by a straightforward Newton-Raphson iteration for each step in the single-variable hill climbing. [Pg.188]

As a conclusion, ODS materials are understandably the most widely applied RPLC stationary phases, and the stationary phase chain length is a variable that will usually not be of interest as a single variable for the optimization of selectivity. [Pg.58]

There are many cases in which the factor being optimized is a function of a single variable. The procedure then becomes very simple. Consider the example presented in Fig. 11-1, where it is necessary to obtain the insulation thickness which gives the least total cost. The primary variable involved is the thickness of the insulation, and relationships can be developed showing how this variable affects all costs. [Pg.344]
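As a hedged sketch of this kind of trade-off (the cost model and coefficients below are hypothetical stand-ins, not the relationships of Fig. 11-1): insulation cost rises with thickness while heat-loss cost falls, and the least total cost sits between the two extremes.

```python
def total_cost(t, c_ins=2.0, c_loss=5.0):
    """Hypothetical annual cost model: insulation cost rises linearly with
    thickness t, heat-loss cost falls as 1/t. Coefficients are illustrative."""
    return c_ins * t + c_loss / t

# Scan candidate thicknesses; the analytic least-cost point is t* = sqrt(c_loss/c_ins)
candidates = [0.01 * i for i in range(1, 500)]
t_opt = min(candidates, key=total_cost)
```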

In our study of search problems we have seen that single-variable systems can be optimized with ease; two-variable systems, with some effort; and multivariable systems, only with extreme difficulty, if at all. As more variables enter a search problem, the number of experiments needed grows rapidly, and the unimodality assumption becomes less and less plausible. Thus our investigation of search problems leads directly to interaction problems, where the criterion of effectiveness depends on so many factors that it is impractical, or even impossible, to find the optimum by conventional methods. Successful techniques for solving interaction problems involve decomposing a big system into several smaller ones, as we have already done with our lines of search. [Pg.292]

Let us now consider the situation in which we optimize the solvent composition of the mobile phase and let us first suppose there are three solvents A, B and C involved. The aim is to know how much of each of them should be present. The methods of Section 6.3 are now no longer applicable since they can be applied only when there is a single variable. At first sight we could conclude that there are three variables, the contents of respectively A, B and C, and that we could therefore apply in a first step the two- or more-level factorial designs of Section 6.4. If we suppose that the experimental domain goes from 0% to 100% for each of these variables, this would yield for a two-level design Table 6.13. [Pg.209]

