
Iterative optimization technique

A more subjective approach to the multiresponse optimization of conventional experimental designs was outlined by Derringer and Suich (22). This sequential generation technique weights the responses by means of desirability factors, reducing the multivariate problem to a univariate one that can then be solved by iterative optimization techniques. The use of desirability factors permits the formulator to specify the range of property values considered acceptable for each response. The optimization procedure then attempts to determine an optimal point within the acceptable limits of all responses. [Pg.68]
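To make the reduction concrete, here is a minimal Python sketch of the desirability idea: each response is mapped to a value between 0 and 1 over its acceptable range, and the geometric mean of the individual desirabilities gives a single objective for an iterative optimizer. The response models, acceptable limits, and optimizer choice below are all hypothetical illustrations, not taken from Derringer and Suich.

```python
import numpy as np
from scipy.optimize import minimize  # iterative optimizer for the reduced, univariate problem

def desirability_larger_is_better(y, y_min, y_max, s=1.0):
    """One-sided desirability: 0 below y_min, 1 above y_max, ramp in between."""
    d = (y - y_min) / (y_max - y_min)
    return np.clip(d, 0.0, 1.0) ** s

def overall_desirability(x):
    # Hypothetical response models y1(x), y2(x), as might be fitted from a designed experiment.
    y1 = 10 + 4 * x[0] - x[0] ** 2 + x[1]          # e.g., yield
    y2 = 8 - (x[0] - 1) ** 2 - (x[1] - 0.5) ** 2   # e.g., purity
    d1 = desirability_larger_is_better(y1, y_min=8.0, y_max=14.0)
    d2 = desirability_larger_is_better(y2, y_min=6.0, y_max=8.0)
    return (d1 * d2) ** 0.5  # geometric mean of the desirabilities

# Maximize overall desirability D by minimizing -D iteratively.
res = minimize(lambda x: -overall_desirability(x), x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x, -res.fun)
```

Because the geometric mean is zero whenever any single desirability is zero, the optimizer is steered toward points that lie within the acceptable limits of all responses simultaneously.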

Unconstrained Optimization Unconstrained optimization refers to the case where no inequality constraints are present and all equality constraints can be eliminated by solving for selected dependent variables, followed by substitution for them in the objective function. Very few realistic problems in process optimization are unconstrained. However, it is desirable to have efficient unconstrained optimization techniques available, since these techniques must be applied in real time and iterative calculations cost computer time. The two classes of unconstrained techniques are single-variable optimization and multivariable optimization. [Pg.744]
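As an illustration of the single-variable class, the sketch below implements golden-section search, a standard derivative-free method for minimizing a unimodal function on an interval; the test function and bracket are invented for the example.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by iteratively shrinking the bracket."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c              # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0))  # ~2.0
```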

While it is perfectly permissible to estimate a and b on this basis, the calculation can only be done in an iterative fashion: both a and b are varied in increasingly smaller steps (see Optimization Techniques, Section 3.5), and each time the squared residuals are calculated and summed. The combination of a and b that yields the smallest of such sums represents the solution. Despite digital computers, Adcock's solution, a special case of the maximum likelihood method, is not widely used: the additional computational effort and the more complicated software are not justified by the improved (a debatable notion) results, and the process is not at all transparent, i.e., not amenable to manual verification. [Pg.96]
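The procedure described, varying a and b in increasingly smaller steps and keeping whichever combination gives the smallest sum of squared residuals, can be sketched as follows. The data, starting values, and step schedule are invented for illustration, and it is assumed the criterion in question is the perpendicular (orthogonal) squared distance associated with Adcock's solution, which has no convenient closed form and so genuinely requires iteration.

```python
import numpy as np

def perp_ss(a, b, x, y):
    """Sum of squared perpendicular distances from the points to y = a + b*x."""
    return np.sum((y - a - b * x) ** 2) / (1.0 + b ** 2)

def step_search(x, y, a=0.0, b=1.0, step=1.0, shrink=0.5, tol=1e-9):
    """Vary a and b in increasingly smaller steps; keep the combination
    that yields the smallest sum of squared residuals."""
    best = perp_ss(a, b, x, y)
    while step > tol:
        improved = False
        for da, db in ((step, 0), (-step, 0), (0, step), (0, -step)):
            trial = perp_ss(a + da, b + db, x, y)
            if trial < best:
                a, b, best, improved = a + da, b + db, trial, True
        if not improved:
            step *= shrink  # no neighbouring (a, b) improves: refine the step
    return a, b

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
print(step_search(x, y))  # close to the orthogonal-regression line
```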

If we have very little information about the parameters, direct search methods, like the LJ optimization technique presented in Chapter 5, present an excellent way to generate very good initial estimates for the Gauss-Newton method. Actually, for algebraic equation models, direct search methods can be used to determine the optimum parameter estimates quite efficiently. However, if estimates of the uncertainty in the parameters are required, use of the Gauss-Newton method is strongly recommended, even if it is only for a couple of iterations. [Pg.139]
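The LJ (Luus-Jaakola) technique referred to above is a direct search that samples randomly within a region around the current best point and contracts the region at each outer iteration. The bare-bones sketch below is an illustration only; the contraction factor, iteration counts, and the toy exponential model are arbitrary choices. The resulting estimates would then seed a Gauss-Newton refinement.

```python
import numpy as np

rng = np.random.default_rng(0)

def luus_jaakola(f, x0, r0, n_outer=50, n_inner=100, gamma=0.95):
    """LJ direct search: sample uniformly in a box around the best point,
    then contract the box; yields rough estimates, e.g. to start Gauss-Newton."""
    x_best, f_best = np.asarray(x0, float), f(x0)
    r = np.asarray(r0, float)
    for _ in range(n_outer):
        for _ in range(n_inner):
            x = x_best + r * rng.uniform(-0.5, 0.5, size=x_best.shape)
            fx = f(x)
            if fx < f_best:
                x_best, f_best = x, fx
        r *= gamma  # contract the search region
    return x_best, f_best

# Hypothetical least-squares objective for k = (k1, k2) in y = k1*(1 - exp(-k2*t)).
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.9, 1.6, 2.4, 3.0, 3.2])
obj = lambda k: np.sum((y - k[0] * (1 - np.exp(-k[1] * t))) ** 2)
print(luus_jaakola(obj, x0=[1.0, 1.0], r0=[4.0, 2.0]))
```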

The MOs in eq 5 are typically optimized using a reorthogonalization technique that has been described by Gianinetti et al. (30), though they can also be obtained using a Jacobi rotation method that sequentially and iteratively optimizes each individual orbital (28,37)... [Pg.252]

Despite the limitations, EVOP is an extremely useful optimization technique. EVOP is robust, can handle many variables at the same time, and will always lead to an optimum. Also, because of its iterative nature, little needs to be known about the system before beginning the process. Most important, however, is the fact that it can be useful in plant optimization where the cost of running experiments using conditions that result in low yields or unusable product cannot be tolerated. In theory, the process improves at each step of the optimization scheme, making it ideal for a production situation. For application of EVOP to plant scale operations, see Refs. 12-14. [Pg.165]
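Although plant-scale EVOP is run on the process itself rather than in software, its logic can be illustrated in a few lines: run a small two-level factorial pattern around the current operating point and shift the point toward the best observed response. In the sketch below the "plant" is a made-up noisy function, and the variable names, step sizes, and cycle count are arbitrary; the deliberately small steps mirror EVOP's requirement that every run still produce usable product.

```python
import itertools
import random

random.seed(1)

def plant_yield(temp, conc):
    """Stand-in for a real plant response (unknown to the experimenter), with noise."""
    true = 80 - 0.02 * (temp - 150) ** 2 - 5 * (conc - 1.2) ** 2
    return true + random.gauss(0, 0.2)

def evop(temp, conc, d_temp=2.0, d_conc=0.05, cycles=20):
    """Each cycle: run the centre plus a small 2^2 factorial around it,
    then move the operating point to the best-performing condition."""
    for _ in range(cycles):
        candidates = [(temp, conc)] + [
            (temp + st * d_temp, conc + sc * d_conc)
            for st, sc in itertools.product((-1, 1), repeat=2)
        ]
        temp, conc = max(candidates, key=lambda c: plant_yield(*c))
    return temp, conc

print(evop(140.0, 1.0))  # drifts toward the optimum near (150, 1.2)
```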

Included in the methods discussed below are Newton-based methods (Section 10.3.1), the geometry optimization by direct inversion of the iterative subspace, or GDIIS, method (Section 10.3.2), QM/MM optimization techniques (Section 10.3.3), and algorithms designed to find surface intersections and points of closest approach (Section... [Pg.203]

In simple relaxation (the fixed approximate Hessian method), the step does not depend on the iteration history. More sophisticated optimization techniques use information gathered during previous steps to improve the estimate of the minimizer, usually by invoking a quadratic model of the energy surface. These methods can be divided into two classes: variable metric methods and interpolation methods. [Pg.2336]
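To illustrate the variable metric class, here is a compact BFGS sketch in which the inverse-Hessian estimate H is updated at each iteration from the previous step s and gradient change y, i.e. exactly the "information gathered during previous steps"; simple relaxation corresponds to keeping H fixed. A line search is omitted for brevity, and the quadratic test problem is invented.

```python
import numpy as np

def bfgs_minimize(grad, x0, tol=1e-8, max_iter=100):
    """Variable-metric (BFGS) minimization: the inverse-Hessian estimate H
    is improved from the step/gradient-change pair of each iteration.
    No line search for brevity; real implementations add one."""
    x = np.asarray(x0, float)
    H = np.eye(x.size)  # starting metric; keeping this fixed = simple relaxation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = -H @ g                  # quasi-Newton step
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g               # gradient change along the step
        sy = s @ y
        if sy > 1e-12:              # curvature condition; skip the update otherwise
            rho = 1.0 / sy
            I = np.eye(x.size)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Quadratic test surface: f = 0.5 x^T A x - b^T x, so grad f = A x - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
print(bfgs_minimize(lambda x: A @ x - b, [0.0, 0.0]))  # ~ np.linalg.solve(A, b)
```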

The Matlab Simulink model was designed to represent the model structure and mass balance equations for SSF and is shown in Fig. 6. Shaded boxes represent the reaction rates, which have been lumped into subsystems. To solve the system of ordinary differential equations (ODEs) and to estimate unknown parameters in the reaction rate equations, the parameter estimation interface was used. This program allows the user to decide which parameters to estimate and which type of ODE solver and optimization technique to use. The user imports observed data as it relates to the input, output, or state data of the Simulink model. With the imported data as reference, the user can select options for the ODE solver (fixed step/variable step, stiff/non-stiff, tolerance, step size) as well as options for the optimization technique (nonlinear least squares/simplex, maximum number of iterations, and tolerance). With the selected solver and optimization method, the unknown independent, dependent, and/or initial state parameters in the model are determined within set ranges. For this study, nonlinear least squares regression was used with Matlab ode45, which is a Runge-Kutta (4,5) formula for non-stiff systems. The steps of nonlinear least squares regression are as follows ... [Pg.385]
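The same workflow, a variable-step ODE solver nested inside a nonlinear least-squares optimizer, can be reproduced outside Matlab. The Python sketch below uses SciPy's RK45 integrator (the analogue of ode45) inside least_squares; the two-state model, rate law, initial conditions, bounds, and "observed" data are all hypothetical stand-ins for the SSF mass balances, not the study's actual equations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy stand-in for SSF mass-balance ODEs: substrate S consumed, product P formed,
# with unknown rate parameters k1, k2.
def rhs(t, y, k1, k2):
    S, P = y
    r = k1 * S / (k2 + S)   # hypothetical saturation-type rate law
    return [-r, 0.9 * r]

def residuals(params, t_obs, P_obs):
    k1, k2 = params
    sol = solve_ivp(rhs, (0.0, t_obs[-1]), [50.0, 0.0], args=(k1, k2),
                    t_eval=t_obs, method="RK45", rtol=1e-6)  # RK45 ~ Matlab ode45
    return sol.y[1] - P_obs  # model minus observed product concentration

# Synthetic "imported" observations used purely as reference data here.
t_obs = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 24.0])
P_obs = np.array([0.0, 7.8, 14.9, 26.7, 34.6, 43.9])

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([0.0, 0.0], [10.0, 50.0]),
                    args=(t_obs, P_obs))
print(fit.x)  # estimated (k1, k2) within the set ranges
```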

Evolutionary computation is an umbrella term for a range of evolutionary optimization techniques inspired mainly by optimum-seeking mechanisms in the real world, such as natural selection and genetic inheritance. These techniques simulate evolutionary processes on a computer, iteratively improving the performance of candidate solutions until an optimal (or at least feasible) solution is obtained. [Pg.15]
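A minimal genetic-style algorithm makes this iterative-improvement loop concrete: selection, crossover, and mutation repeated over generations. Everything below (the operators, rates, and test function) is a generic illustration rather than any specific published method.

```python
import random

random.seed(0)

def evolve(fitness, bounds, pop_size=40, generations=60,
           mutation_rate=0.2, sigma=0.3):
    """Bare-bones evolutionary algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            return max(random.sample(pop, 3), key=fitness)  # natural selection
        children = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            w = random.random()
            child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]  # inheritance
            if random.random() < mutation_rate:
                i = random.randrange(2)
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, sigma)))
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Maximize a smooth 2-D function with its optimum at (1, -2).
best = evolve(lambda x: -(x[0] - 1) ** 2 - (x[1] + 2) ** 2, bounds=(-5.0, 5.0))
print(best)
```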

FIGURE 8.18 The series of MMP-12 inhibitors used by Pickett et al. [40] to demonstrate automated, iterative lead optimization techniques. Reprinted with permission from Pickett et al. [40]. Copyright 2011 American Chemical Society. [Pg.173]

Classical optimization techniques, owing to their iterative approach, do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. EAs, however, are very popular approaches for obtaining multiple solutions in a multimodal optimization task. [Pg.435]
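One common EA device for holding several optima at once is fitness sharing, which divides an individual's fitness by a crowding measure so that no single peak can absorb the whole population. The sketch below is a deliberately simple illustration; the objective (five equal peaks), sharing radius, and selection scheme are arbitrary choices.

```python
import math
import random

random.seed(2)

def f(x):
    """Multimodal objective on [0, 1]: five equal peaks at x = 0.1, 0.3, 0.5, 0.7, 0.9."""
    return math.sin(5 * math.pi * x) ** 6

def shared_fitness(pop, sigma_share=0.1):
    """Fitness sharing: penalize crowded individuals so the population
    can hold several peaks at once instead of collapsing onto one."""
    out = []
    for xi in pop:
        niche = sum(max(0.0, 1.0 - abs(xi - xj) / sigma_share) for xj in pop)
        out.append(f(xi) / niche)
    return out

pop = [random.random() for _ in range(60)]
for _ in range(200):
    fs = shared_fitness(pop)
    # roulette selection on shared fitness, then a small Gaussian mutation
    new_pop = random.choices(pop, weights=fs, k=len(pop))
    pop = [min(1.0, max(0.0, x + random.gauss(0, 0.01))) for x in new_pop]

print(sorted(set(round(x, 1) for x in pop)))  # typically several of {0.1, 0.3, 0.5, 0.7, 0.9}
```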

