
Direct Optimization Algorithms

The pattern searches implement the method of Hooke and Jeeves using a line search and the method of Rosenbrock [6]. The Hooke and Jeeves algorithm that was used in this work can be thought of as two separate phases: an exploratory search and a pattern search. The exploratory search successively perturbs each parameter and tests the resulting performance. The pattern search steps along the (x_{k+1} - x_k) direction, which is the direction between the last two points selected by the exploratory search. When neither positive nor negative perturbations of the parameters result in enhanced performance, the perturbation size is decreased. When the perturbation size is less than an arbitrary termination factor, ε, the algorithm stops. [Pg.197]
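
A minimal Python sketch of this two-phase search may make the flow concrete. The function and parameter names (hooke_jeeves, step, shrink, eps) are illustrative assumptions, and a simple quadratic stands in for the measured performance:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, eps=1e-6):
    """Sketch of the Hooke and Jeeves direct search (minimization).

    f      : objective (performance measure) to minimize
    x0     : starting parameter vector
    step   : initial perturbation size for the exploratory search
    shrink : factor applied to the perturbation size on failure
    eps    : termination factor; stop when step < eps
    """
    x_base = np.asarray(x0, dtype=float)
    f_base = f(x_base)

    def explore(x, fx, h):
        # Exploratory phase: perturb each parameter by +h and -h,
        # keeping any perturbation that improves the objective.
        x = x.copy()
        for i in range(len(x)):
            for d in (+h, -h):
                x_try = x.copy()
                x_try[i] += d
                f_try = f(x_try)
                if f_try < fx:
                    x, fx = x_try, f_try
                    break
        return x, fx

    while step > eps:
        x_new, f_new = explore(x_base, f_base, step)
        if f_new < f_base:
            # Pattern phase: step along (x_{k+1} - x_k), the direction
            # between the last two points, then re-explore from there.
            x_pat = x_new + (x_new - x_base)
            x_base, f_base = x_new, f_new
            x_try, f_try = explore(x_pat, f(x_pat), step)
            if f_try < f_base:
                x_base, f_base = x_try, f_try
        else:
            # No perturbation of either sign helped: shrink the step.
            step *= shrink
    return x_base, f_base

# Example: minimize a simple quadratic starting from (3, -2).
x_opt, f_opt = hooke_jeeves(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                            [3.0, -2.0])
```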

The fixed step-size parameter, μ, used in the TAG algorithm was also calculated during the tuning phase. From the theory of the LMS algorithm it can be shown that for stability the step size must satisfy the condition 0 < μ < 2/λ_max, where λ_max is the largest eigenvalue of the autocorrelation matrix of the filter input [7]. [Pg.198]

In practice, the convergence parameter was chosen to be 25% of the upper bound. The above derivation assumes the use of a transversal filter and so is applicable to the adaptation of the finite impulse response (FIR) filter used in this work. [Pg.198]
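
As an illustration of how such a bound is used in practice, the sketch below estimates the input autocorrelation matrix from data, sets μ to 25% of the 2/λ_max upper bound, and adapts a two-weight FIR filter with the standard LMS update. The signals and names are stand-ins for illustration, not the actual combustor data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reference and desired signals (stand-ins for the
# measured signals in the experiment).
x = rng.standard_normal(10_000)
d = 0.7 * x + 0.3 * np.roll(x, 1)

# Estimate R = E[x_n x_n^T] from tap-delay vectors [x_n, x_{n-1}] and
# take mu as 25% of the stability upper bound 2 / lambda_max.
X = np.stack([x[1:], x[:-1]], axis=1)
R = X.T @ X / len(X)
lam_max = np.linalg.eigvalsh(R).max()
mu = 0.25 * (2.0 / lam_max)

# Standard LMS update for a two-weight FIR filter: w <- w + mu * e * x_n
w = np.zeros(2)
for n in range(1, len(x)):
    x_n = np.array([x[n], x[n - 1]])
    e_n = d[n] - w @ x_n
    w += mu * e_n * x_n

print(mu, w)   # w should approach [0.7, 0.3]
```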

Each of these algorithms was used to adaptively update a two-weight FIR filter and stabilize a Rijke-tube combustor through acoustic actuation. Both the gradient descent and pattern search methods proved quite effective and produced 40 to 50 dB of attenuation of the instability peak. For example, Fig. 18.6 shows the power spectral density of the uncontrolled tube and the system controlled with the Hooke and Jeeves algorithm. The steady-state results of all the... [Pg.198]


In the following sections, we will discuss a range of different MPO approaches that have been applied to lead optimization, ranging from simple rule-based approaches to sophisticated search or directed optimization algorithms [4]. For each, we will discuss the relative pros and cons and give illustrative examples of their application. [Pg.158]

Dirac equation: one-electron relativistic quantum mechanics formulation
direct integral evaluation: algorithm that recomputes integrals when needed
distance geometry: an optimization algorithm in which some distances are held fixed... [Pg.362]

There are two basic types of unconstrained optimization algorithms: (1) those requiring function derivatives and (2) those that do not. The nonderivative methods are of interest in optimization applications because these methods can be readily adapted to the case in which experiments are carried out directly on the process. In such cases, an actual process measurement (such as yield) can be the objective function, and no mathematical model for the process is required. Methods that do not require derivatives are called direct methods and include the sequential simplex (Nelder-Mead) and Powell's method. The sequential simplex method is quite satisfactory for optimization with two or three independent variables, is simple to understand, and is fairly easy to execute. Powell's method is more efficient than the simplex method and is based on the concept of conjugate search directions. [Pg.744]
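
Both direct methods are available off the shelf. The sketch below runs them via scipy.optimize.minimize on a hypothetical "measured" objective, a placeholder for an actual process measurement such as yield (here negated, since minimize minimizes):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for a direct process measurement as a function
# of two operating variables; neither method needs derivatives.
def measured_cost(x):
    return (x[0] - 2.0)**2 + 2.0 * (x[1] - 1.0)**2

x0 = np.array([0.0, 0.0])
res_nm = minimize(measured_cost, x0, method="Nelder-Mead")
res_pw = minimize(measured_cost, x0, method="Powell")

print(res_nm.x, res_nm.nfev)   # simplex: robust, more function calls
print(res_pw.x, res_pw.nfev)   # conjugate directions: usually fewer calls
```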

Owing to the constraints, no direct solution exists and we must use iterative methods to obtain the solution. It is possible to use bound-constrained versions of optimization algorithms such as conjugate gradients or limited-memory variable metric methods (Schwartz and Polak, 1997; Thiebaut, 2002), but multiplicative methods have also been derived to enforce non-negativity and deserve particular mention because they are widely used: RLA (Richardson, 1972; Lucy, 1974) for Poissonian noise and ISRA (Daube-Witherspoon and Muehllehner, 1986) for Gaussian noise. [Pg.405]
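
A rough sketch of why these multiplicative updates enforce non-negativity: both rules rescale the current (positive) estimate by a ratio of non-negative quantities, so the iterates never change sign. The implementations below assume a non-negative model matrix H and data y, and the function names simply follow the acronyms above:

```python
import numpy as np

def richardson_lucy(H, y, n_iter=200):
    """RLA multiplicative update for y ≈ H x under Poissonian noise;
    x stays non-negative by construction (H, y assumed non-negative)."""
    x = np.ones(H.shape[1])            # strictly positive start
    norm = H.T @ np.ones_like(y)       # H^T 1 normalization term
    for _ in range(n_iter):
        x *= (H.T @ (y / (H @ x))) / norm
    return x

def isra(H, y, n_iter=200):
    """ISRA multiplicative update for Gaussian noise; fixed points
    satisfy the normal equations H^T H x = H^T y."""
    x = np.ones(H.shape[1])
    Hty = H.T @ y
    for _ in range(n_iter):
        x *= Hty / (H.T @ (H @ x))
    return x

# Tiny usage example with an assumed non-negative blur matrix:
H = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
y = H @ np.array([1.0, 0.0, 2.0])
print(richardson_lucy(H, y), isra(H, y))
```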

The direct optimization of a single response formulation modelled by either a normal or pseudocomponent equation is accomplished by the incorporation of the component constraints in the Complex algorithm. Multiresponse optimization to achieve a "balanced" set of property values is possible by the combination of response desirability factors and the Complex algorithm. Examples from the literature are analyzed to demonstrate the utility of these techniques. [Pg.58]
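
A sketch of the desirability combination (the Complex algorithm itself is not shown): each response is mapped to a dimensionless desirability in [0, 1], and the responses are "balanced" by taking the geometric mean, which a direct search can then maximize subject to the component constraints. The one-sided piecewise-linear form, names, and numbers below are illustrative assumptions:

```python
import numpy as np

def desirability(y, lo, hi, s=1.0):
    """One-sided desirability: 0 at/below lo, 1 at/above hi."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

def overall_desirability(responses, specs):
    """Geometric mean of the individual desirabilities; any response
    with d = 0 drives the overall score to 0, enforcing balance."""
    ds = [desirability(y, lo, hi) for y, (lo, hi) in zip(responses, specs)]
    return float(np.prod(ds)) ** (1.0 / len(ds))

# Hypothetical two-response formulation example, each property with
# its own acceptable range:
D = overall_desirability([52.0, 480.0], [(40.0, 60.0), (400.0, 600.0)])
```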

Efforts directed toward the evaluation and modification of optimization algorithms for mixtures have resulted in the... [Pg.62]

Fig. 2.16 An efficient optimization algorithm knows approximately in which direction to move and how far to step, in an attempt to reach the optimized structure in relatively few (commonly about five to ten) steps.
The data shown in this section demonstrate that the simultaneous optimization of the solute geometry and the solvent polarization is possible and it provides the same results as the normal approach. In the case of CPCM it already performs better than the normal scheme, even with a simple optimization algorithm, and it will probably be the best choice when large molecules are studied (when the PCM matrices cannot be kept in memory). This functional can thus be directly used to perform MD simulations in solution without considering explicit solvent molecules but still taking into account the dynamics of the solvent. On the other hand, the DPCM functional presents numerical difficulties that must be studied and overcome in order to allow its use for dynamic simulations in solution. [Pg.77]

A. Famulari, E. Gianinetti, M. Raimondi, M. Sironi, Int. J. Quant. Chem. 69, 151 (1998). Implementation of Gradient-Optimization Algorithms and Force Constant Computations in BSSE-Free Direct and Conventional SCF Approaches. [Pg.261]

In the next two sections we briefly detail the problem. Sections 4 and 5 describe two aspects of optimization algorithms, the line search and the search direction. These sections are somewhat lengthy as befits "art," for there is still a good deal of artistry in optimizing functions that do not explicitly show dependence on their independent variables. In section 4 we give an overview of the various suggested procedures in the context of a normal mode analysis where the "physics" of the situation is somewhat more transparent. [Pg.242]

The unavoidable necessity of using computer-based core models introduces the further disadvantage that, in general, the derivative information required by most sophisticated optimization algorithms is not directly available. Evaluating these derivatives by finite-difference techniques can be a very lengthy process when there are many control variables (as there are in realistic reload core problems) and is not always reliable in the face of highly nonlinear functions. [Pg.206]
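
The cost argument is easy to see in a forward-difference sketch: each gradient estimate needs one extra run of the (expensive) core model per control variable. The names below are illustrative:

```python
import numpy as np

def forward_diff_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate. Each component costs one
    extra evaluation of f, so the total cost grows linearly with the
    number of control variables, and the estimate can be unreliable
    for highly nonlinear f or a badly chosen step h."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        x_h = x.copy()
        x_h[i] += h
        g[i] = (f(x_h) - fx) / h
    return g

# n control variables -> n + 1 model runs per gradient evaluation.
g = forward_diff_gradient(lambda v: float(np.sum(v**2)), np.ones(5))
```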

Enzyme activity is assumed to decay exponentially over the experiment. Fast controller response in both directions can be observed. Compared with the uncontrolled case, the controller maintains the product purity and compensates for the drift in the enzyme activity. The evolution of the results of the optimization algorithm during each cycle is plotted as a dashed line, shifted by one cycle to the right in order to visualize the convergence. This shows that a feasible solution is found rapidly and that the controller can be implemented under real-time conditions. In this example, the control horizon was set to two cycles and the prediction horizon was set to ten cycles. A diagonal matrix, 0.02 · I(3,3), was chosen for regularization. [Pg.411]
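
For orientation, a sketch of the general shape such a regularized cycle-to-cycle cost might take, using the horizons and the 0.02 · I(3,3) weight quoted above. The tracking term, the three-input assumption, and all names are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

Hp, Hc = 10, 2           # prediction and control horizons, in cycles
R = 0.02 * np.eye(3)     # diagonal regularization matrix 0.02 * I(3,3)

def cycle_cost(purity_error, moves):
    """purity_error: predicted purity deviations over the Hp cycles;
    moves: sequence of input moves (3-vectors), penalized over Hc cycles."""
    tracking = float(np.sum(np.asarray(purity_error) ** 2))
    penalty = sum(float(du @ R @ du) for du in moves[:Hc])
    return tracking + penalty
```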

