
Optimization merit function

The mathematical procedure for optimizing a single merit function over many design variables involves derivatives of the merit function with respect to each of the design variables (as a generalization to multiple... [Pg.430]

Numerous demonstrations in recent years have shown that the level of performance of present-day polymer electrolyte fuel cells can compete with current energy conversion technologies in power densities and energy efficiencies. However, for large-scale commercialization in automobile and portable applications, the merit function of fuel cell systems—namely, the ratio of power density to cost—must be improved by a factor of 10 or more. Clever engineering and empirical optimization of cells and stacks alone cannot achieve such ambitious performance and cost targets. [Pg.419]

Nonlinear least-squares fitting seeks a set of optimal parameters P that make the nonlinear model Y = f(X; P) reproduce a set of reference values y as closely as possible. The overall performance of the parameters can be described by a merit function F(P), i.e., the sum of the squared deviations between the reference values and the values predicted by the model... [Pg.71]
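As a sketch, assuming the usual unweighted least-squares form (the source may include weights, which are not shown in this excerpt):

```latex
F(\mathbf{P}) = \sum_{i=1}^{m} \bigl[ y_i - f(\mathbf{x}_i;\, \mathbf{P}) \bigr]^2,
\qquad
\mathbf{P}^{*} = \arg\min_{\mathbf{P}} F(\mathbf{P})
```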

It is, therefore, reasonable to think that the problem of solving nonlinear systems can be tackled by applying a minimization program to a merit function of this kind. Unfortunately, doing so introduces multimodality: many additional local minima that do not correspond to the real solution appear, transforming the original problem of the nonlinear system into a multidimensional, multimodal optimization problem. [Pg.239]
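A one-dimensional sketch of this pitfall (the function and all names here are illustrative, not from the source): for f(x) = x³ − 2x + 2, the merit function F(x) = ½ f(x)² has a spurious local minimum near x ≈ 0.82, where f′ = 0 but f ≠ 0, alongside the true root near x ≈ −1.77.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Nonlinear equation to solve: f(x) = 0
    return x**3 - 2.0 * x + 2.0

def merit(x):
    # Least-squares merit function F(x) = 1/2 * f(x)^2
    return 0.5 * f(x[0])**2

# Started near x = 1, the minimizer is trapped in a spurious minimum,
# where grad F = f * f' vanishes because f'(x) = 0, not because f(x) = 0.
trapped = minimize(merit, x0=[1.0])
print(trapped.x, f(trapped.x[0]))   # x ~ 0.816, f ~ 0.91 (not a root)

# Started at x = -2, the same minimizer finds the true root.
solved = minimize(merit, x0=[-2.0])
print(solved.x, f(solved.x[0]))     # x ~ -1.769, f ~ 0 (a root)
```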

Once we select a merit function, it can be minimized using the unconstrained optimization methods (see Chapter 3). The gradient method is just one of the methods available. The function changes most rapidly in the direction of the gradient. With reference to the merit function (7.12), the gradient in x is given by... [Pg.244]
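The equation is cut off in the excerpt. Assuming (7.12) is the standard sum-of-squares merit function (an assumption; the numbered equation itself is not reproduced here), the gradient follows from the chain rule:

```latex
F(\mathbf{x}) = \tfrac{1}{2}\, \mathbf{f}(\mathbf{x})^{\mathsf{T}} \mathbf{f}(\mathbf{x}),
\qquad
\nabla F(\mathbf{x}) = \mathbf{J}(\mathbf{x})^{\mathsf{T}}\, \mathbf{f}(\mathbf{x})
```

where J is the Jacobian of the system f; the gradient method then searches along −∇F.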

In multidimensional optimization problems, the matrix B can be a poor approximation of the Hessian (provided it is positive definite) and still guarantee a reduction in the merit function. Conversely, the matrix B involved in the solution of nonlinear systems should be a good estimate of the Jacobian. [Pg.247]

The parameter γ guarantees a satisfactory improvement in the merit function, analogously to what was discussed for multidimensional optimization. Note that the relation (7.51) can be motivated as follows. To have a satisfactory Newton's prediction (Chapter 3)... [Pg.248]

Many different methods attempt to minimize merit functions when the Newton estimate is unsatisfactory. However, there is an additional difference between this approach and multidimensional optimization problems. [Pg.253]

In optimization problems, the Hessian is only occasionally ill-conditioned at the minimum of the function. In the solution of systems of nonlinear equations, by contrast, the Jacobian matrix may become singular exactly where the gradient of the merit function approaches zero, i.e., at the minimum of that function: since ∇F = Jᵀf, the gradient can vanish at a point where f ≠ 0 only if J is singular there. [Pg.254]

Since one-dimensional optimization is both relatively fast and efficient, it is usually adopted in general-purpose solvers. Normally, the one-dimensional search algorithm is not pushed to the extreme; its purpose is only to identify a new point from which Newton's method can easily converge. The merit function usually adopted is the weighted one (7.14). The following data are known... [Pg.254]
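A minimal sketch of this idea, using a simple backtracking search along the Newton direction (the source's weighted merit function (7.14) is not reproduced here, so the plain sum of squares is used; all names are illustrative):

```python
import numpy as np

def damped_newton_step(f, jac, x):
    """One globalized Newton step: take the full step if it reduces the
    merit function, otherwise backtrack along the Newton direction."""
    fx = f(x)
    merit0 = 0.5 * fx @ fx                 # F(x) = 1/2 ||f(x)||^2
    d = np.linalg.solve(jac(x), -fx)       # Newton direction
    t = 1.0
    for _ in range(30):                    # crude backtracking search
        x_new = x + t * d
        f_new = f(x_new)
        if 0.5 * f_new @ f_new < merit0:   # accept any decrease
            return x_new
        t *= 0.5                           # halve the step and retry
    return x                               # no decrease found: give up
```

A production solver would use a stricter sufficient-decrease condition and safeguards against singular Jacobians; the point here is only that the one-dimensional search serves Newton's method rather than replacing it.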

In the BzzConstrainedMinimization class, the new point obtained with x_{i+1} = x_i + d_i is accepted if either the merit function or the function (13.88) decreases. However, the options described in Chapter 3 for unconstrained optimization are adopted. [Pg.470]

The Maratos effect is related to the fact that only one merit function is used in a constrained optimization problem: the function must simultaneously account for... [Pg.471]

Another important feature of GAs is that they are tunable. This means that we can define the algorithm's fitness function to reflect the actual requirements on the catalyst. An optimal catalyst exhibits high activity, high stability, and high selectivity. These three figures of merit are directly related to the product yield, the turnover number (TON), and the turnover frequency (TOF), respectively. Often, however, an increase in one comes at the expense of another. Using GAs you can... [Pg.264]
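A minimal sketch of such a tunable fitness function, with entirely hypothetical weights and normalization constants (none of these names come from the source): a weighted sum lets the user trade the three figures of merit against one another.

```python
def catalyst_fitness(yield_frac, ton, tof,
                     w_yield=1.0, w_ton=1.0, w_tof=1.0):
    """Toy GA fitness: weighted sum of normalized figures of merit.
    Raising one weight biases the search toward that property,
    reflecting the trade-offs mentioned in the text."""
    return (w_yield * yield_frac          # product yield, 0..1
            + w_ton * ton / 1.0e4         # turnover number (hypothetical scale)
            + w_tof * tof / 1.0e2)        # turnover frequency (hypothetical scale)

# Example: favor stability (TON) over the other figures of merit
print(catalyst_fitness(0.85, ton=5200, tof=40.0, w_ton=2.0))
```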

There are a few points with respect to this procedure that merit discussion. First, there is the Hessian matrix. With n² elements, where n is the number of coordinates in the molecular geometry vector, it can grow somewhat expensive to construct this matrix at every step, even for functions, like those used in most force fields, that have fairly simple analytical expressions for their second derivatives. Moreover, the matrix must be inverted at every step, and matrix inversion formally scales as n³, where n is the dimensionality of the matrix. Thus, for purposes of efficiency (or in cases where analytic second derivatives are simply not available), approximate Hessian matrices are often used in the optimization process; after all, the truncation of the Taylor expansion renders the Newton-Raphson method intrinsically approximate.

As an optimization progresses, second derivatives can be estimated reasonably well from finite differences in the analytic first derivatives over the last few steps. For the first step, however, this is not an option, and one typically either accepts the cost of computing an initial Hessian analytically for the level of theory in use, or one employs a Hessian obtained at a less expensive level of theory, when such levels are available (which is typically not the case for force fields). To speed up slowly convergent optimizations, it is often helpful to compute an analytic Hessian every few steps and replace the approximate one in use up to that point. For really tricky cases (e.g., where the PES is fairly flat in many directions), one is occasionally forced to compute an analytic Hessian at every step. [Pg.45]
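One common way to build such a Hessian estimate from first derivatives gathered over successive steps is a quasi-Newton update. The text does not name a specific scheme, so the widely used BFGS formula is shown here purely as a representative sketch:

```latex
\mathbf{B}_{k+1} = \mathbf{B}_k
  - \frac{\mathbf{B}_k \mathbf{s}_k \mathbf{s}_k^{\mathsf{T}} \mathbf{B}_k}
         {\mathbf{s}_k^{\mathsf{T}} \mathbf{B}_k \mathbf{s}_k}
  + \frac{\mathbf{y}_k \mathbf{y}_k^{\mathsf{T}}}
         {\mathbf{y}_k^{\mathsf{T}} \mathbf{s}_k},
\qquad
\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k,
\quad
\mathbf{y}_k = \nabla E(\mathbf{x}_{k+1}) - \nabla E(\mathbf{x}_k)
```

where B_k is the current Hessian estimate and E is the energy; the update remains positive definite as long as y_kᵀ s_k > 0.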

Optimization problems are mathematical by nature. The first, and perhaps most difficult, step is to determine how to model the system to be optimized mathematically (for example, paint mixing, a chemical reactor, the national economy, the environment). This model consists of an objective function, constraints, and decision variables. The objective function, often called the merit or cost function, is the expression to be optimized, i.e., the performance measure. For example, in Fig. 3 the objective function would be the total cost. The constraints are equations that describe the model of the process (for example, mass balances) or inequality relationships among the variables (insulation thickness > 0 in the above example). The decision variables constitute the independent variables that can be changed to optimize the system. [Pg.134]
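As a generic illustration (standard notation, not quoted from the source), such a model has the form:

```latex
\min_{\mathbf{x}} \; f(\mathbf{x})
\quad \text{subject to} \quad
h_j(\mathbf{x}) = 0, \;\; j = 1, \dots, m,
\qquad
g_k(\mathbf{x}) \ge 0, \;\; k = 1, \dots, p
```

Here f(x) is the objective (merit or cost) function, the equalities h_j collect process-model relations such as mass balances, the inequalities g_k collect bounds such as insulation thickness > 0, and the vector x holds the decision variables.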

Roughly speaking, for a complete chromatogram, the criterion cp behaves similarly to IP. It functions as a threshold criterion with diffuse (and stepwise) boundaries, establishing areas for which adequate separation is obtained. [...] On the other hand, tp may more easily be calculated if the capacity factors and the plate number are known. Both [...] optimization processes run on the final analytical column. In the following discussions tp will not be considered as a separate criterion. Its merits correspond to those of the IP criterion. [Pg.145]

