
Minimal function value

Figure 9.4 Plots of the minimal-function value over 55 Shake-and-Bake cycles for (a) a solution and (b) a non-solution. Trial 5 illustrates the characteristic features of a solution: a small initial decrease in the minimal-function value, followed by a plateau, a sharp decrease over a few cycles, and a second plateau at a lower value. The first plateau may be shortened significantly if convergence to a solution occurs quickly.
When we have N measures for the exit variables in a process, the technical problem of identifying the unknown parameter resides in solving the equation Φ(p) = 0. From the theoretical viewpoint, all the methods recommended for the solution of a transcendental equation can be used to determine the parameter p. The majority of these methods are of iterative type and require an expression for, or an evaluation of, the derivative of Φ(p). When we evaluate the derivative numerically, as in the case of a complex process model, important deviations can be introduced into the iteration chain. Indeed, the propagation of these deviations usually results in an increasingly unrealistic value of the parameter. This problem can be avoided by solving the equation Φ(p) = 0 by integral methods such as the method of minimal function value (MFV). When Φ(p) values are obtained only in the region of influence of parameter p, the MFV method reduces to a dialogue with the mathematical model of the process, and the smallest Φ(p) value then gives the best value of the parameter. [Pg.167]
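A minimal sketch of the MFV idea in Python (all names here are illustrative assumptions, not from the source: Φ is taken as the sum-of-squares mismatch between the N measured exit values and the predictions of a hypothetical first-order process model). Rather than iterating on Φ(p) = 0 with derivatives, Φ(p) is simply evaluated over the region of influence of p and the parameter with the smallest value is retained.

import numpy as np

def model(p, t):
    # hypothetical process model: first-order response with unknown rate parameter p
    return 1.0 - np.exp(-p * t)

def phi(p, t_meas, y_meas):
    # mismatch functional built from the N measured exit values
    return np.sum((y_meas - model(p, t_meas)) ** 2)

# synthetic "measurements" standing in for the N exit-variable measures
rng = np.random.default_rng(1)
t_meas = np.linspace(0.5, 5.0, 10)
y_meas = model(0.8, t_meas) + rng.normal(0.0, 0.01, t_meas.size)

# MFV: evaluate phi over the region of influence of p and keep the smallest value
p_grid = np.linspace(0.1, 2.0, 400)
phi_vals = [phi(p, t_meas, y_meas) for p in p_grid]
p_best = p_grid[int(np.argmin(phi_vals))]
print(f"best parameter estimate: p = {p_best:.3f}")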

Fig. 7. Total energy as a function of setting angle θ. The minimal energy value corresponds to a setting angle of 42.4°.
The simple-minded approach for minimizing a function is to step one variable at a time until the function has reached a minimum, and then switch to another variable. This requires only the ability to calculate the function value for a given set of variables. However, as the variables are not independent, several cycles through the whole set are necessary for finding a minimum. This is impractical for more than 5-10 variables, and may not work anyway. Essentially all optimization methods used in computational chemistry thus assume that at least the first derivative of the function with respect to all variables, the gradient g, can be calculated analytically (i.e. directly, and not as a numerical differentiation by stepping the variables). Some methods also assume that the second derivative matrix, the Hessian H, can be calculated. [Pg.316]
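As a toy illustration of the gradient-based alternative, the sketch below (not from the source; the quadratic test function and fixed step length are arbitrary choices) takes simple steepest-descent steps using an analytically coded gradient g instead of numerical differencing.

import numpy as np

def f(x):
    # simple test function with coupled (non-independent) variables
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.5 * x[0] * x[1]

def grad(x):
    # analytic gradient, coded directly rather than obtained by stepping the variables
    return np.array([2.0 * (x[0] - 1.0) + 0.5 * x[1],
                     4.0 * (x[1] + 0.5) + 0.5 * x[0]])

x = np.array([3.0, 3.0])
step = 0.1
for _ in range(200):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:   # stationary point reached
        break
    x = x - step * g               # steepest-descent step along -g
print("minimum near", x, "f =", f(x))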

Theorem 1 (the maximum principle). Let y(P) ≢ const be a grid function defined on a connected grid ω and let both conditions (2) and (4) hold. Then the condition Cy(P) < 0 (Cy(P) > 0) on the grid ω implies that y(P) cannot attain its maximal positive (minimal negative) value at the inner nodes P ∈ ω. [Pg.260]

Banga et al. [in State of the Art in Global Optimization, C. Floudas and P. Pardalos (eds.), Kluwer, Dordrecht, p. 563 (1996)]. All these methods require only objective function values for unconstrained minimization. Associated with these methods are numerous studies on a wide range of process problems. Moreover, many of these methods include heuristics that prevent premature termination (e.g., directional flexibility in the complex search as well as random restarts and direction generation). To illustrate these methods, Fig. 3-58 shows the performance of a pattern search method and of a random search method on an unconstrained problem. [Pg.65]
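A minimal sketch of a direct search of this kind (purely illustrative: the Rosenbrock-type test function, restart count and step-shrinking rule are invented and are not those of Fig. 3-58). Only objective function values are used; trial points are accepted only if they improve the objective, and random restarts guard against premature termination.

import numpy as np

def f(x):
    # illustrative unconstrained objective
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

rng = np.random.default_rng(0)
best_x, best_f = None, np.inf
for restart in range(5):                           # random restarts
    x = rng.uniform(-2.0, 2.0, size=2)             # random starting point
    fx, scale = f(x), 0.5
    for _ in range(2000):
        trial = x + rng.normal(0.0, scale, size=2)  # random trial direction and length
        ftrial = f(trial)
        if ftrial < fx:                            # accept only improving objective values
            x, fx = trial, ftrial
        else:
            scale = max(scale * 0.999, 1e-3)       # slowly shrink the search radius
    if fx < best_f:
        best_x, best_f = x.copy(), fx
print("best point:", best_x, "objective:", best_f)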

The methods considered in the rest of this chapter are generally termed descent methods for minimization because a given step is pursued only if it yields an improved value for the objective function. First we cover methods that use function values or first or second derivatives in Section 5.3, followed by a review of several methods that use only function values in Section 5.4. [Pg.157]

Another class of methods of unidimensional minimization locates a point x̃ near x*, the value of the independent variable corresponding to the minimum of f(x), by extrapolation and interpolation using polynomial approximations as models of f(x). Both quadratic and cubic approximations have been proposed, using function values only and using both function and derivative values. For functions where f′(x) is continuous, these methods are much more efficient than other methods and are now widely used to do line searches within multivariable optimizers. [Pg.166]
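A sketch of the function-values-only variant (names, tolerances and the test function are illustrative assumptions; f is assumed unimodal on the starting bracket): successive quadratic interpolation fits a parabola through three points, moves toward its vertex, and shrinks the bracket around the lowest point found.

def parabolic_line_search(f, a, b, tol=1e-8, max_iter=100):
    # unidimensional minimization by successive quadratic interpolation,
    # assuming f is unimodal on [a, b]
    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    for _ in range(max_iter):
        num = (m - a) ** 2 * (fm - fb) - (m - b) ** 2 * (fm - fa)
        den = (m - a) * (fm - fb) - (m - b) * (fm - fa)
        if den == 0.0:
            break
        x = m - 0.5 * num / den          # vertex of the interpolating parabola
        if not (a < x < b) or abs(x - m) < tol:
            break
        fx = f(x)
        # shrink the bracket so it still contains the lowest point seen
        if fx < fm:
            if x < m:
                b, fb = m, fm
            else:
                a, fa = m, fm
            m, fm = x, fx
        else:
            if x < m:
                a, fa = x, fx
            else:
                b, fb = x, fx
    return m, fm

g = lambda t: (t - 1.7) ** 2 + 0.3 * (t - 1.7) ** 4   # illustrative 1-D function
print(parabolic_line_search(g, 0.0, 5.0))             # converges to t = 1.7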

These formulas agree with the previous results for b = 1. The minimal objective value, sometimes called the optimal value function, is... [Pg.272]

Scheel et al. exposed 75 rats to ozone at 2 ppm for 3 h and measured pulmonary function immediately after removal of the animals from the exposure chamber. Minute ventilation, tidal volume, and oxygen uptake decreased immediately after exposure and reached minimal recorded values after 8 h. At 20 h after exposure, all measurements had returned to normal. Pulmonary edema may have been responsible for the observations reported. [Pg.332]

Graph 6: Free Fe³⁺ concentration as a function of pH value and the resulting minimal pKs value of Iron Blue, depending on its stability at the corresponding pH value. pKs value according to Tananaev et al.: 40.5; according to the considerations made here: greater than 123, smaller than... [Pg.174]

Another specific VFF feature is that it relies on the transferability of the force constants from one molecule to chemically and structurally related systems. Thus a set of force constants, optimized for simpler and well studied substances, is used as a trial force field for the system under consideration. Due to the ill-conditioned nature of the IVP, special measures have to be taken in order to keep the adjustable force constants as close as possible to the initial trial set. One possible approach is to restrict them to a physically meaningful interval of, say, 10% around the starting values. Alternatively, a penalty function can be added to the minimized functional (4) [4] ... [Pg.342]
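The penalty-function idea can be sketched as follows (purely illustrative: the stand-in "frequencies" model, the observed values and the weight mu are invented, and scipy's general-purpose minimizer replaces whatever solver is used with functional (4)). The least-squares fit to the observed frequencies is augmented by a term that penalizes departures of the force constants from the transferred trial set.

import numpy as np
from scipy.optimize import minimize

def frequencies(fc):
    # hypothetical stand-in for the vibrational problem: predicted frequencies
    # as a nonlinear function of the force constants fc
    return np.array([np.sqrt(abs(fc[0])) * 100.0,
                     np.sqrt(abs(fc[0] + 2.0 * fc[1])) * 80.0])

def objective(fc, freq_obs, fc_trial, mu=10.0):
    # least-squares fit to the observed frequencies ...
    fit = np.sum((frequencies(fc) - freq_obs) ** 2)
    # ... plus a penalty keeping fc close to the transferred trial set
    penalty = mu * np.sum((fc - fc_trial) ** 2)
    return fit + penalty

fc_trial = np.array([5.0, 0.5])        # trial force constants from a related molecule
freq_obs = np.array([230.0, 200.0])    # "observed" frequencies (illustrative numbers)
res = minimize(objective, fc_trial, args=(freq_obs, fc_trial))
print("refined force constants:", res.x)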

Figure 3.92. The Stern-Volmer constant k at u_A = u_c = 1/τ as a function of the dimensionless concentration cv (v = 4πa³/3, a = 7 Å, k_a/v = 43 ns⁻¹, k_d = 5 ns⁻¹, D = 100 Å²/ns = 10⁻⁵ cm²/s): (a) the entire concentration dependence from the minimal (ideal) value K₀ up to...
In general, precautions are needed to ensure that a minimum of f(λ) has been bracketed in the trial interval. For example (see Figure 10, for the first step), if (a) the new slope is positive (g(λt) > 0) or (b) the new function value is greater than the old (f(λt) > f(0)), then there is a relative minimum along p_k between x(0) and x(λt). We can proceed to find it by minimization of a cubic (or quadratic in special cases) interpolant; however, if neither of these conditions holds, the function has decreased and continues to decrease at x(λt). [Pg.22]
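A sketch of these two bracketing tests along a search direction p, followed by a quadratic interpolant built from f(0), the directional slope g(0), and f(λt) once a minimum has been bracketed (all names are illustrative; as noted above, a cubic interpolant is preferred in the general case).

import numpy as np

def bracketed_step(f, grad, x, p, lam_t):
    # check whether a 1-D minimum along direction p is bracketed in (0, lam_t)
    f0  = f(x)
    g0  = np.dot(grad(x), p)            # directional slope at lambda = 0 (negative for descent)
    x_t = x + lam_t * p
    f_t = f(x_t)
    g_t = np.dot(grad(x_t), p)          # directional slope at lambda = lam_t
    if g_t > 0.0 or f_t > f0:
        # a relative minimum lies between lambda = 0 and lambda = lam_t:
        # minimize the quadratic interpolant q(l) = f0 + g0*l + c*l**2
        c = (f_t - f0 - g0 * lam_t) / lam_t ** 2
        if c > 0.0:
            lam_star = -g0 / (2.0 * c)
        else:
            lam_star = 0.5 * lam_t      # degenerate case; a cubic interpolant would be used here
        return min(max(lam_star, 1e-12), lam_t)
    # otherwise the function is still decreasing at lam_t; the step should be extended
    return None

f = lambda x: np.dot(x, x)              # hypothetical quadratic objective
grad = lambda x: 2.0 * x
x0 = np.array([1.0, 1.0])
p = -grad(x0)
print(bracketed_step(f, grad, x0, p, 1.0))   # returns 0.5, the exact line minimum here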

Minimization methods that incorporate only function values generally involve some systematic method to search the conformational space. In coordinate descent methods, the search directions are the standard basis vectors. A sweep through these n search vectors produces a sequential modification of one function variable at a time. Through repeated sweeping of the n-dimensional space, a local minimum might ultimately be found. Unfortunately, this strategy is inefficient and not reliable.3,4 ... [Pg.29]
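A sketch of such a coordinate-descent sweep (the step-halving search along each basis vector and the coupled quadratic test function are illustrative choices, not taken from refs. 3 and 4).

import numpy as np

def coordinate_descent(f, x0, step=0.5, sweeps=100, tol=1e-8):
    # minimize f by stepping one variable at a time along the standard basis vectors
    x = np.asarray(x0, dtype=float).copy()
    fx = f(x)
    for _ in range(sweeps):
        f_before = fx
        for i in range(x.size):                 # one sweep over the n coordinates
            h = step
            while h > tol:
                improved = False
                for sign in (+1.0, -1.0):
                    trial = x.copy()
                    trial[i] += sign * h
                    ft = f(trial)
                    if ft < fx:                 # keep moves that lower the function value
                        x, fx = trial, ft
                        improved = True
                        break
                if not improved:
                    h *= 0.5                    # no improvement: shrink the step
        if abs(f_before - fx) < tol:
            break
    return x, fx

# coupled quadratic: coordinate descent converges, but only slowly
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2 + 4.0 * x[0] * x[1]
print(coordinate_descent(f, [3.0, -2.0]))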

We assume that the function value and gradient are evaluated together in a·n operations (additions and multiplications), where n is the problem size and a is a problem-dependent number. The Hessian can then be computed in (a/2)n(n + 1) operations. When a sparse preconditioner M is used, we denote its number of nonzeros by m and the number of nonzeros in its Cholesky factor, L, by l. (We assume here that M either is positive-definite or is factored by a modified Cholesky factorization.) Thus M can be computed in about (a/2)nm operations. As discussed in the previous section, it is advantageous to reorder the variables a priori to minimize the fill-in for M. Alternatively, a precon-... [Pg.47]

Some important considerations have to be taken into account in order to use the SSM and all other gradient methods with rapid displacement towards a minimum function value efficiently [3.64]: (i) the good selection of the base point; (ii) the modification of the parameter dimensions from one step to another; (iii) the complexity of the process response surface; (iv) the number of constraints imposed on the parameters. In some cases we can couple the minimization of the function with the constraint relations in a more complex function, which is then analyzed in its place. In this case, the problem is similar to the Lagrange problem but it is much more complex. [Pg.152]
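One common way to couple the objective with the constraint relations into a single, more complex function is a quadratic penalty, sketched below (the objective, the constraint and the weight r are invented for illustration and are not the formulation of [3.64]; as r grows, the minimizer of the coupled function approaches the constrained optimum).

import numpy as np
from scipy.optimize import minimize

def objective(p):
    # illustrative objective in the parameters p
    return (p[0] - 2.0) ** 2 + (p[1] - 1.0) ** 2

def constraint(p):
    # illustrative constraint relation g(p) = 0 imposed on the parameters
    return p[0] + p[1] - 2.0

def penalized(p, r=100.0):
    # coupled function: objective plus a quadratic penalty on the constraint violation
    return objective(p) + r * constraint(p) ** 2

res = minimize(penalized, x0=np.array([0.0, 0.0]))
print("penalized minimum:", res.x)   # close to the constrained optimum (1.5, 0.5)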

In the case where α = 0, the parametric functional is equal to the stabilizing functional, for which there exists a model minimizing its value. [Pg.44]

