
Gauss-Newton optimization

The other regressed properties for the method are found in Tables 5.7 and 5.8. The parameters listed in this monograph are the result of a multivariate Gauss-Newton optimization by Ballard (2002), to which the reader should refer if a more detailed explanation of the method and fitted parameters is required. [Pg.285]

As seen in Chapter 2, a suitable measure of the discrepancy between a model and a set of data is the objective function, S(k); hence, the parameter values are obtained by minimizing this function. The estimation of the parameters can therefore be viewed as an optimization problem to which any of the available general-purpose optimization methods can be applied. In particular, the Gauss-Newton method has been found to be the most efficient method for estimating parameters in nonlinear models (Bard, 1970). As we strongly believe that this is indeed the best method for nonlinear regression problems, the Gauss-Newton method is presented in detail in this chapter. It is assumed that the parameters are free to take any values. [Pg.49]

Minimization of S(k) can be accomplished using almost any technique available from optimization theory. Next we shall present the Gauss-Newton method, as we have found it to be the best one overall (Bard, 1970). [Pg.50]
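To make the iteration concrete, here is a minimal Gauss-Newton sketch for an unweighted least-squares objective S(k) = e'e with e = y - f(x, k). The names f, jac, x, y and the convergence test are illustrative assumptions, not code from the book, which additionally handles weighting matrices and Marquardt's modification.

```python
import numpy as np

def gauss_newton(f, jac, x, y, k0, tol=1e-8, max_iter=50):
    """Minimal Gauss-Newton sketch for S(k) = e'e, e = y - f(x, k).

    f(x, k)  : model predictions, a length-n vector
    jac(x, k): Jacobian of f with respect to k, an n-by-p matrix
    """
    k = np.asarray(k0, dtype=float)
    for _ in range(max_iter):
        e = y - f(x, k)                          # residual vector
        J = jac(x, k)                            # sensitivity matrix
        dk = np.linalg.solve(J.T @ J, J.T @ e)   # normal equations
        k = k + dk                               # full step (mu = 1)
        if np.linalg.norm(dk) <= tol * (1.0 + np.linalg.norm(k)):
            break
    return k

# Usage sketch: fit the hypothetical model y = k1 * exp(-k2 * x)
f = lambda x, k: k[0] * np.exp(-k[1] * x)
jac = lambda x, k: np.column_stack([np.exp(-k[1] * x),
                                    -k[0] * x * np.exp(-k[1] * x)])
```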

More elaborate techniques for obtaining optimal or near-optimal stepping parameter values have been published in the literature. Essentially, one performs a univariate search to determine the minimum value of the objective function along the direction Δk chosen by the Gauss-Newton method. [Pg.52]
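As one concrete form of such a univariate search, a golden-section search over the step-size mu on [0, 1] along Δk is sketched below. The bracket and tolerance are illustrative assumptions, and a production version would cache one interior evaluation per iteration instead of recomputing both.

```python
import math

def golden_section(S, k, dk, a=0.0, b=1.0, tol=1e-4):
    """Univariate search for the mu minimizing S(k + mu*dk) on [a, b].
    Assumes k and dk are numpy arrays and S returns a scalar."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0       # golden-ratio factor, ~0.618
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    while (b - a) > tol:
        if S(k + c * dk) < S(k + d * dk):    # minimum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)
```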

If we consider the limiting case where p = 0 and q ≠ 0, i.e., the case where there are no unknown parameters and only some of the initial states are to be estimated, the previously outlined procedure represents a quadratically convergent method for the solution of two-point boundary value problems. Obviously, in this case we need to compute only the sensitivity matrix P(t). It can be shown that under these conditions the Gauss-Newton method is a typical quadratically convergent "shooting method." As such, it can be used to solve optimal control problems using the Boundary Condition Iteration approach (Kalogerakis, 1983). [Pg.96]
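As a hedged illustration of this limiting case, the sketch below performs one shooting/Newton iteration for a two-point boundary value problem: the state is integrated together with the sensitivity matrix P(t) = dx(t)/dx(t0), which satisfies dP/dt = (df/dx) P with P(t0) = I, and the unknown initial state is then corrected. The function names and the use of scipy are illustrative assumptions, not the book's program.

```python
import numpy as np
from scipy.integrate import solve_ivp

def shooting_step(f, dfdx, x0, t_span, xf_target):
    """One quadratically convergent shooting iteration for the BVP
    dx/dt = f(t, x), x(t0) = x0 (unknown), x(tf) = xf_target.
    f returns a length-n vector; dfdx returns the n-by-n Jacobian df/dx."""
    n = len(x0)

    def augmented(t, z):
        x, P = z[:n], z[n:].reshape(n, n)
        return np.concatenate([f(t, x), (dfdx(t, x) @ P).ravel()])

    z0 = np.concatenate([np.asarray(x0, float), np.eye(n).ravel()])
    sol = solve_ivp(augmented, t_span, z0, rtol=1e-8, atol=1e-10)
    xf = sol.y[:n, -1]
    Pf = sol.y[n:, -1].reshape(n, n)                 # sensitivity matrix P(tf)
    return x0 + np.linalg.solve(Pf, xf_target - xf)  # Newton correction of x0
```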

If we have very little information about the parameters, direct search methods, like the LJ optimization technique presented in Chapter 5, present an excellent way to generate very good initial estimates for the Gauss-Newton method. Actually, for algebraic equation models, direct search methods can be used to determine the optimum parameter estimates quite efficiently. However, if estimates of the uncertainty in the parameters are required, use of the Gauss-Newton method is strongly recommended, even if it is only for a couple of iterations. [Pg.139]
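For reference, here is a bare-bones sketch of the LJ (Luus-Jaakola) random search with region contraction; the trial counts and contraction factor are illustrative defaults, not the values given in Chapter 5.

```python
import numpy as np

def luus_jaakola(S, k0, r0, n_passes=25, n_trials=100, contraction=0.95,
                 rng=None):
    """LJ direct search: random trials in a box around the incumbent,
    shrinking the box by `contraction` after each pass."""
    rng = np.random.default_rng() if rng is None else rng
    k_best = np.asarray(k0, dtype=float)
    r = np.asarray(r0, dtype=float)        # initial search-region half-widths
    S_best = S(k_best)
    for _ in range(n_passes):
        for _ in range(n_trials):
            trial = k_best + r * rng.uniform(-0.5, 0.5, size=k_best.size)
            S_trial = S(trial)
            if S_trial < S_best:
                k_best, S_best = trial, S_trial
        r = contraction * r                # contract the search region
    return k_best
```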

Once an acceptable value for the step-size has been determined, we can continue: with only one additional evaluation of the objective function, we can obtain the optimal step-size to be used along the direction suggested by the Gauss-Newton method. [Pg.140]

The above expression for the optimal step-size is used in the calculation of the next estimate of the parameters, employed in the next iteration of the Gauss-Newton method. [Pg.141]
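One common way to realize this with a single extra evaluation is quadratic interpolation: fit S(mu) ≈ a*mu^2 + b*mu + S(0) through the known value S(0), the known slope dS/dmu at mu = 0, and one evaluation at the accepted trial step, then take the minimizer of the quadratic. This is a standard construction offered as a sketch; it is not claimed to be the exact expression in the text.

```python
def optimal_step(S, k, dk, mu_trial, slope0):
    """Quadratic-interpolation estimate of the optimal step size.

    slope0 = dS/dmu at mu = 0 along dk (negative for a descent direction).
    Uses exactly one additional objective evaluation, at mu_trial;
    S(k) is assumed already available from the current iteration.
    """
    S0 = S(k)
    S1 = S(k + mu_trial * dk)            # the single additional evaluation
    a = (S1 - S0 - slope0 * mu_trial) / mu_trial**2
    if a <= 0.0:                         # quadratic has no interior minimum
        return mu_trial                  # fall back to the trial step
    return -slope0 / (2.0 * a)           # minimizer of a*mu^2 + b*mu + c
```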

We strongly suggest the use of the reduced sensitivity whenever we are dealing with differential equation models. Even if the system of differential equations is non-stiff at the optimum (when k = k*), the equations may become stiff temporarily for a few iterations of the Gauss-Newton method while the parameters are far from their optimal values. Furthermore, since this transformation also results in better conditioning of the normal equations, we propose its use at all times. This transformation has been implemented in the program for ODE systems provided with this book. [Pg.149]
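The exact transformation is defined in the book; purely as an illustrative sketch, a closely related and widely used scaling multiplies each column of the sensitivity matrix by its parameter, i.e., works with derivatives with respect to ln(k_j), which tends to equilibrate column magnitudes and improve the conditioning of J'J.

```python
import numpy as np

def scaled_jacobian(J, k):
    """Column-scale the sensitivity matrix by the current parameters.
    Equivalent to differentiating with respect to ln(k_j); the computed
    step must be mapped back via dk_j = k_j * d(ln k_j)."""
    return J * np.asarray(k, dtype=float)[None, :]
```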

In this section we first present an efficient step-size policy for differential equation systems, and then two approaches to increase the region of convergence of the Gauss-Newton method: one through the use of the Information Index, and the other through a two-step procedure that involves direct search optimization. [Pg.150]

A simple procedure to overcome the problem of the small region of convergence is a two-step approach, whereby direct search optimization is used initially to bring the parameters into the vicinity of the optimum, followed by the Gauss-Newton method to obtain the best parameter values and estimates of the uncertainty in the parameters (Kalogerakis and Luus, 1982). [Pg.155]
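Using the luus_jaakola and gauss_newton sketches above (with the same assumed names), the two-step procedure reads:

```python
def two_step_estimate(S, f, jac, x, y, k0, r0):
    """Two-step sketch: LJ direct search to enter the region of convergence,
    then Gauss-Newton for the final parameter values (and, via J'J at the
    optimum, the parameter uncertainty estimates)."""
    k_near = luus_jaakola(S, k0, r0)           # step 1: direct search
    return gauss_newton(f, jac, x, y, k_near)  # step 2: Gauss-Newton refinement
```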

Figure 8.4: Use of the LJ optimization procedure to bring the first parameter estimates inside the region of convergence of the Gauss-Newton method (denoted by the solid line). All test points are denoted by +; the actual path of some typical runs is shown by the dotted line.
The user-supplied weighting constant (>0) should have a large value during the early iterations of the Gauss-Newton method, when the parameters are far from their optimal values. As the parameters approach the optimum, the weighting constant should be reduced so that the contribution of the penalty function becomes essentially negligible (and hence no bias is introduced in the parameter estimates). [Pg.164]
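A hedged sketch of the idea: augment the objective with the penalty scaled by a weight (called zeta below purely as a placeholder for the symbol used in the text) and shrink that weight as the iterations proceed. The starting value and decay factor are illustrative assumptions.

```python
def penalized(S, penalty, zeta):
    """Augmented objective with user-supplied weight zeta > 0."""
    return lambda k: S(k) + zeta * penalty(k)

def penalty_schedule(zeta0=1.0e3, decay=0.5, n_iter=20):
    """Yield a decreasing penalty weight for successive Gauss-Newton
    iterations, so the penalty is dominant early and negligible near
    the optimum (introducing no bias in the estimates)."""
    zeta = zeta0
    for _ in range(n_iter):
        yield zeta
        zeta *= decay
```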

Indeed, using the Gauss-Newton method with an initial estimate of k(0) = (450, 7), convergence to the optimum was achieved in three iterations with no need to employ Marquardt's modification. The optimal parameter estimates are k1 = 420.2 (±8.68%) and k2 = 5.705 (±24.58%). It should be noted, however, that this type of model can often lead to ill-conditioned estimation problems if the data have not been collected at both low and high values of the independent variable. The convergence to the optimum starting with the initial guess k(0) = (1, 1) is shown in Table 17.5. [Pg.326]

In Chapter 4 the Gauss-Newton method for systems described by algebraic equations is developed. The method is illustrated by examples with actual data from the literature. Other methods (indirect, such as Newton, Quasi-Newton, etc., and direct, such as the Luus-Jaakola optimization procedure) are presented in Chapter 5. [Pg.447]

