
Gauss-Newton iteration

Table 2.4 shows the SAS NLIN specifications and the computer output. You can choose one of four iterative methods: modified Gauss-Newton; Marquardt; gradient or steepest-descent; and multivariate secant or false position (SAS, 1985). The Gauss-Newton iterative methods regress the residuals onto the partial derivatives of the model with respect to the parameters until the iterations converge. You also have to specify the model and starting values of the parameters to be estimated. Providing the partial derivatives of the model with respect to each parameter, b, is optional. Figure 2.9 shows the reaction rate versus substrate concentration curves predicted from the Michaelis-Menten equation with parameter values obtained by four different... [Pg.26]
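The regression of the residuals onto the model's parameter partials is the heart of every Gauss-Newton step. Below is a minimal sketch for the Michaelis-Menten model; the data arrays and starting values are hypothetical, and the plain (undamped) update is used rather than SAS NLIN's modified variant.

```python
import numpy as np

# Michaelis-Menten model: v = Vmax * S / (Km + S)
def model(S, Vmax, Km):
    return Vmax * S / (Km + S)

def partials(S, Vmax, Km):
    # Partial derivatives of the model with respect to each parameter
    d_Vmax = S / (Km + S)
    d_Km = -Vmax * S / (Km + S) ** 2
    return np.column_stack([d_Vmax, d_Km])

# Hypothetical substrate concentrations and measured reaction rates
S = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
v = np.array([0.95, 1.80, 2.60, 3.40, 4.10, 4.40])

Vmax, Km = 5.0, 1.0              # user-supplied starting values
for _ in range(50):
    r = v - model(S, Vmax, Km)   # residuals at current estimates
    J = partials(S, Vmax, Km)
    # Gauss-Newton: regress the residuals onto the partials
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    Vmax, Km = Vmax + step[0], Km + step[1]
    if np.linalg.norm(step) < 1e-10:   # iterations have converged
        break

print(f"Vmax = {Vmax:.4f}, Km = {Km:.4f}")
```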

Models nonlinear in θ need careful treatment. Direct iteration with Eq. (6.3-10) often fails because of the limited range of the expansions in θ. Gauss-Newton iteration schemes with steps adjusted by line search work well [Booth, Box, Muller and Peterson (1958); Hartley (1961, 1964); Box and Kanemasu (1972, 1984); Bard (1974); Bock (1981)] when Aθ is well-conditioned and θ unrestricted, but give difficulty otherwise. [Pg.102]

In problems like this one, which are very difficult to converge, we should use Marquardt's modification first to reduce the LS objective function as much as possible. Then we can approach the global minimum more closely in a sequential way, by letting only one or two parameters vary at a time. The estimated standard errors for the parameters provide excellent information on what the next step should be. For example, if we use as an initial guess ki=10 for all the parameters and a constant Marquardt parameter γ=10, the Gauss-Newton iterates lead to k=[2.6866,... [Pg.331]

Since Q and G are known at step n, Eq. (4.17) is an equation for the m components of A. This may be solved by various iterative strategies. The simplest is Gauss-Newton iteration, which may be written... [Pg.157]

Theorem C.1 (Convergence of the Gauss-Newton iteration). [5] Assume that... [Pg.269]

To find a minimum of the functional Q(θ), the Gauss-Newton iterative method or its modifications, based on linear approximation of the regression function in the neighborhood of a point θ, are used... [Pg.197]
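The update formula itself is not reproduced in this excerpt. In standard notation (an assumed reconstruction, with F the matrix of partial derivatives of the regression function with respect to θ, evaluated at the current iterate), the linear approximation leads to:

$$
\theta^{(j+1)} = \theta^{(j)} + \left(F^{\top} F\right)^{-1} F^{\top}\!\left[\,y - f\!\left(\theta^{(j)}\right)\right],
\qquad
F_{i\ell} = \left.\frac{\partial f_i}{\partial \theta_\ell}\right|_{\theta^{(j)}}
$$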

Equations 4.14 and 4.15 are used to evaluate the model response and the sensitivity coefficients that are required for setting up matrix A and vector b at each iteration of the Gauss-Newton method. [Pg.54]
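Equations 4.14 and 4.15 are not reproduced here; a common form of these normal-equation ingredients (assumed, with G_i the sensitivity matrix and Q_i a weighting matrix at the i-th data point) is A = Σ GᵢᵀQᵢGᵢ and b = Σ GᵢᵀQᵢrᵢ, as sketched below.

```python
import numpy as np

def build_normal_equations(G_list, Q_list, r_list):
    """Assemble matrix A and vector b for one Gauss-Newton iteration.

    Assumed form (Equations 4.14/4.15 are not shown in the excerpt):
        A = sum_i  G_i^T Q_i G_i
        b = sum_i  G_i^T Q_i r_i
    G_i : sensitivity (Jacobian) matrix at the i-th data point
    Q_i : user-chosen weighting matrix
    r_i : residual vector (measured minus model response)
    """
    p = G_list[0].shape[1]                # number of parameters
    A = np.zeros((p, p))
    b = np.zeros(p)
    for G, Q, r in zip(G_list, Q_list, r_list):
        A += G.T @ Q @ G
        b += G.T @ Q @ r
    return A, b

# The parameter increment for this iteration then solves A dk = b:
# dk = np.linalg.solve(A, b)
```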

Starting with the initial guess k(0) = [1, 1, 1]^T, the Gauss-Newton method easily converged to the parameter estimates within 4 iterations, as shown in Table 4.7. In the same table the standard error (%) in the estimation of each parameter is also shown. Bard (1970) also reported the same parameter estimates [0.08241, 1.1330, 2.3437] starting from the same initial guess. [Pg.65]

Table 4.7 Parameter Estimates at Each Iteration of the Gauss-Newton Method for Numerical Example 1 with Initial Guess [1, 1, 1]
If we consider the limiting case where p=0 and q≠0, i.e., the case where there are no unknown parameters and only some of the initial states are to be estimated, the previously outlined procedure represents a quadratically convergent method for the solution of two-point boundary value problems. Obviously, in this case we need to compute only the sensitivity matrix P(t). It can be shown that under these conditions the Gauss-Newton method is a typical quadratically convergent "shooting method." As such it can be used to solve optimal control problems using the Boundary Condition Iteration approach (Kalogerakis, 1983). [Pg.96]

The 21 equations (given as Equation 6.68) should be solved simultaneously with the three state equations (Equation 6.64). Integration of these 24 equations yields x(t) and G(t) which are used in setting up matrix A and vector b at each iteration of the Gauss-Newton method. Given the complexity of the ODEs when the dimensionality of the problem increases, it is quite helpful to have a general purpose computer program that sets up the sensitivity equations automatically. [Pg.110]
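A general-purpose sketch of that idea follows: the n state equations are integrated together with the n×p sensitivity equations dG/dt = (∂f/∂x)G + ∂f/∂k with G(t0) = 0 (a standard form, assumed here since Equations 6.64 and 6.68 are not reproduced). The functions f, dfdx, and dfdk are hypothetical placeholders for the model's right-hand side and its two Jacobians; with n = 3 states and p = 7 parameters this yields the 24 simultaneous ODEs mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, z, k, n, p, f, dfdx, dfdk):
    """Right-hand side of the combined state + sensitivity system."""
    x = z[:n]                          # current state vector
    G = z[n:].reshape(n, p)            # current sensitivity matrix G(t)
    dx = f(t, x, k)                    # state equations
    dG = dfdx(t, x, k) @ G + dfdk(t, x, k)   # sensitivity equations
    return np.concatenate([dx, dG.ravel()])

def integrate_with_sensitivities(f, dfdx, dfdk, x0, k, t_span, t_eval):
    n, p = len(x0), len(k)
    z0 = np.concatenate([x0, np.zeros(n * p)])   # G(t0) = 0
    sol = solve_ivp(augmented_rhs, t_span, z0, t_eval=t_eval,
                    args=(k, n, p, f, dfdx, dfdk))
    x_t = sol.y[:n]                       # state trajectories x(t)
    G_t = sol.y[n:].reshape(n, p, -1)     # sensitivities G(t)
    return x_t, G_t                       # used to build A and b
```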

Kalogerakis and Luus (1983b) compared the computational effort required by Gauss-Newton, simplified quasilinearization and standard quasilinearization methods. They found that all methods produced the same new estimates at each iteration as expected. Furthermore, the required computational time for the Gauss-Newton and the simplified quasilinearization was the same and about 90% of that required by the standard quasilinearization method. [Pg.114]

If we have very little information about the parameters, direct search methods, like the LJ optimization technique presented in Chapter 5, present an excellent way to generate very good initial estimates for the Gauss-Newton method. Actually, for algebraic equation models, direct search methods can be used to determine the optimum parameter estimates quite efficiently. However, if estimates of the uncertainty in the parameters are required, use of the Gauss-Newton method is strongly recommended, even if it is only for a couple of iterations. [Pg.139]

Quite often the direction determined by the Gauss-Newton method, or any other gradient method for that matter, is towards the optimum; however, the length of the suggested parameter increment may be too large. As a result, the value of the objective function at the new parameter estimates could actually be higher than its value at the previous iteration. [Pg.139]

Normally, we stop the step-size determination here and proceed to perform another iteration of the Gauss-Newton method. This is what has been implemented in the computer programs accompanying this book. [Pg.140]
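A minimal sketch of such a step-size safeguard (function and variable names are hypothetical, not from the accompanying programs): halve the step-size until the objective function actually decreases.

```python
def damped_update(k, dk, objective, mu=1.0, max_halvings=20):
    """Reduce the step-size mu until the objective function decreases.

    k         : current parameter estimates (array-like)
    dk        : full Gauss-Newton increment for this iteration
    objective : callable returning the LS objective at given parameters
    """
    S_old = objective(k)
    for _ in range(max_halvings):
        k_new = k + mu * dk
        if objective(k_new) < S_old:   # accept the shortened step
            return k_new, mu
        mu *= 0.5                      # step too long: halve and retry
    return k, 0.0                      # no acceptable step found
```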

The above expression for the optimal step-size is used in the calculation of the next estimate of the parameters to be used in the next iteration of the Gauss-Newton method,... [Pg.141]

In order to improve the convergence characteristics and robustness of the Gauss-Newton method, Levenberg in 1944 and later Marquardt (1963) proposed to modify the normal equations by adding a small positive number, γ², to the diagonal elements of A. Namely, at each iteration the increment in the parameter vector is obtained by solving the following equation
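The equation itself is not reproduced in this excerpt; its standard statement (assumed) is (A + γ²I)Δk = b, as in the sketch below.

```python
import numpy as np

def marquardt_step(A, b, gamma):
    """Levenberg-Marquardt increment: solve (A + gamma^2 I) dk = b.

    A, b  : normal-equations matrix and vector at the current iteration
    gamma : Marquardt parameter; gamma -> 0 recovers pure Gauss-Newton,
            while a large gamma turns the step toward steepest descent.
    """
    n = A.shape[0]
    return np.linalg.solve(A + gamma**2 * np.eye(n), b)
```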

The required modifications to the Gauss-Newton algorithm presented in Chapter 4 are rather minimal. At each iteration, we just need to add the following terms to matrix A and vector b,... [Pg.146]

We strongly suggest the use of the reduced sensitivity whenever we are dealing with differential equation models. Even if the system of differential equations is non-stiff at the optimum (when k=k*), the equations may become stiff temporarily for a few iterations of the Gauss-Newton method when the parameters are far from their optimal values. Furthermore, since this transformation also results in better conditioning of the normal equations, we propose its use at all times. This transformation has been implemented in the program for ODE systems provided with this book. [Pg.149]

The above unconstrained estimation problem can be solved by a small modification of the Gauss-Newton method. Let us assume that we have an estimate k^(j) of the parameters at the j-th iteration. Linearization of the model equation and the constraint around k^(j) yields,... [Pg.159]
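The linearized equations are not reproduced in the excerpt. For an equality constraint written generically as φ(k) = 0 (the symbol φ is an assumption, not from the source), the first-order expansion around k^(j) would read:

$$
\varphi\!\left(\mathbf{k}^{(j)}\right)
+ \left(\frac{\partial \varphi}{\partial \mathbf{k}}\right)^{\!\top}_{\mathbf{k}^{(j)}}
\Delta\mathbf{k}^{(j+1)} \approx 0,
$$

which is solved simultaneously with the linearized model equations for the increment Δk^(j+1).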

The user-supplied weighting constant (>0) should have a large value during the early iterations of the Gauss-Newton method, when the parameters are far from their optimal values. As the parameters approach the optimum, it should be reduced so that the contribution of the penalty function becomes essentially negligible (and hence no bias is introduced in the parameter estimates). [Pg.164]

If we are certain that the optimum parameter estimates lie well within the constraint boundaries, the simplest way to ensure that the parameters stay within those boundaries is the bisection rule. Namely, during each iteration of the Gauss-Newton method, if any one of the new parameter estimates lies beyond its boundaries, the increment vector Δk^(j+1) is halved until all the parameter constraints are satisfied. Once the constraints are satisfied, we proceed with the determination of the step-size that will yield a reduction in the objective function, as already discussed in Chapters 4 and 6. [Pg.165]
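A short sketch of the bisection rule (names hypothetical): keep halving the increment vector until every new estimate falls within its bounds, then hand the result to the step-size determination.

```python
import numpy as np

def bisect_into_bounds(k, dk, k_min, k_max, max_halvings=50):
    """Halve the increment dk until k + dk lies within [k_min, k_max]."""
    for _ in range(max_halvings):
        k_new = k + dk
        if np.all((k_new >= k_min) & (k_new <= k_max)):
            return k_new, dk          # all parameter constraints satisfied
        dk = dk / 2.0                 # halve the whole increment vector
    raise RuntimeError("could not bring the step inside the bounds")
```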

