
The Gauss-Newton Method

Once again, we restate that in the least squares method our objective is to find the vector of parameters b that minimizes the sum of squared residuals, Φ. Thus, the vector b may be found by taking the partial derivative of Φ with respect to b and setting it to zero. [Pg.490]

Because Y is nonlinear with respect to the parameters, Eq. (7.165) will yield a nonlinear equation that would be difficult to solve for b. This problem was alleviated by Gauss, who determined that fitting nonlinear functions by least squares can be achieved by an iterative method involving a series of linear approximations. At each stage of the iteration, linear least squares theory can be used to obtain the next approximation. [Pg.490]

This method, known as the Gauss-Newton method, converts the nonlinear problem into a linear one by approximating the function Y by a Taylor series expansion around an estimated value of the parameter vector b. [Pg.490]
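
The expansion itself is not reproduced in this excerpt. In the notation used here, the first-order Taylor expansion of the model output in the parameters presumably reads:

```latex
Y(x, \mathbf{b} + \Delta\mathbf{b}) \;\approx\; Y(x, \mathbf{b})
  + \sum_{j=1}^{p} \left. \frac{\partial Y}{\partial b_j} \right|_{\mathbf{b}} \Delta b_j
```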

Taking the partial derivative of Φ with respect to Δb, setting it equal to zero, and solving for Δb, we obtain [Pg.491]
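
The resulting expression is also missing from this excerpt. A standard reconstruction, writing J for the matrix of sensitivity coefficients with elements J_ij = ∂Y(x_i, b)/∂b_j, is:

```latex
\Delta\mathbf{b} \;=\; \left( J^{T} J \right)^{-1} J^{T}
  \left[ \mathbf{y} - \mathbf{Y}(\mathbf{b}) \right]
```

Adding Δb to the current estimate b gives the next linear approximation, and the procedure is repeated until convergence.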

Minimization of S(k) can be accomplished by using almost any technique available from optimization theory. Next we shall present the Gauss-Newton method as we have found it to be overall the best one (Bard, 1970). [Pg.50]

Let us assume that an estimate k(j) is available at the j-th iteration. We shall try to obtain a better estimate, k(j+1). Linearization of the model equations around k(j) yields, [Pg.50]

Neglecting all higher order terms (H.O.T.), the model output at k(j+1) can be approximated by [Pg.51]

Solution of the above equation using any standard linear equation solver yields Δk(j+1). The next estimate of the parameter vector, k(j+1), is obtained as [Pg.51]
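
The iteration just described can be condensed into a short sketch. This is a minimal, unweighted implementation in Python/NumPy; the names (gauss_newton, f, jac) are illustrative and not from the original text:

```python
import numpy as np

def gauss_newton(f, jac, y, k0, tol=1e-8, max_iter=50):
    """Minimal Gauss-Newton sketch (unweighted least squares).
    f(k)   -> model predictions at the current parameters, shape (N,)
    jac(k) -> sensitivity matrix G = df/dk, shape (N, p)
    y      -> measurements, shape (N,)"""
    k = np.asarray(k0, dtype=float)
    for _ in range(max_iter):
        r = y - f(k)                               # residuals at k(j)
        G = jac(k)                                 # linearization around k(j)
        dk = np.linalg.solve(G.T @ G, G.T @ r)     # normal equations
        k = k + dk                                 # k(j+1) = k(j) + dk
        if np.linalg.norm(dk) <= tol * (1.0 + np.linalg.norm(k)):
            break
    return k

# Example: fit y = exp(-k1*t) + k2 to synthetic (noiseless) data
t = np.linspace(0.0, 4.0, 20)
y_obs = np.exp(-0.7 * t) + 0.2
f = lambda k: np.exp(-k[0] * t) + k[1]
jac = lambda k: np.column_stack([-t * np.exp(-k[0] * t), np.ones_like(t)])
print(gauss_newton(f, jac, y_obs, [1.0, 0.0]))     # ~ [0.7, 0.2]
```

For well-posed problems near the optimum this converges rapidly; safeguards such as the step-size rules discussed later are needed in practice.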


Kinetic curves were analyzed, and the following correlations were determined, with a nonlinear least-squares PC program based on the Gauss-Newton method. [Pg.265]

As seen in Chapter 2, a suitable measure of the discrepancy between a model and a set of data is the objective function, S(k), and hence the parameter values are obtained by minimizing this function. Therefore, the estimation of the parameters can be viewed as an optimization problem whereby any of the available general purpose optimization methods can be utilized. In particular, it was found that the Gauss-Newton method is the most efficient method for estimating parameters in nonlinear models (Bard, 1970). As we strongly believe that this is indeed the best method to use for nonlinear regression problems, the Gauss-Newton method is presented in detail in this chapter. It is assumed that the parameters are free to take any values. [Pg.49]

In this chapter we are focusing on a particular technique, the Gauss-Newton method, for the estimation of the unknown parameters that appear in a model described by a set of algebraic equations. Namely, it is assumed that both the structure of the mathematical model and the objective function to be minimized are known. In mathematical terms, we are given the model... [Pg.49]

More elaborate techniques have been published in the literature to obtain optimal or near-optimal values of the stepping parameter. Essentially, one performs a univariate search to determine the minimum value of the objective function along the direction (Δk) chosen by the Gauss-Newton method. [Pg.52]
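
One common way to carry out such a univariate search is golden-section bracketing of the step-size μ along the fixed Gauss-Newton direction. The excerpt does not prescribe a specific search method, so this Python sketch is one plausible choice; S, k and dk are assumed to come from the current iteration:

```python
import numpy as np

def golden_section_step(S, k, dk, mu_max=1.0, tol=1e-4):
    """Golden-section search for the step-size mu minimizing S(k + mu*dk)
    along the Gauss-Newton direction dk (illustrative sketch)."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0           # golden ratio factor ~0.618
    a, b = 0.0, mu_max
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if S(k + c * dk) < S(k + d * dk):
            b, d = d, c                        # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d                        # minimum lies in [c, b]
            d = a + phi * (b - a)
    return 0.5 * (a + b)
```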

Formulation of the Solution Steps for the Gauss-Newton Method: Two Consecutive Chemical Reactions [Pg.53]

Equations 4.14 and 4.15 are used to evaluate the model response and the sensitivity coefficients that are required for setting up matrix A and vector b at each iteration of the Gauss-Newton method. [Pg.54]
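
For this example the model response and sensitivity coefficients have closed forms, so A and b can be assembled directly. A sketch in Python, assuming first-order consecutive reactions A → B → C with both concentrations measured and unit weighting (the specific forms of Equations 4.14 and 4.15 are not reproduced here):

```python
import numpy as np

def response_and_sensitivities(t, k, ca0=1.0):
    """Analytical response and sensitivity coefficients for A -> B -> C
    with rate constants k1, k2 (assumes CB(0) = 0 and k1 != k2)."""
    k1, k2 = k
    e1, e2 = np.exp(-k1 * t), np.exp(-k2 * t)
    ca = ca0 * e1
    cb = ca0 * k1 / (k2 - k1) * (e1 - e2)
    # Sensitivity coefficients dC/dk (rows: outputs CA, CB; cols: k1, k2)
    dca_dk1 = -t * ca0 * e1
    dca_dk2 = 0.0
    dcb_dk1 = ca0 * (k2 / (k2 - k1) ** 2 * (e1 - e2) - k1 * t / (k2 - k1) * e1)
    dcb_dk2 = ca0 * (-k1 / (k2 - k1) ** 2 * (e1 - e2) + k1 * t / (k2 - k1) * e2)
    y = np.array([ca, cb])
    G = np.array([[dca_dk1, dca_dk2], [dcb_dk1, dcb_dk2]])
    return y, G

def build_normal_equations(times, measurements, k):
    """Accumulate A = sum G^T G and b = sum G^T (yhat - y) over all
    sampling times (unit weighting assumed)."""
    A, b = np.zeros((2, 2)), np.zeros(2)
    for t, yhat in zip(times, measurements):   # yhat = measured [CA, CB]
        y, G = response_and_sensitivities(t, k)
        A += G.T @ G
        b += G.T @ (yhat - y)
    return A, b
```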

Finally, an important advantage of the Gauss-Newton method is that, at the end of the estimation, the covariance matrix of the best parameter estimates is readily available without any additional computations. Details will be given in Chapter 11. [Pg.55]

Starting with the initial guess k(0) = [1, 1, 1]T, the Gauss-Newton method easily converged to the parameter estimates within 4 iterations, as shown in Table 4.7. In the same table the standard error (%) in the estimation of each parameter is also shown. Bard (1970) also reported the same parameter estimates [0.08241, 1.1330, 2.3437] starting from the same initial guess. [Pg.65]

Table 4.8 Parameter Estimates at Each Iteration of the Gauss-Newton Method for Numerical Example 1 with Initial Guess [100000, 1, 1]
The Gauss-Newton method arises when the second order terms on the right hand side of Equation 5.20 are ignored. As seen, the Hessian matrix used in Equation 5.11 then contains only first derivatives of the model equations f(x,k). Leaving out the terms containing second derivatives may be justified by the fact that these terms have the residuals e_i as factors, and the residuals are expected to be small quantities. [Pg.75]

The Gauss-Newton method is directly related to Newton's method. The main difference between the two is that Newton's method requires the computation of second order derivatives, which arise from the direct differentiation of the objective function with respect to k. These second order terms are avoided when the Gauss-Newton method is used, since the model equations are first linearized and then substituted into the objective function. The latter constitutes a key advantage of the Gauss-Newton method compared to Newton's method, while quadratic convergence is still exhibited. [Pg.75]
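
In the notation of this chapter, with e_i the residual vector, G_i the sensitivity matrix and Q_i the weighting matrix at the i-th measurement, the decomposition being described can be sketched as follows. The first sum is the Gauss-Newton approximation of the Hessian; the second sum carries the residuals as factors and is the part that is dropped:

```latex
\nabla^{2} S(\mathbf{k}) \;=\; 2 \sum_{i=1}^{N} G_i^{T} Q_i G_i
  \;-\; 2 \sum_{i=1}^{N} \sum_{m} \left[ Q_i \mathbf{e}_i \right]_m
  \frac{\partial^{2} f_m(\mathbf{x}_i, \mathbf{k})}{\partial \mathbf{k}\, \partial \mathbf{k}^{T}}
```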

In this chapter we are concentrating on the Gauss-Newton method for the estimation of unknown parameters in models described by a set of ordinary differential equations (ODEs). [Pg.84]

If the dimensionality of the problem is not excessively high, simultaneous integration of the state and sensitivity equations is the easiest approach to implement the Gauss-Newton method, without the need to store x(t) as a function of time. The latter is required in the evaluation of the Jacobians in Equation 6.9 during the solution of this differential equation to obtain G(t). [Pg.88]
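
A sketch of this simultaneous integration for the two consecutive reactions model, using scipy's solve_ivp (the solver choice is an assumption, not the book's implementation). The state vector x and the sensitivity matrix G are packed into one augmented vector and integrated together:

```python
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, z, k):
    """State and sensitivity equations integrated together:
    dx/dt = f(x,k) and dG/dt = (df/dx) G + df/dk."""
    k1, k2 = k
    n, p = 2, 2
    x, G = z[:n], z[n:].reshape(n, p)
    f = np.array([-k1 * x[0], k1 * x[0] - k2 * x[1]])
    Jx = np.array([[-k1, 0.0], [k1, -k2]])        # df/dx
    Jk = np.array([[-x[0], 0.0], [x[0], -x[1]]])  # df/dk
    return np.concatenate([f, (Jx @ G + Jk).ravel()])

# Initial state known exactly, hence G(0) = 0
z0 = np.concatenate([[1.0, 0.0], np.zeros(4)])
sol = solve_ivp(augmented_rhs, (0.0, 5.0), z0, args=([0.5, 0.3],),
                t_eval=np.linspace(0.0, 5.0, 11), rtol=1e-8)
x_t, G_t = sol.y[:2], sol.y[2:]    # x(t) and G(t) at the sampling times
```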

THE GAUSS-NEWTON METHOD - NONLINEAR OUTPUT RELATIONSHIP... [Pg.92]

If we consider the limiting case where p = 0 and q ≠ 0, i.e., the case where there are no unknown parameters and only some of the initial states are to be estimated, the previously outlined procedure represents a quadratically convergent method for the solution of two-point boundary value problems. Obviously, in this case we need to compute only the sensitivity matrix P(t). It can be shown that under these conditions the Gauss-Newton method is a typical quadratically convergent "shooting method." As such, it can be used to solve optimal control problems using the Boundary Condition Iteration approach (Kalogerakis, 1983). [Pg.96]
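
A minimal illustration of the Gauss-Newton method acting as a shooting method, for the hypothetical linear problem x'' = -x with x(0) = 0 and x(T) = 1, where only the initial slope is unknown. Because this example is linear, the iteration converges in one step, and the sensitivity matrix P(t) reduces to a scalar:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    x, v, P, Pv = z                    # state (x, v) and sensitivity (P, P')
    return [v, -x, Pv, -P]             # P(t) = dx/ds, where s = x'(0)

T, target = np.pi / 4.0, 1.0
s = 0.5                                # initial guess for the unknown x'(0)
for _ in range(5):
    sol = solve_ivp(rhs, (0.0, T), [0.0, s, 0.0, 1.0], rtol=1e-10)
    xT, PT = sol.y[0, -1], sol.y[2, -1]
    s += (target - xT) / PT            # Gauss-Newton update of the state
print(s)                               # converges to 1/sin(T) = sqrt(2)
```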

The 21 equations (given as Equation 6.68) should be solved simultaneously with the three state equations (Equation 6.64). Integration of these 24 equations yields x(t) and G(t), which are used in setting up matrix A and vector b at each iteration of the Gauss-Newton method. Given the complexity of the ODEs when the dimensionality of the problem increases, it is quite helpful to have a general purpose computer program that sets up the sensitivity equations automatically. [Pg.110]
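
A sketch of how such a general purpose program might derive the sensitivity equations automatically, here using symbolic differentiation with sympy (an assumed tool; the book's own software is not reproduced):

```python
import sympy as sp

def make_sensitivity_equations(f_exprs, x_syms, k_syms):
    """Derive the right-hand sides of dG/dt = (df/dx) G + df/dk
    from a symbolic model definition."""
    n, p = len(x_syms), len(k_syms)
    f = sp.Matrix(f_exprs)
    Jx = f.jacobian(x_syms)       # df/dx, n x n
    Jk = f.jacobian(k_syms)       # df/dk, n x p
    G = sp.Matrix(n, p, lambda i, j: sp.Symbol(f'G_{i+1}{j+1}'))
    return G, Jx * G + Jk         # the G symbols and their time derivatives

# Example: the two consecutive reactions model
x1, x2, k1, k2 = sp.symbols('x1 x2 k1 k2')
G, dG = make_sensitivity_equations([-k1 * x1, k1 * x1 - k2 * x2],
                                   [x1, x2], [k1, k2])
sp.pprint(dG)    # e.g. dG_11/dt = -k1*G_11 - x1
```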

Furthermore, since analytical derivatives are subject to user input error, numerical evaluation of the derivatives can also be used in a typical computer implementation of the Gauss-Newton method. Details for a successful implementation of the method are given in Chapter 8. [Pg.110]
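
A forward-difference sketch of such a numerical evaluation; eps and the step scaling are typical choices rather than the book's prescription:

```python
import numpy as np

def fd_sensitivities(f, k, eps=1e-6):
    """Forward-difference approximation of the sensitivity matrix G = df/dk,
    a fallback when hand-coded analytical derivatives risk input errors."""
    f0 = np.asarray(f(k), dtype=float)
    G = np.empty((f0.size, len(k)))
    for j in range(len(k)):
        kp = np.array(k, dtype=float)
        h = eps * max(abs(kp[j]), 1.0)    # scale the step to the parameter
        kp[j] += h
        G[:, j] = (np.asarray(f(kp), dtype=float) - f0) / h
    return G
```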

Furthermore, they showed that this simplified quasilinearization method (QM) is very similar to the Gauss-Newton method. Next, the quasilinearization method as well as the simplified quasilinearization method are described, and the equivalence of QM to the Gauss-Newton method is demonstrated. [Pg.111]

If we compare Equations 6.79 and 6.11, we notice that the only difference between the quasilinearization method and the Gauss-Newton method is the nature of the equation that yields the parameter estimate vector k(j+1). If one substitutes Equation 6.81 into Equation 6.79, one obtains the following equation. [Pg.114]

By taking the last term on the right hand side of Equation 6.83 to the left hand side, one obtains Equation 6.11, which is used for the Gauss-Newton method. Hence, when the output vector is linearly related to the state vector (Equation 6.2), the simplified quasilinearization method is computationally identical to the Gauss-Newton method. [Pg.114]

The above equation represents a set of p nonlinear equations which can be solved to obtain k(j+1) by linearizing the output vector around the trajectory x(j)(t). Kalogerakis and Luus (1983b) showed that when linearization of the output vector is used, the quasilinearization computational algorithm and the Gauss-Newton method yield the same results. [Pg.114]

The above parameter estimation problem can now be solved with any estimation method for algebraic models. Again, our preference is to use the Gauss-Newton method as described in Chapter 4. [Pg.120]

As seen, there is significant variability in the estimates. This is the reason why we should avoid using this technique if possible (unless we wish to generate initial guesses for the Gauss-Newton method for ODE systems). As mentioned earlier, the numerical computation of derivatives from noisy data is a risky business. [Pg.132]
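
A small numerical illustration of that risk (the signal, noise level and grid are made up for the demonstration): differentiating data carrying only 1% noise yields derivatives whose error is on the order of the noise divided by the grid spacing:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 51)                        # dt = 0.1
y_true = np.exp(-0.5 * t)
y_noisy = y_true + rng.normal(0.0, 0.01, t.size)     # ~1% absolute noise

dy_true = np.gradient(y_true, t)                     # derivative of clean data
dy_noisy = np.gradient(y_noisy, t)                   # derivative of noisy data

# The derivative error is of order noise/dt, i.e. roughly an order of
# magnitude larger than the noise in the data themselves
print(np.max(np.abs(dy_noisy - dy_true)))
```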

If we have very little information about the parameters, direct search methods, like the LJ optimization technique presented in Chapter 5, present an excellent way to generate very good initial estimates for the Gauss-Newton method. Actually, for algebraic equation models, direct search methods can be used to determine the optimum parameter estimates quite efficiently. However, if estimates of the uncertainty in the parameters are required, use of the Gauss-Newton method is strongly recommended, even if it is only for a couple of iterations. [Pg.139]
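
A sketch of the LJ (Luus-Jaakola) random search: candidates are sampled uniformly in a region around the current best point, and the region is contracted after each pass. The sample counts and contraction factor are illustrative choices:

```python
import numpy as np

def luus_jaakola(S, k0, r0, n_outer=50, n_sample=100, shrink=0.95, seed=0):
    """Direct search useful for generating initial guesses for the
    Gauss-Newton method. S is the objective function, r0 the initial
    search region half-widths."""
    rng = np.random.default_rng(seed)
    k_best = np.asarray(k0, dtype=float)
    S_best = S(k_best)
    r = np.asarray(r0, dtype=float)
    for _ in range(n_outer):
        for _ in range(n_sample):
            k_trial = k_best + r * rng.uniform(-1.0, 1.0, k_best.size)
            S_trial = S(k_trial)
            if S_trial < S_best:
                k_best, S_best = k_trial, S_trial
        r *= shrink                      # contract the search region
    return k_best
```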

Quite often the direction determined by the Gauss-Newton method, or any other gradient method for that matter, is towards the optimum; however, the length of the suggested increment of the parameters could be too large. As a result, the value of the objective function at the new parameter estimates could actually be higher than its value at the previous iteration. [Pg.139]

Once an acceptable value for the step-size has been determined, we can continue: with only one additional evaluation of the objective function, we can obtain the optimal step-size that should be used along the direction suggested by the Gauss-Newton method. [Pg.140]
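
The book's exact expression is not reproduced in this excerpt; a standard construction consistent with the description is a three-point quadratic fit. With S already known at μ = 0 (the current point) and at an acceptable step μ_a, one additional evaluation at μ_a/2 determines the parabola and its minimizer:

```python
import numpy as np

def optimal_step(S, k, dk, mu_a):
    """Fit a quadratic through S at mu = 0, mu_a/2 and mu_a along the
    Gauss-Newton direction dk, and return its minimizer (sketch)."""
    s0 = S(k)                         # typically already available
    s1 = S(k + 0.5 * mu_a * dk)       # the single additional evaluation
    s2 = S(k + mu_a * dk)             # known from the acceptability test
    denom = s0 - 2.0 * s1 + s2
    if abs(denom) < 1e-14:            # degenerate parabola: keep mu_a
        return mu_a
    mu_opt = 0.25 * mu_a * (3.0 * s0 - 4.0 * s1 + s2) / denom
    return float(np.clip(mu_opt, 0.0, mu_a))
```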

The above expression for the optimal step-size is used in the calculation of the next estimate of the parameters, to be used in the next iteration of the Gauss-Newton method. [Pg.141]

In order to improve the convergence characteristics and robustness of the Gauss-Newton method, Levenberg in 1944 and later Marquardt (1963) proposed to modify the normal equations by adding a small positive number, γ², to the diagonal elements of A. Namely, at each iteration the increment in the parameter vector is obtained by solving the following equation. [Pg.144]
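
A sketch of the modified normal equations; A and b are the quantities assembled earlier, and the γ² update rule in the comment is a common heuristic rather than the book's specific recipe:

```python
import numpy as np

def marquardt_step(A, b, gamma2):
    """Solve (A + gamma^2 I) dk = b, the Levenberg-Marquardt
    modification of the Gauss-Newton normal equations."""
    return np.linalg.solve(A + gamma2 * np.eye(A.shape[0]), b)

# Typical usage: start with a small gamma^2; if the objective worsens,
# increase gamma^2 (e.g. tenfold) and re-solve; otherwise decrease it.
```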

