Newton-Gauss algorithm parameters

We have already given the equations for the computation of the standard errors in the parameters optimised by linear regression, equation (4.32). The equations are very similar for parameters fitted by the Newton-Gauss algorithm; in fact, at the end of the iterative fitting, the relevant information has already been calculated. [Pg.161]
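As a hedged illustration of this point (generic notation and a made-up helper, not the book's code): at the end of the fit the Jacobian J of the residuals is already available, and the standard errors follow from the inverse of J^T J scaled by the residual variance, in direct analogy to equation (4.32).

```python
import numpy as np

# Minimal sketch, not the book's code: standard errors of the
# parameters fitted by Newton-Gauss, computed from the Jacobian J
# that is already available at the end of the iterative fitting.
def standard_errors(J, ssq, n_data, n_par):
    dof = n_data - n_par                       # degrees of freedom
    cov = np.linalg.inv(J.T @ J) * ssq / dof   # parameter covariance matrix
    return np.sqrt(np.diag(cov))               # standard errors of the parameters
```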

The central part of the Newton-Gauss algorithm is the computation of the residuals, which are now collected in the matrix R. R is a function of the measurement Y, the model, and the parameters. In our example, the parameters comprise the two rate constants k1 and k2, which we collect in the vector p, and all molar absorptivities, i.e. all elements of the matrix A. For a given model we can write... [Pg.163]
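The equation itself is cut off in this excerpt; a hedged reconstruction in the notation just introduced, with the residuals bilinear in C(p) and A, would read

$$R(p, A) = Y - C(p)\,A, \qquad ssq = \sum_i \sum_j r_{i,j}^2$$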

Instead of developing a program that performs the task as just explained, we move to the 2-parameter case. Subsequently, we generalise to the np-parameter case and then we analyse the relationship with the Newton-Gauss algorithm for least-squares fitting. [Pg.199]

An additional observation for photon counting data: there are no fractions of photons, so the counts can only be integers. Thus the measurements in column B are rounded down to the nearest integer. It seems reasonable to do the same with the calculated values in column C. However, a test in Excel reveals that such an attempt does not work. The reason is that the Solver's Newton-Gauss algorithm requires the computation of the derivatives of the objective (χ² or ssq) with respect to the parameters. Rounding would destroy the continuity of the function and effectively wipe out the derivatives. [Pg.212]
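A small numerical demonstration of this effect (a toy single-parameter decay, not the book's spreadsheet model): rounding the calculated values turns the objective into a staircase whose finite-difference derivative is zero almost everywhere.

```python
import numpy as np

# Toy model, not the book's Excel example: exponential decay counts,
# compared with and without rounding of the calculated values.
t = np.arange(10.0)

def model(k):
    return 100.0 * np.exp(-k * t)

def ssq(k, rounded):
    y_calc = np.floor(model(k)) if rounded else model(k)
    y_meas = np.floor(model(0.30))            # "measured" integer counts
    return np.sum((y_meas - y_calc) ** 2)

k, h = 0.25, 1e-7                             # finite-difference step
d_smooth = (ssq(k + h, False) - ssq(k - h, False)) / (2 * h)
d_rounded = (ssq(k + h, True) - ssq(k - h, True)) / (2 * h)
print(d_smooth)    # useful, non-zero derivative
print(d_rounded)   # 0.0: the rounded objective is flat almost everywhere
```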

The matrix C is defined by the non-linear parameters (rate constants). It is possible to minimise R, i.e. the corresponding ssq, as a function of these parameters in a normal Newton-Gauss algorithm. The chain of equations goes as follows... [Pg.258]
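A hedged sketch of that chain for a consecutive reaction A -> B -> C (the reaction scheme, function name, and the k1 != k2 assumption are illustrative, not taken from the source): the rate constants define C, the linear parameters A are eliminated by linear least squares, and the residuals follow.

```python
import numpy as np

# Illustrative sketch of the chain: k -> C(k) -> A = C+ Y -> R = Y - C A.
# Assumes first-order A -> B -> C kinetics with [A]0 = 1 and k1 != k2.
def residuals(k, t, Y):
    k1, k2 = k
    cA = np.exp(-k1 * t)
    cB = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    cC = 1.0 - cA - cB
    C = np.column_stack([cA, cB, cC])    # concentration profiles from k
    A = np.linalg.pinv(C) @ Y            # best linear absorptivities for this C
    return Y - C @ A                     # residual matrix R

# ssq = np.sum(residuals(k, t, Y) ** 2) is what the Newton-Gauss
# algorithm minimises as a function of the rate constants alone.
```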

For a three-component system, the matrix T has nine elements, and thus it appears that C, and ultimately the sum of squares, are functions of nine parameters. As we will see in a moment, there are actually fewer, only six, parameters to be fitted. The idea of RFA is to use the Newton-Gauss algorithm to fit this rather small number of parameters in T. [Pg.291]
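The reduction from nine to six can be motivated by the intensity ambiguity of the bilinear model (a hedged aside, not necessarily the book's argument): for any invertible diagonal matrix D,

$$Y = C\,A = (C\,D)\,(D^{-1}A),$$

so three scaling degrees of freedom in T are not identifiable and can be fixed by normalisation, leaving six parameters to fit.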

The Newton-Gauss algorithm requires initial estimates for the parameters in T. These can be computed from the same estimated concentration profiles Cguess as before (Figure 5-44). The initial guess for T is determined by... [Pg.291]

The Newton-Gauss algorithm (nglm3.m) is called from Main_RFA.m and requires a Matlab function that computes the residuals as a function of the parameters T, as defined in equation (5.54). This calculation is performed in the Matlab function Rcalc_RFA.m. [Pg.292]

In this section we deal with estimating the parameters p in the dynamical model of the form (5.37). As we noted, the methods of Chapter 3 apply directly to this problem only if the solution of the differential equation is available in analytical form. Otherwise one can follow the same algorithms, but solve the differential equations numerically whenever the computed responses are needed. The partial derivatives required by the Gauss-Newton type algorithms can be obtained by solving the sensitivity equations. While this indirect method is... [Pg.286]
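A hedged sketch of the sensitivity-equation idea on a toy model (the system dy/dt = -p y is illustrative, not the model (5.37)): the sensitivity s = dy/dp obeys ds/dt = (df/dy) s + df/dp and is integrated alongside the state, supplying the partial derivatives that the Gauss-Newton step needs.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sensitivity equations: for dy/dt = -p*y, the sensitivity
# s = dy/dp satisfies ds/dt = -p*s - y and starts at s(0) = 0.
def augmented(t, z, p):
    y, s = z
    return [-p * y, -p * s - y]

p, y0 = 0.5, 1.0
sol = solve_ivp(augmented, (0.0, 5.0), [y0, 0.0], args=(p,),
                t_eval=np.linspace(0.0, 5.0, 11))
y, dy_dp = sol.y   # state and its parameter sensitivity at the mesh points
# dy_dp fills one column of the Jacobian used by the Gauss-Newton update.
```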

In general, the error e(t_{k-q-i+j}, θ) is a non-linear function of the parameter vector θ. Therefore, the above problem is a well-known nonlinear least squares problem (NLSP) that may be solved by various optimisation algorithms such as the Levenberg-Marquardt algorithm [2], the quasi-Newton method or the Gauss-Newton (GN) algorithm [3]. [Pg.124]

When the equations are nonlinear in the parameters, the parameter estimates are obtained by minimizing the objective function by methods like that of Newton-Raphson or that of Newton-Gauss, or an adaptation of the latter such as the Marquardt algorithm [1963]. In the latter case the parameters are iteratively improved by the following formula ... [Pg.121]
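The formula is truncated in this excerpt; in generic notation (b the parameter vector, J the Jacobian, λ the damping factor, not necessarily the book's symbols), the standard Marquardt update is

$$b^{(i+1)} = b^{(i)} + \left(J^\top J + \lambda I\right)^{-1} J^\top \left(y - f\big(b^{(i)}\big)\right),$$

with λ -> 0 recovering the plain Gauss-Newton step.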

The above method is the well-known Gauss-Newton method for differential equation systems, and it exhibits quadratic convergence to the optimum. Computational modifications to the above algorithm for the incorporation of prior knowledge about the parameters (Bayesian estimation) are discussed in detail in Chapter 8. [Pg.88]

This indicates that after an initial overhead of 0.319 model runs to set up the algorithm, an additional 0.07 of a model run was required for the computation of the sensitivity coefficients for each additional parameter. This is about 14 times less than the one additional model run required by the standard implementation of the Gauss-Newton method. Obviously these numbers serve only as a guideline; however, the computational savings realized through the efficient integration of the sensitivity ODEs are expected to be very significant whenever an implicit or semi-implicit reservoir simulator is involved. [Pg.375]

After 10 iterations of the Gauss-Newton method the LS objective function was reduced to 0.0147. The estimation problem, as defined, is severely ill-conditioned. Although the algorithm did not converge, the estimation part of the program provided estimates of the standard deviations in the parameter values obtained thus far. [Pg.378]

The best-fitting set of parameters can be found by minimization of the objective function (Section 13.2.8.2). This can be performed only by iterative procedures. For this purpose several minimization algorithms can be applied, for example, Simplex, Gauss-Newton, and the Marquardt methods. It is not the aim of this chapter to deal with non-linear curve-fitting extensively. For further reference, excellent papers and books are available [18]. [Pg.346]

The relation (3.249) used for the iterative calculation that allows the identification of the unknown parameters is given below. It is a particularization of the general Gauss-Newton algorithm (3.238) ... [Pg.164]

Hence, the Gauss-Newton algorithm solves for h through a series of linear regressions. For the base algorithm the step length α is set equal to 1. This iterative process is repeated until there is little change in the parameter values between iterations; when this point is achieved, convergence is said to have occurred. Ideally, at each iteration, f(x, θ(i+1)) should be closer to Y than f(x, θ(i)). [Pg.101]
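A minimal sketch of this iteration (function and argument names are illustrative, not the source's; the convergence test on the step size is one common choice):

```python
import numpy as np

# Basic Gauss-Newton: each step is a linear regression of the current
# residuals on the Jacobian; the step length alpha is fixed at 1.
def gauss_newton(f, jac, theta, x, y, tol=1e-10, max_iter=50):
    theta = np.asarray(theta, dtype=float)
    for _ in range(max_iter):
        r = y - f(x, theta)                  # current residuals
        J = jac(x, theta)                    # n x np Jacobian matrix
        h, *_ = np.linalg.lstsq(J, r, rcond=None)  # linear regression for h
        theta = theta + 1.0 * h              # alpha = 1 (base algorithm)
        if np.linalg.norm(h) <= tol * (1.0 + np.linalg.norm(theta)):
            break                            # little change: converged
    return theta
```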

Using starting values of 35 for C(0) and 0.01 for θ, only six iterations were required for convergence using the Gauss-Newton algorithm within Proc NLIN in SAS. The model parameter estimates were C(0) = 38.11 ± 0.72 and θ = 0.2066 ± 0.009449 per hour. The matrix J^T J was... [Pg.112]
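The numeric matrix is cut off in this excerpt. As a hedged reconstruction of its symbolic form only (not the reported numbers): for the monoexponential model C(t) = C(0) e^(-θt) implied by these estimates, the Jacobian columns are ∂C/∂C(0) = e^(-θt) and ∂C/∂θ = -C(0) t e^(-θt), so

$$J^\top J = \begin{bmatrix} \sum_i e^{-2\theta t_i} & -\sum_i C(0)\,t_i\,e^{-2\theta t_i} \\[4pt] -\sum_i C(0)\,t_i\,e^{-2\theta t_i} & \sum_i C(0)^2\,t_i^2\,e^{-2\theta t_i} \end{bmatrix}$$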

The values of the parameters in (12) were obtained by nonlinear regression using the Gauss-Newton algorithm. Those values are ... [Pg.119]

