Big Chemical Encyclopedia


Regression, parameter estimation iterations

Second card FORMAT(8F10.2), control variables for the regression. This program uses a Newton-Raphson-type iteration, which is susceptible to convergence problems when the initial parameter estimates are poor. Several features are therefore implemented to help control oscillations, prevent divergence, and determine when convergence has been achieved; these features are controlled by the parameters on this card. The default values are the result of considerable experience and are adequate for the majority of situations, but convergence may be enhanced in some cases with user-supplied values. [Pg.222]
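The card-driven Fortran program itself is not shown here; the oscillation-control idea it describes can be sketched in Python (a hypothetical one-dimensional example with an invented function and tolerances, not the actual regression code):

```python
def damped_newton(f, df, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0, halving the step whenever it fails to reduce |f|."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:              # convergence criterion reached
            return x
        step = fx / df(x)              # full Newton-Raphson step
        lam = 1.0
        # Damping: halve the step until the residual actually decreases,
        # which controls oscillations and helps prevent divergence.
        while abs(f(x - lam * step)) >= abs(fx) and lam > 1e-8:
            lam *= 0.5
        x -= lam * step
    return x

root = damped_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

With a poor starting value the damping loop keeps the iterates bounded; with a good one it reduces to plain Newton-Raphson.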

When estimates of k°, k′, k″, K₁, and K₂ have been obtained, a calculated pH-rate curve is developed with Eq. (6-80). If the experimental points closely follow the calculated curve, it may be concluded that the data are consistent with the assumed rate equation. The constants may be treated as adjustable parameters that are modified to achieve the best possible fit, and one approach is to use these initial parameter estimates in an iterative nonlinear regression program. The dissociation constants K₁ and K₂ derived from kinetic data should be in reasonable agreement with the dissociation constants obtained (under the same experimental conditions) by other means. [Pg.290]

The process must be iterated until convergence, and the final estimates are denoted β_LB, b_i,LB, and ω_LB. The individual regression parameters can therefore be estimated by replacing the final fixed-effects and random-effects estimates in the function g so that ... [Pg.99]

The structure of such models can be exploited to reduce the dimensionality of the nonlinear parameter estimation problem, since the conditionally linear parameters, k1, can be obtained by linear least squares in one step, without the need for initial estimates. Further details are provided in Chapter 8, where we exploit the structure of the model either to reduce the dimensionality of the nonlinear regression problem or to arrive at consistent initial guesses for any iterative parameter search algorithm. [Pg.10]
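As a hedged illustration of conditionally linear parameters (the two-parameter exponential model and data below are invented, not taken from Chapter 8): for y = k1·exp(-k2·t), any trial value of the nonlinear parameter k2 yields the linear parameter k1 in one closed-form least-squares step, so only k2 needs an iterative (here, simple grid) search:

```python
import math

def linear_step(t, y, k2):
    """For fixed k2, k1 = sum(phi*y)/sum(phi*phi) with phi = exp(-k2*t)."""
    phi = [math.exp(-k2 * ti) for ti in t]
    return sum(p * yi for p, yi in zip(phi, y)) / sum(p * p for p in phi)

def sse(t, y, k2):
    """Square sum after eliminating the conditionally linear parameter."""
    k1 = linear_step(t, y, k2)
    return sum((yi - k1 * math.exp(-k2 * ti)) ** 2 for ti, yi in zip(t, y))

t = [0.0, 1.0, 2.0, 3.0]
y = [2.0 * math.exp(-0.7 * ti) for ti in t]    # noise-free synthetic data
# Crude grid search over k2 alone: the nonlinear dimension is now 1, not 2
k2_best = min((i * 0.01 for i in range(1, 200)), key=lambda k2: sse(t, y, k2))
k1_best = linear_step(t, y, k2_best)
```

No initial estimate of k1 is ever required, which is exactly the advantage the text describes.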

Models linear in the parameters represent the simplest case of parameter estimation from a computational point of view because no iterative computations are needed. Unfortunately, the majority of process models encountered in chemical engineering practice are nonlinear. Linear regression has received considerable attention due to its significance as a tool in a variety of disciplines, and hence there is a plethora of books on the subject (e.g., Draper and Smith, 1998; Freund and Minton, 1979; Hocking, 1996; Montgomery and Peck, 1992; Seber, 1977), the majority of them written by statisticians. [Pg.23]

In a strict sense, parameter estimation is the procedure of computing the estimates by locating the extremum of an objective function. A further advantage of the least squares method is that this step is well supported by efficient numerical techniques. Its use is particularly simple if the response function (3.1) is linear in the parameters, since the estimates are then found by linear regression without the iteration inherent in nonlinear optimization problems. [Pg.143]
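For instance, for a model linear in the parameters such as the straight line y = a + b·x, the least-squares estimates follow from the normal equations in a single step (a minimal Python sketch with made-up data):

```python
def fit_line(x, y):
    """Closed-form least-squares estimates for y = a + b*x (no iteration)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope from normal equations
    a = (sy - b * sx) / n                           # intercept
    return a, b

a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])  # exactly y = 1 + 2x
```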

The process of research in chemical systems is one of developing and testing different models for process behavior. Whether empirical or mechanistic models are involved, the discipline of statistics provides data-based tools for discrimination between competing possible models, parameter estimation, and model verification for use in this enterprise. In the case where empirical models are used, techniques associated with linear regression (linear least squares) are used, whereas in mechanistic modeling contexts nonlinear regression (nonlinear least squares) techniques most often are needed. In either case, the statistical tools are applied most fruitfully in iterative strategies. [Pg.207]

With this as an estimate of the assay measurement variance, the SIMEX algorithm was applied. Figure 2.10 plots the mean regression parameter against varying values of λ using 1000 iterations for each value of λ. Extrapolation of λ to -1 for both the slope and intercept leads to a SIMEX equation of... [Pg.83]

The Matlab Simulink model was designed to represent the model structure and mass balance equations for SSF and is shown in Fig. 6. Shaded boxes represent the reaction rates, which have been lumped into subsystems. To solve the system of ordinary differential equations (ODEs) and to estimate unknown parameters in the reaction rate equations, the parameter estimation interface was used. This program allows the user to decide which parameters to estimate and which type of ODE solver and optimization technique to use. The user imports observed data as it relates to the input, output, or state data of the Simulink model. With the imported data as reference, the user can select options for the ODE solver (fixed step/variable step, stiff/non-stiff, tolerance, step size) as well as options for the optimization technique (nonlinear least squares/simplex, maximum number of iterations, and tolerance). With the selected solver and optimization method, the unknown independent, dependent, and/or initial state parameters in the model are determined within set ranges. For this study, nonlinear least squares regression was used with Matlab ode45, a Runge-Kutta [3, 4] formula for non-stiff systems. The steps of nonlinear least squares regression are as follows ... [Pg.385]
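The Simulink program itself is not reproduced here; a rough Python analogue of the same workflow (fixed-step Runge-Kutta integration of an invented first-order model dC/dt = -k·C, with a simple grid search standing in for the least-squares optimizer) might look like:

```python
import math

def rk4(k, c0, t_end, n=100):
    """Fixed-step 4th-order Runge-Kutta for dC/dt = -k*C from t=0 to t_end."""
    f = lambda c: -k * c
    h, c = t_end / n, c0
    for _ in range(n):
        k1 = f(c)
        k2 = f(c + 0.5 * h * k1)
        k3 = f(c + 0.5 * h * k2)
        k4 = f(c + h * k3)
        c += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return c

t_obs = [0.5, 1.0, 2.0]
c_obs = [math.exp(-0.8 * t) for t in t_obs]      # synthetic observations, k = 0.8

def sse(k):
    """Squared residuals between the integrated model and the observations."""
    return sum((rk4(k, 1.0, t) - c) ** 2 for t, c in zip(t_obs, c_obs))

# A grid search over the rate constant stands in for the optimizer's iterations
k_hat = min((i * 0.001 for i in range(1, 2000)), key=sse)
```

The essential loop is the same as in the text: integrate the ODEs for trial parameter values, compare with the imported data, and adjust the parameters to minimize the square sum.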

The nonlinear regression analyses with default options (automatic initial parameter estimates with the constraints n > 0 and h > 0, no weighting, 200 iterations with a step size of 1 and the specified tolerance) are initiated following selection of Finish. [Pg.421]

The parameters defining C are non-linear parameters and cannot be fitted explicitly; they need to be computed iteratively. Estimates are provided, a matrix C is constructed, and this is compared to the measurement according to the steps that follow below. Once this is complete, it is possible to calculate shifts in these parameter estimates in a way that will improve the fit (i.e. reduce the square sum) when a new C is computed. This iterative improvement of the non-linear parameters is the basis of the non-linear regression algorithm at the heart of most fitting programs. [Pg.50]

As mentioned earlier, non-linear regression is an iterative process and, provided the initial parameter estimates are not too poor and the model is not under-determined by the data, will converge to a unique minimum yielding the best-fit parameters. With more complex models it is often necessary to fix certain parameters (rate constants, equilibrium constants or complete spectra), particularly if they are known from independent investigations, and most fitting applications allow this type of constraint to be applied. [Pg.50]
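The shift computation described above can be sketched for a single parameter (a Gauss-Newton step on an invented model y = exp(-k·t); the data and iteration count are illustrative only):

```python
import math

def gauss_newton_k(t, y, k, n_iter=20):
    """Fit y = exp(-k*t) by regressing residuals on the model derivative."""
    for _ in range(n_iter):
        r = [yi - math.exp(-k * ti) for ti, yi in zip(t, y)]   # residuals
        j = [-ti * math.exp(-k * ti) for ti in t]              # d(model)/dk
        # The shift is the slope of the regression of r on j; it reduces the
        # square sum whenever the current estimate is close enough.
        k += sum(ji * ri for ji, ri in zip(j, r)) / sum(ji * ji for ji in j)
    return k

t = [0.5, 1.0, 2.0, 4.0]
y = [math.exp(-0.3 * ti) for ti in t]   # noise-free synthetic data, k = 0.3
k_hat = gauss_newton_k(t, y, k=1.0)
```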

The mathematical solution of the pharmacokinetic model depicted by Figure 5 is described by Equation 5, where K12 and K23 are first-order rate constants analogous to Ka and Ke, respectively. This solution was applied to the data and "best fit" parameters estimated by iterative computational methods. The "fit" of the data to the kinetic model was analyzed by least squares nonlinear regression analysis. [Pg.13]

The PLS approach was developed around 1975 by Herman Wold and co-workers for the modeling of complicated data sets in terms of chains of matrices (blocks), so-called path models. Herman Wold developed a simple but efficient way to estimate the parameters in these models called NIPALS (nonlinear iterative partial least squares). This led, in turn, to the acronym PLS for these models, where PLS stood for partial least squares. This term describes the central part of the estimation, namely that each model parameter is iteratively estimated as the slope of a simple bivariate regression (least squares) between a matrix column or row as the y variable and another parameter vector as the x variable. So, for instance, in each iteration the PLS weights w are re-estimated as X′u/(u′u), where u′ denotes the transpose of the current u vector. The "partial" in PLS indicates that this is a partial regression, since the second parameter vector (u in the... [Pg.2007]
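A minimal sketch of the NIPALS loop for one PLS component, in pure Python (the small X and Y below are invented; a real implementation would also deflate X and Y before extracting further components):

```python
def dot_cols(M, v):
    """M'v for a matrix stored as a list of rows."""
    return [sum(M[i][j] * v[i] for i in range(len(M))) for j in range(len(M[0]))]

def mat_vec(M, v):
    """Mv for a matrix stored as a list of rows."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def nipals_component(X, Y, n_iter=100, tol=1e-12):
    u = [row[0] for row in Y]                  # start u as a column of Y
    t_old = [0.0] * len(X)
    for _ in range(n_iter):
        w = dot_cols(X, u)                     # w = X'u / (u'u): slope of a
        s = sum(ui * ui for ui in u)           # partial bivariate regression
        w = [wi / s for wi in w]
        norm = sum(wi * wi for wi in w) ** 0.5
        w = [wi / norm for wi in w]            # scale w to unit length
        t = mat_vec(X, w)                      # X-scores t = Xw
        s = sum(ti * ti for ti in t)
        q = [qi / s for qi in dot_cols(Y, t)]  # q = Y't / (t't)
        s = sum(qi * qi for qi in q)
        u = [ui / s for ui in mat_vec(Y, q)]   # u = Yq / (q'q)
        if sum((a - b) ** 2 for a, b in zip(t, t_old)) < tol:
            break                              # scores stopped changing
        t_old = t
    return w, t, q

X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
Y = [[1.0], [2.0], [3.0], [4.0]]
w, t, q = nipals_component(X, Y)
```

Every update in the loop is one of the simple bivariate ("partial") regressions described in the text.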

There have also been attempts to describe the temporal aspects of perception from first principles, the model including the effects of adaptation and integration of perceived stimuli. The parameters in the specific analytical model derived were estimated using non-linear regression [14]. Another recent development is to describe each individual TI-curve, f_i(t), i = 1, 2, ..., n, as derived from a prototype curve, S(t). Each individual TI-curve can be obtained from the prototype curve by shrinking or stretching the (horizontal) time axis and the (vertical) intensity axis, i.e. f_i(t) = a_i S(b_i t). The least squares fit is found in an iterative procedure, alternately adapting the parameter sets (a_i, b_i) for i = 1, 2, ..., n and the shape of the prototype curve [15]. [Pg.444]
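The shrink/stretch fit f_i(t) = a_i S(b_i t) can be sketched for a single curve against a known prototype (the prototype shape, data, and grid search below are invented; in the cited method the prototype itself is also re-estimated alternately):

```python
import math

def S(t):
    """Illustrative prototype TI-curve (rise then decay)."""
    return t * math.exp(-t)

def fit_curve(ts, fs):
    """Fit f(t) = a * S(b*t): a is conditionally linear, b found by grid search."""
    def a_for(b):
        s = [S(b * t) for t in ts]
        return sum(si * fi for si, fi in zip(s, fs)) / sum(si * si for si in s)
    def sse(b):
        a = a_for(b)
        return sum((fi - a * S(b * t)) ** 2 for t, fi in zip(ts, fs))
    b = min((i * 0.01 for i in range(1, 300)), key=sse)
    return a_for(b), b

ts = [0.5, 1.0, 2.0, 4.0, 8.0]
fs = [3.0 * S(1.5 * t) for t in ts]      # synthetic curve with a = 3.0, b = 1.5
a_hat, b_hat = fit_curve(ts, fs)
```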

Five critical points for the methane-n-hexane system in the temperature range of 198 to 273 K measured by Lin et al. (1977) are available. By employing the Trebble-Bishnoi EoS in our critical point regression least squares estimation method, the parameter set (ka, kb) was found to be the optimal one. Convergence from an initial guess of (ka, kb) = (0.001, -0.001) was achieved in six iterations. The estimated values are given in Table 14.8. [Pg.264]

In this work, we first regressed the isothermal data. The estimated parameters from the treatment of the isothermal data are given in Table 16.6. An initial guess of (k1=1.0, k2=1.0, k3=1.0) was used for all isotherms, and convergence of the Gauss-Newton method without the need for Marquardt's modification was achieved in 13, 16 and 15 iterations for the data at 375, 400, and 425°C, respectively. [Pg.289]
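Marquardt's modification adds a damping term to the Gauss-Newton normal equations, increased when a step fails to reduce the square sum and relaxed when it succeeds. A one-parameter Python sketch (invented model and data, not the regression of Table 16.6):

```python
import math

def sse(t, y, k):
    """Square sum for the illustrative model y = exp(-k*t)."""
    return sum((yi - math.exp(-k * ti)) ** 2 for ti, yi in zip(t, y))

def marquardt_k(t, y, k, n_iter=50, lam=1e-3):
    for _ in range(n_iter):
        r = [yi - math.exp(-k * ti) for ti, yi in zip(t, y)]
        j = [-ti * math.exp(-k * ti) for ti in t]
        jtj = sum(ji * ji for ji in j)
        jtr = sum(ji * ri for ji, ri in zip(j, r))
        shift = jtr / (jtj + lam)            # damped normal equation
        if sse(t, y, k + shift) < sse(t, y, k):
            k, lam = k + shift, lam * 0.5    # accept step, relax damping
        else:
            lam *= 10.0                      # reject step, increase damping
    return k

t = [0.5, 1.0, 2.0, 4.0]
y = [math.exp(-0.3 * ti) for ti in t]        # synthetic data, k = 0.3
k_hat = marquardt_k(t, y, k=2.0)             # deliberately poor initial guess
```

With a good initial guess the damping stays negligible and the method behaves like plain Gauss-Newton, which is consistent with the text's observation that Marquardt's modification was not needed here.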

Table 2.4 shows the SAS NLIN specifications and the computer output. You can choose one of four iterative methods: modified Gauss-Newton, Marquardt, gradient (steepest-descent), and multivariate secant (false position) (SAS, 1985). The Gauss-Newton iterative methods regress the residuals onto the partial derivatives of the model with respect to the parameters until the iterations converge. You also have to specify the model and starting values of the parameters to be estimated. It is optional to provide the partial derivatives of the model with respect to each parameter, b. Figure 2.9 shows the reaction rate versus substrate concentration curves predicted from the Michaelis-Menten equation with parameter values obtained by four different... [Pg.26]
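The NLIN-style Gauss-Newton step (regressing residuals onto the partial derivatives) can be sketched for the Michaelis-Menten model v = Vmax·S/(Km + S); the data and starting values below are invented, not those of Table 2.4:

```python
def mm_fit(S, v, vmax, km, n_iter=50):
    """Gauss-Newton for v = vmax*S/(km+S): regress residuals on the partials."""
    for _ in range(n_iter):
        r, j1, j2 = [], [], []
        for s, vi in zip(S, v):
            pred = vmax * s / (km + s)
            r.append(vi - pred)
            j1.append(s / (km + s))                  # dv/dVmax
            j2.append(-vmax * s / (km + s) ** 2)     # dv/dKm
        # Solve the 2x2 normal equations (J'J) d = J'r by Cramer's rule
        a = sum(x * x for x in j1)
        b = sum(x * y for x, y in zip(j1, j2))
        c = sum(x * x for x in j2)
        g1 = sum(x * y for x, y in zip(j1, r))
        g2 = sum(x * y for x, y in zip(j2, r))
        det = a * c - b * b
        vmax += (c * g1 - b * g2) / det
        km += (a * g2 - b * g1) / det
    return vmax, km

S = [0.5, 1.0, 2.0, 5.0, 10.0]
v = [2.0 * s / (1.5 + s) for s in S]     # synthetic data: Vmax = 2.0, Km = 1.5
vmax_hat, km_hat = mm_fit(S, v, vmax=1.0, km=1.0)
```

In NLIN the partial derivatives are either user-supplied or computed for you; here they are coded explicitly so the residual-on-derivative regression is visible.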

