The nonlinear regression analyses with default options (automatic initial parameter estimation with the constraints n > 0 and h > 0, no weighted fit, 200 iterations with a step size of 1, and the default tolerance) are initiated by selecting Finish. [Pg.421]

In this work, we first regressed the isothermal data. The estimated parameters from the treatment of the isothermal data are given in Table 16.6. An initial guess of (k1=1.0, k2=1.0, k3=1.0) was used for all isotherms, and convergence of the Gauss-Newton method without the need for Marquardt's modification was achieved in 13, 16 and 15 iterations for the data at 375, 400, and 425°C, respectively. [Pg.289]
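The Gauss-Newton iteration referred to above can be sketched as follows. This is a minimal illustration on a hypothetical three-parameter model (not the kinetic model from the cited work), with simple step halving for robustness:

```python
import numpy as np

# Hypothetical model y = k1*x/(1 + k2*x) + k3*x, chosen only to illustrate
# a three-parameter Gauss-Newton fit starting from (1.0, 1.0, 1.0).
def model(k, x):
    k1, k2, k3 = k
    return k1 * x / (1.0 + k2 * x) + k3 * x

def jacobian(k, x):
    k1, k2, k3 = k
    d1 = x / (1.0 + k2 * x)
    d2 = -k1 * x**2 / (1.0 + k2 * x) ** 2
    d3 = x
    return np.column_stack([d1, d2, d3])

def gauss_newton(y, x, k0, tol=1e-10, max_iter=200):
    k = np.asarray(k0, dtype=float)
    for it in range(1, max_iter + 1):
        r = y - model(k, x)                          # residual vector
        J = jacobian(k, x)                           # sensitivity matrix
        dk, *_ = np.linalg.lstsq(J, r, rcond=None)   # Gauss-Newton step
        ssr, step = r @ r, 1.0
        while step > 1e-8:                           # halve step if SSR worsens
            k_new = k + step * dk
            r_new = y - model(k_new, x)
            if r_new @ r_new <= ssr:
                break
            step *= 0.5
        k = k_new
        if np.linalg.norm(step * dk) < tol * (1.0 + np.linalg.norm(k)):
            return k, it
    return k, max_iter

x = np.linspace(0.1, 5.0, 20)
k_true = np.array([2.0, 0.5, 0.3])
y = model(k_true, x)                                 # noise-free synthetic data
k_hat, n_iter = gauss_newton(y, x, k0=[1.0, 1.0, 1.0])
```

On noise-free data the iteration converges to the generating parameters in a handful of steps; with real data the converged values carry the residual scatter.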

Linear models with respect to the parameters represent the simplest case of parameter estimation from a computational point of view because there is no need for iterative computations. Unfortunately, the majority of process models encountered in chemical engineering practice are nonlinear. Linear regression has received considerable attention due to its significance as a tool in a variety of disciplines. Hence, there is a plethora of books on the subject (e.g., Draper and Smith, 1998; Freund and Minton, 1979; Hocking, 1996; Montgomery and Peck, 1992; Seber, 1977). The majority of these books have been written by statisticians. [Pg.23]

In a strict sense, parameter estimation is the procedure of computing the estimates by localizing the extremum point of an objective function. A further advantage of the least squares method is that this step is well supported by efficient numerical techniques. Its use is particularly simple if the response function (3.1) is linear in the parameters, since then the estimates are found by linear regression without the inherent iteration of nonlinear optimization problems. [Pg.143]
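The point about models linear in the parameters can be made concrete: the least-squares estimates come from a single linear solve, with no iteration or initial guess. A minimal sketch with a hypothetical intercept-plus-slope model:

```python
import numpy as np

# For a response linear in the parameters, y = X @ theta, the least-squares
# estimates are obtained in one linear solve -- no iteration, no initial guess.
x = np.linspace(0.0, 10.0, 50)
X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept + slope
theta_true = np.array([1.5, -0.7])
y = X @ theta_true                          # noise-free data for illustration
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

`lstsq` solves the normal equations in a numerically stable way; for noise-free data it recovers the generating parameters exactly.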

The structure of such models can be exploited in reducing the dimensionality of the nonlinear parameter estimation problem since the conditionally linear parameters, k1, can be obtained by linear least squares in one step and without the need for initial estimates. Further details are provided in Chapter 8 where we exploit the structure of the model either to reduce the dimensionality of the nonlinear regression problem or to arrive at consistent initial guesses for any iterative parameter search algorithm. [Pg.10]
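The idea of eliminating conditionally linear parameters can be sketched on a hypothetical separable model, y = a1·exp(-b·x) + a2, where a1 and a2 enter linearly and only b is genuinely nonlinear, so the search collapses to one dimension:

```python
import numpy as np

# Hypothetical separable model: y = a1*exp(-b*x) + a2.  For any trial value
# of the nonlinear parameter b, the linear parameters (a1, a2) follow from
# one least-squares solve, so the parameter search is only 1-D in b.
def linear_part(b, x, y):
    A = np.column_stack([np.exp(-b * x), np.ones_like(x)])
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ a
    return a, float(r @ r)              # linear estimates and residual SSR

x = np.linspace(0.0, 4.0, 40)
y = 3.0 * np.exp(-1.2 * x) + 0.5        # synthetic, noise-free data

# Coarse 1-D scan over the single nonlinear parameter b.
b_grid = np.linspace(0.1, 3.0, 291)
ssr = [linear_part(b, x, y)[1] for b in b_grid]
b_best = b_grid[int(np.argmin(ssr))]
a_best, _ = linear_part(b_best, x, y)
```

A grid scan is used here only for clarity; in practice the 1-D profile in b would be handed to any scalar minimizer, with the linear parameters always recovered in closed form.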

The mathematical solution of the pharmacokinetic model depicted by Figure 5 is described by Equation 5, where K12 and K23 are first order rate constants analogous to Ka and Ke, respectively. This solution was applied to the data and "best fit" parameters estimated by iterative computational methods. The "fit" of the data to the kinetic model was analyzed by least squares nonlinear regression analysis ( ). [Pg.13]
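A fit of this kind can be sketched with a Bateman-type first-order absorption/elimination expression as a generic stand-in for the cited Equation 5 (the names A, k12, k23 are illustrative, not taken from that work), using `scipy.optimize.curve_fit` as the iterative least-squares routine:

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic first-order absorption/elimination (Bateman-type) curve; a stand-in
# for the cited Equation 5, with illustrative parameter names.
def bateman(t, A, k12, k23):
    return A * (np.exp(-k23 * t) - np.exp(-k12 * t))

t = np.linspace(0.25, 12.0, 24)
p_true = (10.0, 1.5, 0.2)              # amplitude, absorption, elimination
c = bateman(t, *p_true)                # noise-free synthetic concentrations

# Iterative nonlinear least squares from a rough initial guess.
p_hat, pcov = curve_fit(bateman, t, c, p0=(5.0, 1.0, 0.1))
```

`pcov` holds the estimated parameter covariance, from which approximate standard errors of the "best fit" constants can be read off as the square roots of its diagonal.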

When estimates of k, k′, k″, K1, and K2 have been obtained, a calculated pH-rate curve is developed with Eq. (6-80). If the experimental points follow the calculated curve closely, it may be concluded that the data are consistent with the assumed rate equation. The constants may be considered adjustable parameters that are modified to achieve the best possible fit, and one approach is to use these initial parameter estimates in an iterative nonlinear regression program. The dissociation constants K1 and K2 derived from kinetic data should be in reasonable agreement with the dissociation constants obtained (under the same experimental conditions) by other means. [Pg.153]

The PLS approach was developed around 1975 by Herman Wold and co-workers for the modeling of complicated data sets in terms of chains of matrices (blocks), so-called "path models". Herman Wold developed a simple but efficient way to estimate the parameters in these models, called NIPALS (nonlinear iterative partial least squares). This led, in turn, to the acronym PLS for these models, where PLS stood for "partial least squares". This term describes the central part of the estimation, namely that each model parameter is iteratively estimated as the slope of a simple bivariate regression (least squares) between a matrix column or row as the y variable and another parameter vector as the x variable. So, for instance, in each iteration the PLS weights w are re-estimated as w′ = u′X/(u′u). Here u′ denotes the transpose of the current u vector. The "partial" in PLS indicates that this is a partial regression, since the second parameter vector (u in the [Pg.2007]
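The NIPALS update described above, in which each parameter vector is the slope of a simple bivariate regression against the current score vector, can be sketched for one PLS component on random data (the convergence threshold here is an arbitrary choice):

```python
import numpy as np

# One PLS component via NIPALS: each parameter vector is re-estimated as the
# slope of a bivariate least-squares regression on the current score vector.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))                # predictor block
Y = rng.normal(size=(30, 2))                # response block

u = Y[:, [0]]                               # initialize u as a column of Y
for _ in range(100):
    w = X.T @ u / (u.T @ u)                 # "partial" regression: w' = u'X/(u'u)
    w /= np.linalg.norm(w)                  # normalize the weight vector
    t = X @ w                               # X scores
    q = Y.T @ t / (t.T @ t)                 # regression of Y on t
    q /= np.linalg.norm(q)
    u_new = Y @ q                           # updated Y scores
    if np.linalg.norm(u_new - u) < 1e-10 * np.linalg.norm(u_new):
        u = u_new
        break                               # scores have stabilized
    u = u_new
```

Each line of the loop is one of the "partial" bivariate regressions the text describes; further components would be extracted after deflating X and Y by the converged scores.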
