
Matrix of parameter estimates

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]
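
As a minimal sketch of the last step mentioned above: at convergence of an iterative least-squares fit, the parameter variance-covariance matrix is commonly approximated as s²(JᵀJ)⁻¹, with J the Jacobian (sensitivity matrix) evaluated at the converged estimates and s² the residual variance. The model, design points, and residuals below are hypothetical, not from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 20)                      # design points (assumed)
theta = np.array([2.0, 0.7])                       # converged estimates (assumed)
r = rng.normal(scale=0.05, size=x.size)            # residuals at convergence (synthetic)

# Jacobian of the model y = a*exp(-b*x) with respect to (a, b):
J = np.column_stack([np.exp(-theta[1] * x),
                     -theta[0] * x * np.exp(-theta[1] * x)])

n, p = J.shape
s2 = (r @ r) / (n - p)                             # residual variance estimate
cov = s2 * np.linalg.inv(J.T @ J)                  # parameter variance-covariance matrix
print(np.sqrt(np.diag(cov)))                       # parameter standard errors
```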

Furthermore, the implementation of the Gauss-Newton method also incorporated the pseudo-inverse method to avoid instabilities caused by ill-conditioning of matrix A, as discussed in Chapter 8. In reservoir simulation this may occur, for example, when a parameter zone lies outside the drainage radius of a well and is therefore not observable from the well data. Most importantly, in order to realize substantial savings in computation time, the sequential computation of the sensitivity coefficients discussed in detail in Section 10.3.1 was implemented. Finally, the numerical integration procedure used was fully implicit, to ensure stability and convergence over a wide range of parameter estimates. [Pg.372]
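
A hedged illustration of the pseudo-inverse idea (a sketch, not the cited implementation): a Gauss-Newton step in which the normal-equations inverse is replaced by a truncated pseudo-inverse, so that parameter directions the data cannot resolve, such as a zone outside a well's drainage radius, receive no update. The names J, r, and the rcond threshold are illustrative.

```python
import numpy as np

def gauss_newton_step(J, r, rcond=1e-10):
    """One Gauss-Newton update solving (J^T J) delta = J^T r.

    np.linalg.pinv truncates singular values below rcond * s_max, so
    parameter directions the data cannot resolve receive no update
    instead of a wildly amplified one.
    """
    A = J.T @ J          # the (possibly ill-conditioned) normal matrix
    g = J.T @ r
    return np.linalg.pinv(A, rcond=rcond) @ g
```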

As already discussed in Chapter 11, matrix A calculated during each iteration of the Gauss-Newton method can be used to determine the covariance matrix of the estimated parameters, which in turn provides a measure of the accuracy of the parameter estimates (Tan and Kalogerakis, 1992). [Pg.376]

Variance-covariance matrix of the vector of parameter estimates b... [Pg.180]

We will follow the guidance of Albert Einstein to make everything as simple as possible, but not simpler. The reader will find practical formulae to compute results such as the correlation matrix, but will also be reminded that other approaches to parameter estimation exist, such as robust or nonparametric estimation of a correlation. [Pg.17]

This is the general matrix solution for the set of parameter estimates that gives the minimum sum of squares of residuals. Again, the solution is valid for all models that are linear in the parameters. [Pg.79]
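
In the notation of the neighboring excerpts (B for the vector of parameter estimates, X for the matrix of parameter coefficients, Y for the measured responses), the general matrix solution referred to here is presumably the familiar normal-equations form

    B = (XᵀX)⁻¹ XᵀY

which, as stated, requires only that the model be linear in the parameters, not in the factors.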

Using matrix least squares techniques (see Section 5.2), the chosen linear model may be fit to the data to obtain a set of parameter estimates, B, from which predicted values of response, ŷ, may be obtained. It is convenient to define a matrix of estimated responses, Ŷ. [Pg.156]

Finally, we turn to an entirely different question involving confidence limits. Suppose we were to carry out the experiments indicated by the design matrix of Equation 11.15 a second time. We would probably not obtain the same set of responses we did the first time (Equation 11.16), but instead would have a different Y matrix. This would lead to a different set of parameter estimates, B, and a predicted response surface that in general would not be the same as that shown in Figure 11.4. A third repetition of the experiments would lead to a third predicted response surface, and so on. The question, then, is what limits can we construct about these response surfaces so that, in a given percentage of cases, those limits will include the entire true response surface? [Pg.221]
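
The thought experiment above can be mimicked numerically. The sketch below (hypothetical straight-line model, design, and noise level) repeats the experiments many times and records the spread of the fitted response surfaces. Note that it yields pointwise percentile bands, whereas the question posed concerns simultaneous limits for the entire surface, which are necessarily wider (e.g., Working-Hotelling-type bands).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([1.0, 2.0, 3.0, 4.0])                 # design points (assumed)
X = np.column_stack([np.ones_like(x), x])          # straight-line model
beta_true = np.array([1.0, 0.5])                   # "true" surface (assumed)
x_grid = np.linspace(0.0, 5.0, 11)
G = np.column_stack([np.ones_like(x_grid), x_grid])

preds = []
for _ in range(2000):
    y = X @ beta_true + rng.normal(scale=0.3, size=x.size)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)      # a new B each repetition
    preds.append(G @ b)                            # a new predicted surface

lo, hi = np.percentile(preds, [2.5, 97.5], axis=0) # pointwise 95% band
print(lo, hi)
```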

Computing the sensitivities is time-consuming. Fortunately, the direct integral approximation of the sensitivity matrix and its principal component analysis can offer almost the same information whenever the direct integral method of parameter estimation applies. [Pg.313]
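
A small sketch of the principal component idea, assuming a hypothetical sensitivity matrix S for the model y = a·exp(−b·t): the singular value decomposition of S plays the role of the principal component analysis, and near-zero singular values flag parameter combinations the data barely inform.

```python
import numpy as np

t = np.linspace(0.1, 10.0, 50)                     # observation grid (assumed)
a, b = 1.0, 0.3                                    # nominal parameters (assumed)

# Sensitivity matrix S of y = a*exp(-b*t) with respect to (a, b):
S = np.column_stack([np.exp(-b * t),
                     -a * t * np.exp(-b * t)])

# SVD as the principal component analysis of S:
U, sing, Vt = np.linalg.svd(S, full_matrices=False)
print(sing)   # near-zero singular values = poorly informed directions
print(Vt)     # rows = principal directions in parameter space
```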

For random sampling from the classical regression model in (17-3), reparameterize the likelihood function in terms of η = 1/σ and δ = (1/σ)β. Find the maximum likelihood estimators of η and δ and obtain the asymptotic covariance matrix of the estimators of these parameters. [Pg.90]

Let us fit the probabilistic model y₁ᵢ = β₀ + r₁ᵢ to the same data (see Figure 5.10). If the least squares approach to the fitting of this model is employed, the appropriate matrices and results are exactly those given in Section 5.2, where the same model was fit to the different factor levels x₁₁ = 3, y₁₁ = 3, x₁₂ = 6, y₁₂ = 5. This identical mathematics should not be surprising: the model does not include a term for the factor x₁, and thus the matrix of parameter coefficients, X, should be the same for both sets of data. The parameter β₀ is again estimated to be 4, and σᵣ² is estimated to be 2. [Pg.82]
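
The quoted numbers are easy to verify; a minimal check with the same two responses and the constant-only model:

```python
import numpy as np

y = np.array([3.0, 5.0])                 # y11 = 3 and y12 = 5
X = np.ones((2, 1))                      # model contains only beta0
b, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ b
s2 = (r @ r) / (len(y) - X.shape[1])     # SSR / (n - p)
print(b[0], s2)                          # 4.0 and 2.0, as stated
```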

Each of these density functions has its maximum at the least-squares values of its included parameters. The resulting distributions of parameters are known as t-distributions. Such distributions were first investigated by Gosset (1908) for single-response problems of quality control at the Guinness brewery in Dublin. The covariance matrix of the estimated parameter vector θ̂ in Eq. (6.6-4) is... [Pg.109]
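
As an illustrative follow-up (all values assumed, not from the cited equation): once the covariance matrix of the parameter vector is available, t-distribution quantiles convert its diagonal into confidence limits for the individual parameters.

```python
import numpy as np
from scipy import stats

theta_hat = np.array([2.0, 0.7])     # point estimates (assumed)
std_err = np.array([0.05, 0.03])     # sqrt(diag(covariance matrix)) (assumed)
dof = 18                             # n - p residual degrees of freedom (assumed)

t_crit = stats.t.ppf(0.975, dof)     # two-sided 95% quantile
ci = np.column_stack([theta_hat - t_crit * std_err,
                      theta_hat + t_crit * std_err])
print(ci)
```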

When the estimation procedure is clearly specified, an approximate covariance matrix of the estimate, Sj, can also be calculated. This matrix reflects the degree of precision of the estimate and depends on the experimental design, the parameters, and the noise statistics. A well-designed experiment with small random fluctuations will lead to precise estimates (small covariance), while a small number of uninformative data and/or a high level of noise will produce unreliable estimates (large covariance). [Pg.2948]

The variance-covariance matrix of the estimated parameters will be as shown on the next page. [Pg.65]

The idea here is to use the values for the thetas in the variance-covariance matrix of the estimate in the omega block, which are available in the NONMEM output file after a successful covariance step. It is necessary to use additive models for the η values, as well as adding an η on the parameter for the creatinine clearance relation, theta(4). Note that the value in sigma is not used in the computations and can be set to anything. For further details of the code, please consult the NONMEM manuals and nmhelp (the online help system distributed with NONMEM). More precise reflections of the confidence and prediction intervals can be obtained by multiple simulations from the final model and suitable dosing/observation patterns, followed by creation of piecewise prediction/confidence intervals from the simulated observations. [Pg.222]
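
The simulation recipe in the last sentence can be sketched generically. The stand-in model below is not NONMEM code, and the profile, η model, and error level are invented for illustration; the real workflow would simulate from the final NONMEM model.

```python
import numpy as np

rng = np.random.default_rng(3)
times = np.linspace(0.5, 24.0, 12)                 # observation times (assumed)

def simulate_profile(rng):
    # Stand-in profile with a lognormal eta on clearance and additive error.
    cl = 1.0 * np.exp(rng.normal(scale=0.2))
    conc = (10.0 / cl) * np.exp(-(cl / 20.0) * times)
    return conc + rng.normal(scale=0.1, size=times.size)

sims = np.array([simulate_profile(rng) for _ in range(1000)])
pi_lo, pi_hi = np.percentile(sims, [2.5, 97.5], axis=0)  # piecewise 95% interval
print(pi_lo, pi_hi)
```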

Uncertainty distributions were defined for all parameters including typical PK, PD parameters, covariate effects, and interindividual and residual variance parameters. These distributions were derived from the variance-covariance matrix of the estimates obtained from a prior analysis, and from a review of prior knowledge and published results. [Pg.890]

One advantage of the Newton-Raphson approach, as opposed to other methods, such as an expectation-maximization approach, is that the matrix of second derivatives of the objective function, evaluated at the optimum, is immediately available. Denoting this matrix H, 2H⁻¹ is an asymptotic variance-covariance matrix of the estimated parameters G and R. Another method to estimate G and R is the noniterative MIVQUE0 method. Using Monte Carlo simulation, Swallow and Monahan (1984) have shown that REML and ML are better estimators than MIVQUE0, although MIVQUE0 is better when REML and ML methods fail to converge. [Pg.188]
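
A minimal numeric illustration of the point about H; the Hessian value below is a stand-in, not from the cited study.

```python
import numpy as np

# Hessian of the -2 log-likelihood at the optimum (stand-in value):
H = np.array([[8.0, 1.5],
              [1.5, 4.0]])
acov = 2.0 * np.linalg.inv(H)        # asymptotic variance-covariance matrix
print(np.sqrt(np.diag(acov)))        # asymptotic standard errors
```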

More generally, the errors of parameter estimation can be calculated on the basis of the variance-covariance matrix (Eq. (5.10)). The variance-covariance matrix is computed here on the basis of the mean sum of squares (MSS) for the pure experimental error, as follows ... [Pg.223]
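
A sketch in the spirit of that computation, with a hypothetical replicated two-level design: the pure-error MSS is formed from the replicate groups and then scales (XᵀX)⁻¹.

```python
import numpy as np

# Replicated two-level design (hypothetical): two runs at x = -1, two at x = +1.
X = np.array([[1.0, -1.0],
              [1.0, -1.0],
              [1.0,  1.0],
              [1.0,  1.0]])
y = np.array([2.9, 3.1, 5.2, 4.8])

# Pure experimental error from the replicate groups:
groups = [y[:2], y[2:]]
ss_pe = sum(((g - g.mean()) ** 2).sum() for g in groups)
dof_pe = sum(len(g) - 1 for g in groups)
mss_pe = ss_pe / dof_pe

# Variance-covariance matrix of the estimates, MSS(pure error) * (X^T X)^-1:
cov_b = mss_pe * np.linalg.inv(X.T @ X)
print(cov_b)
```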

Second, the quality of the parameter estimates is also determined by the appropriateness of truncating the series expansion without the higher-order terms. If the first order is not sufficient, divergence cannot be ruled out. In principle, the Taylor expansion can also be carried further by inclusion of the second derivative (the Hessian matrix). [Pg.261]

A correct knowledge of the error structure is needed in order to obtain a correct summary of the statistical properties of the estimates. This is a difficult task. Measurement errors are usually independent, and often a known distribution, for example Gaussian, is assumed. Many properties of least squares hold approximately for a wide class of distributions if the weights are chosen optimally, that is, equal to the inverse of the variances of the measurement errors, or at least inversely proportional to them if the variances are known only up to a proportionality constant; in other words, if the assumed covariance is equal or proportional to Σv, the N × N covariance matrix of the measurement error v. Under these circumstances, an asymptotically correct approximation of the covariance matrix of the estimation error θ̃ = θ − θ̂ can be used to evaluate the precision of parameter estimates ... [Pg.172]
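
A compact sketch of the weighting rule just described (design, variances, and data hypothetical): weights equal to the inverse measurement-error variances, with the covariance of the estimation error approximated as (XᵀWX)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(1.0, 10.0, 10)
X = np.column_stack([np.ones_like(x), x])          # linear model (assumed)
var_v = 0.01 * x**2                                # known error variances (assumed)
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=np.sqrt(var_v))

W = np.diag(1.0 / var_v)                           # optimal weights: inverse variances
theta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
cov_theta = np.linalg.inv(X.T @ W @ X)             # covariance of the estimation error
print(theta_hat, cov_theta)
```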

In the Fisher approach, the Fisher information matrix J, which is the inverse of the lower bound of the covariance matrix, is treated as a function of the design variables, and usually the determinant of J is maximized (this is called D-optimal design) in order to maximize the precision of the parameter estimates, and thus numerical identifiability. [Pg.174]
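
A toy illustration of D-optimality (hypothetical exponential model and candidate times; a sketch, not a general-purpose design algorithm): among all two-point designs, pick the one maximizing the determinant of the information matrix built from the model sensitivities.

```python
import numpy as np
from itertools import combinations

a, b = 1.0, 0.3                                    # nominal parameters (assumed)
candidates = np.linspace(0.5, 12.0, 24)            # candidate sampling times

def information(times):
    # Information matrix J = S^T S from the sensitivities of y = a*exp(-b*t).
    S = np.column_stack([np.exp(-b * times),
                         -a * times * np.exp(-b * times)])
    return S.T @ S

best = max(combinations(candidates, 2),
           key=lambda ts: np.linalg.det(information(np.array(ts))))
print(best)                                        # sampling times maximizing det(J)
```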

