
Standard Errors of the Parameters

We have already given the equations for the computation of the standard errors in the parameters optimised by linear regression, equation (4.32). The equations are very similar for parameters that are passed through the Newton-Gauss algorithm. In fact, at the end of the iterative fitting, the relevant information has already been calculated. [Pg.161]

As before, the standard error σ_pi in the fitted parameters p_i can be estimated from the expression [Pg.161]

The denominator denotes the number of degrees of freedom, df; it is defined as the number of experimental values m (elements of y) minus the total number of optimised parameters np. [Pg.161]

The implementation into nglm.m is straightforward. The curvature matrix needs to be passed back to the main program. [Pg.161]

The routine below is essentially the same as Main_chrom.m, which we have used earlier (p. 158). The additions are that nglm.m returns the curvature matrix Curv, and a few lines at the end perform the actual computation and output of the standard deviations. [Pg.161]
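The computation sketched above can be written outside MATLAB as well. The following Python sketch (illustrative names, not the nglm.m interface) forms the curvature matrix Curv = JᵀJ from the Jacobian of the residuals and converts the diagonal of its inverse, scaled by ssq/df, into parameter standard errors:

```python
import numpy as np

def param_std_errors(residuals, jacobian):
    """Standard errors of fitted parameters from the Newton-Gauss
    curvature matrix Curv = J^T J. Illustrative sketch; function and
    variable names are assumptions, not the nglm.m interface."""
    m = residuals.size            # number of experimental values
    n_p = jacobian.shape[1]       # number of optimised parameters
    df = m - n_p                  # degrees of freedom
    ssq = residuals @ residuals   # sum of squared residuals
    curv = jacobian.T @ jacobian  # curvature matrix
    # sigma_pi = sqrt( (ssq/df) * [Curv^-1]_ii )
    return np.sqrt(ssq / df * np.diag(np.linalg.inv(curv)))
```

For a linear model the Jacobian is just the design matrix, so the result reduces to the familiar linear-regression expression.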


Using a constant error for the measurement of the osmotic coefficient, estimate Pitzer's parameters as well as the standard errors of the parameter estimates by minimizing the objective function given by Equation 15.1, and compare the results with the reported parameters. [Pg.279]

The objective function value is, up to a constant, equal to minus twice the log-likelihood of the fit. Thus, the minimum of the objective function corresponds to the maximum-likelihood estimates of the model parameters, i.e. the parameter values that describe the data best. The standard errors of the parameter estimates are also calculated by the maximum likelihood method. [Pg.460]
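For independent Gaussian errors with a common standard deviation, the correspondence between the sum-of-squares objective and -2 log-likelihood can be made explicit: they differ only by a constant that does not depend on the model parameters. A minimal sketch (hypothetical helper, not from the cited text):

```python
import numpy as np

def neg_two_log_likelihood(y, y_hat, sigma):
    """-2 log L for independent Gaussian errors with common standard
    deviation sigma. The weighted sum of squares carries all the
    parameter dependence; the remaining term is a constant."""
    n = y.size
    wssq = np.sum(((y - y_hat) / sigma) ** 2)
    return wssq + n * np.log(2 * np.pi * sigma ** 2)
```

Because the constant cancels, comparing two parameter sets by -2 log L is the same as comparing their weighted sums of squares.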

The fitting process was carried out with a program based on a least square procedure [48] that allows us to calculate the best-fitting parameters of the equation defining the relation A(T) versus z, that is, Equation 4.25, specifically, a, 7 0, and AT. The regression coefficient and the standard errors were also calculated with the least square methodology. The calculated regression coefficients fluctuated between 0.98 and 0.99. The values calculated for the parameters T0 and AT, and the standard errors of the parameters, are reported in Table 4.9. [Pg.186]

If the number of concentration measurements per subject were smaller than twice the number of fixed effects parameters and the standard errors of the parameter estimates were not provided, then some downweighting would be necessary. It is possible to compute the expected standard errors for any given study (e.g., see Retout et al. (23), but this is beyond the scope of this chapter). [Pg.151]

Bias correction can be dangerous in practice because of the high variability in its estimate. In spite of this, its estimation is usually worthwhile. If the bias is small relative to the standard error of the parameter, then it is best to use the uncorrected estimate θ̂ rather than the bias-corrected one. If the bias is large compared to the SE of the parameter, then it is an indication that θ̂ is not an appropriate estimate of the parameter θ. [Pg.414]
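A common way to obtain both quantities in this comparison is the bootstrap. The sketch below (illustrative, not from the cited text) resamples the data with replacement to estimate the bias and standard error of an arbitrary estimator, so that the bias-versus-SE rule of thumb can be applied:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_bias_and_se(data, estimator, n_boot=500):
    """Bootstrap estimates of bias and standard error of an estimator
    (illustrative sketch; names and defaults are assumptions)."""
    theta_hat = estimator(data)
    boot = np.array([
        estimator(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    bias = boot.mean() - theta_hat  # bootstrap bias estimate
    se = boot.std(ddof=1)           # bootstrap standard error
    return theta_hat, bias, se
```

If |bias| is small relative to se, the rule of thumb above favours keeping the uncorrected estimate.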

The positional parameters derived in the usual crystal structure determination represent the coordinates of the first moments of the distribution of instantaneous atomic centers as produced by zero point and thermal displacements from an equilibrium configuration. The distance between pairs of first-moment positions, together with a measure of precision derived in a straightforward way from the estimated standard errors of the parameters, constitutes a raw distance, and its precision measure is as reliable as are those of the positional parameters. This is the quantity usually quoted by crystallographers as raw or uncorrected interatomic distance. [Pg.221]

This example illustrates what may happen when an unidentifiable model is fit to data. The model may fail to converge or, if it does converge, large standard errors of the parameter estimates may be observed despite an excellent goodness of fit. If the latter happens during model development, this is a good indication that the model is unidentifiable. [Pg.33]

The square roots of Var(θ̂0) and Var(θ̂1) are called the standard errors of the parameter estimates, denoted SE(θ̂0) and SE(θ̂1), respectively. The residual variance estimator, σ², is estimated by... [Pg.59]
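For the straight line y = θ0 + θ1·x, these standard errors have closed forms built on the residual variance estimator σ̂² = RSS/(n - 2): Var(θ̂1) = σ̂²/Sxx and Var(θ̂0) = σ̂²(1/n + x̄²/Sxx). A small Python sketch (function name is illustrative):

```python
import numpy as np

def line_fit_std_errors(x, y):
    """Least-squares fit of y = theta0 + theta1*x with the standard
    errors SE(theta0) and SE(theta1). Illustrative sketch."""
    n = x.size
    xbar = x.mean()
    sxx = np.sum((x - xbar) ** 2)
    theta1 = np.sum((x - xbar) * (y - y.mean())) / sxx
    theta0 = y.mean() - theta1 * xbar
    rss = np.sum((y - theta0 - theta1 * x) ** 2)
    sigma2 = rss / (n - 2)  # residual variance estimator
    se1 = np.sqrt(sigma2 / sxx)
    se0 = np.sqrt(sigma2 * (1.0 / n + xbar ** 2 / sxx))
    return (theta0, se0), (theta1, se1)
```

The same numbers fall out of the general matrix expression σ̂²(XᵀX)⁻¹ when X has a column of ones and a column of x values.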

Rubin (1987) proposed that if m imputed data sets are analyzed, generating m different sets of parameter estimates, then these m sets of parameter estimates need to be combined to produce a single set of parameter estimates that takes into account the added variability from the imputed values. He proposed that if θ̂i and SE(θ̂i) are the parameter estimates and standard errors of the parameter estimates, respectively, from the ith imputed data set, then the point estimate for the m multiple-imputation data sets is [Pg.89]

The multiple-imputation standard error of the parameter estimate θ̂j is then the square root of Eq. (2.106). Examination of Eq. (2.106) shows that the multiple-imputation standard error is a weighted sum of the within- and between-data set standard errors. As m increases to infinity, the variance of the parameter estimate becomes the average of the parameter-estimate variances. [Pg.89]
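Rubin's combining rules can be written in a few lines. In the sketch below (illustrative names, not from the cited text), the total variance is the mean within-imputation variance plus (1 + 1/m) times the between-imputation variance, and the pooled standard error is its square root:

```python
import numpy as np

def rubin_combine(estimates, std_errors):
    """Combine one parameter across m imputed data sets (Rubin, 1987).
    Total variance T = mean within-imputation variance
    + (1 + 1/m) * between-imputation variance."""
    theta = np.asarray(estimates, float)
    se = np.asarray(std_errors, float)
    m = theta.size
    theta_bar = theta.mean()      # pooled point estimate
    w = np.mean(se ** 2)          # within-imputation variance
    b = theta.var(ddof=1)         # between-imputation variance
    t = w + (1 + 1 / m) * b       # total variance
    return theta_bar, np.sqrt(t)
```

The pooled standard error is never smaller than the within-imputation component, reflecting the extra uncertainty contributed by the imputed values.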

Most software packages use Eq. (3.47) as the default variance estimator. The standard errors of the parameter estimates are computed as the square roots of the diagonal elements of the variance-covariance matrix. [Pg.105]

Inferences on the parameter estimates are made in the same way as for a linear model, but the inferences are only approximate. Therefore, using a t-test in nonlinear regression to test whether some parameter equals zero or some other value is risky and should be discouraged (Myers, 1986). However, that is not to say that the standard errors of the parameter estimates cannot be used as a model discrimination criterion. Indeed, a model with small standard errors is a better model than one with large standard errors, all other factors... [Pg.105]
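The approximate Wald-type test alluded to here divides the distance of the estimate from the hypothesized value by its standard error and compares the result with a critical value from the t distribution with df = m - np degrees of freedom (for example, 2.228 for df = 10 at the 5% level). A minimal sketch, with the caveat from the text that the distributional assumption is only asymptotic in nonlinear regression:

```python
def t_test_parameter(theta_hat, se, t_crit, theta0=0.0):
    """Approximate (Wald) t-test for H0: theta = theta0.
    t_crit is the two-sided critical value from a t table for the
    model's degrees of freedom; in nonlinear regression the result
    is only a rough guide, not an exact test. Illustrative sketch."""
    t = (theta_hat - theta0) / se
    return t, abs(t) > t_crit
```

For example, an estimate of 2.0 with a standard error of 0.5 gives t = 4.0, which exceeds 2.228 and would be flagged as significant at the 5% level for df = 10.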

Figure 3.13 Model parameter estimates as a function of the prior standard deviation for clearance. A 1-compartment model with absorption was fit to the data in Table 3.5 using a proportional error model and the SAAM II software system. Starting values were 5000 mL/h, 110 L, and 1.0 per hour for clearance (CL), volume of distribution (Vd), and absorption rate constant (ka), respectively. The Bayesian prior mean for clearance was fixed at 4500 mL/h while the standard deviation was systematically varied. The error bars represent the standard error of the parameter estimate. The open symbols are the parameter estimates when prior information is not included in the model.
Bonate (2002), with the help of many contributors, compared the consistency across users of NONMEM-reported parameter estimates and their standard errors for five different models. All users used NONMEM V on a personal (31/38, 81%), Unix (6/38, 16%), or Macintosh (1/38, 3%) computer. Ten different compilers were tested. In those models that optimized without errors, the estimates of the fixed effects and variance components were 100% consistent across users, although there were some small differences in the estimates of the standard errors of the parameter estimates. Different compilers produced small differences in the estimates of the standard errors. [Pg.304]

The values in parentheses are the standard errors of the parameters. Since they are much smaller than the estimates of the regression coefficients, we conclude that all three parameters are statistically significant. If a more rigorous analysis is necessary, we can perform a t test on each one. [Pg.230]

The design matrix, whose elements are the proportions used to prepare the various mixtures, is presented in Table 7.5, in terms of components and of pseudocomponents. Fig. 7.8b shows the geometric representation of the design in terms of pseudocomponents. The values of the two responses were determined in duplicate for every mixture. These values are also shown in Table 7.5 and were used to obtain a pooled estimate of the experimental variance, from which we can calculate the standard errors of the parameter estimates. [Pg.337]
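A pooled variance estimate from duplicate responses can be computed by letting each pair contribute s_i² = (y_i1 - y_i2)²/2, each with one degree of freedom. A small Python sketch (illustrative, not from the cited text):

```python
import numpy as np

def pooled_variance(duplicates):
    """Pooled experimental variance from duplicate measurements.
    duplicates is an (n, 2) array-like; each pair contributes
    s_i^2 = (y_i1 - y_i2)^2 / 2 with one degree of freedom.
    Illustrative sketch."""
    d = np.asarray(duplicates, float)
    diffs = d[:, 0] - d[:, 1]
    # average of the per-pair variance estimates
    return np.sum(diffs ** 2 / 2) / d.shape[0]
```

This pooled estimate can then replace the residual variance when computing the standard errors of the parameter estimates.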

In the case of conventional regression, we obtain the following model, with the corresponding standard errors of the parameters, for the straight-line regression ... [Pg.230]

The standard errors of the parameters are calculated from the diagonal elements of the variance-covariance matrix in Eq. (6.22) ...

