
Jackknifing error estimates

Although this approach is still used, it is undesirable for statistical reasons: error calculations underestimate the true uncertainty associated with the equations (17, 21). A better approach is to use the equations developed for one set of lakes to infer chemistry values from counts of taxa from a second set of lakes (i.e., cross-validation). The extra time and effort required to develop the additional data for the test set is a major limitation of this approach. Computer-intensive techniques, such as jackknifing or bootstrapping, can produce error estimates from the original training set (53), without having to collect data for additional lakes. [Pg.30]

Jackknifing involves removing one sample from the calibration set, deriving the inference equations based on the remaining set of lakes (i.e., n — 1), and then using the new inference equation to derive an inferred value for the one sample that was removed (i.e., providing an independent error estimate). These steps are repeated until all samples have been left out once from the calibration process and used to calculate a new inferred value. The set of new inferred values is then used in conjunction with the... [Pg.30]
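The leave-one-out procedure can be sketched in a few lines of code. The following is a minimal illustration only: a simple linear calibration fitted with numpy stands in for the taxon-based inference equations, and the variable names and synthetic lake data are hypothetical.

```python
import numpy as np

def loo_jackknife_predictions(x, y):
    """Leave-one-out jackknife: refit the calibration without sample i,
    then predict the held-out sample, so each inferred value is
    independent of the equation used to produce it."""
    n = len(x)
    inferred = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i              # drop sample i (n - 1 lakes remain)
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        inferred[i] = slope * x[i] + intercept
    return inferred

# Hypothetical calibration data: a taxon-derived index (x) vs. lake pH (y)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 30)
y = 0.4 * x + 5.0 + rng.normal(0.0, 0.3, 30)

pred = loo_jackknife_predictions(x, y)
rmsep = np.sqrt(np.mean((pred - y) ** 2))     # prediction error from the training set alone
print(f"Jackknifed RMSEP: {rmsep:.3f}")
```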

Techniques to use for evaluations have been discussed by Cox and Tikvart (42), Hanna (43) and Weil et al. (44). Hanna (45) shows how resampling of evaluation data will allow use of the bootstrap and jackknife techniques so that error bounds can be placed about estimates. [Pg.334]

For a more realistic estimate of the future error, one splits the total data set into a training and a prediction part. With the training set the discriminant functions are calculated, and with the objects of the prediction or validation set the error rate is then calculated. If one has insufficient samples for this splitting, other methods of cross-validation are useful, especially the holdout method of LACHENBRUCH [1975], which is also called jackknifing or "leaving one out". The last name explains the procedure: for every class of objects the discriminant function is developed using all the class mem-... [Pg.186]
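The error-rate calculation can be illustrated as follows. This is a sketch under assumptions: a nearest-centroid rule stands in for the discriminant function, and the two-class synthetic data and names are hypothetical.

```python
import numpy as np

def loo_error_rate(X, labels):
    """Lachenbruch's holdout ("leaving one out"): rebuild the classifier
    without object i, classify object i with the reduced rule, and tally
    the misclassifications."""
    n = len(labels)
    errors = 0
    for i in range(n):
        mask = np.arange(n) != i
        Xtr, ytr = X[mask], labels[mask]
        # Nearest-centroid rule stands in for the discriminant function
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        errors += int(pred != labels[i])
    return errors / n

# Two hypothetical object classes in a two-dimensional feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(2.0, 1.0, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(f"Leave-one-out error rate: {loo_error_rate(X, y):.2f}")
```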

Efron B (1981) Nonparametric estimates of standard error: the jackknife, the bootstrap and other methods. Biometrika 68:589-599 [Pg.753]

The slope and intercept may be estimated by a nonparametric procedure, which is robust to outliers and requires no assumptions of Gaussian error distributions. Notice, however, that the parametric regression procedures do not presume Gaussian distributions of target values, but only of the error distributions. Furthermore, the jackknife principle used for estimation of standard errors for Deming and... [Pg.388]

SE(a0) and SE(b) are the standard errors of the estimated intercept a0 and slope b, respectively. For OLR and WLR, the standard errors are calculated from the formulas presented previously. These formulas also apply approximately for the Deming and weighted Deming procedures. An exact procedure is to apply a computerized resampling principle called the jackknife procedure, which in practice can be carried out... [Pg.389]
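The jackknife calculation of SE(a0) and SE(b) can be illustrated as below. This is a sketch only: ordinary least squares (numpy's polyfit) stands in for the Deming fit, whose jackknife treatment proceeds the same way, and the data and names are illustrative.

```python
import numpy as np

def jackknife_se_regression(x, y):
    """Jackknife standard errors of intercept a0 and slope b: refit with
    each point deleted, form pseudovalues, and take the standard error
    of the pseudovalue mean."""
    n = len(x)
    full = np.polyfit(x, y, 1)                  # [b, a0] on the full data
    pseudo = np.empty((n, 2))
    for i in range(n):
        mask = np.arange(n) != i
        pseudo[i] = n * full - (n - 1) * np.polyfit(x[mask], y[mask], 1)
    se = pseudo.std(axis=0, ddof=1) / np.sqrt(n)
    return {"SE(b)": se[0], "SE(a0)": se[1]}

# Hypothetical method-comparison data
rng = np.random.default_rng(5)
x = rng.uniform(1.0, 20.0, 40)
y = 1.05 * x + 0.5 + rng.normal(0.0, 0.8, 40)
print(jackknife_se_regression(x, y))
```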

When a model is used for descriptive purposes, the components of model evaluation (goodness-of-fit, reliability, and stability) must be assessed. Model evaluation should be done in a manner consistent with the intended application of the PM model. The reliability of the analysis results can be checked by carefully examining diagnostic plots, key parameter estimates, standard errors, case deletion diagnostics (7-9), and/or sensitivity analysis as may seem appropriate. Confidence intervals (standard errors) for parameters may be checked using nonparametric techniques, such as the jackknife and bootstrapping, or the profile likelihood method. Model stability, i.e., whether the covariates in the PM model are those that should be tested for inclusion in the model, can be checked using the bootstrap (9). [Pg.226]
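As one concrete illustration of such a nonparametric check, a bootstrap percentile interval can be computed as follows; the estimator, the sample size, and the lognormal "clearance" sample are assumptions made for this sketch, not part of the cited procedure.

```python
import numpy as np

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric (percentile) bootstrap confidence interval for a
    scalar statistic: resample the data with replacement, recompute the
    estimator, and take empirical quantiles of the resampled estimates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([estimator(data[rng.integers(0, n, size=n)])
                      for _ in range(n_boot)])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical sample of a PK parameter (e.g., clearance) across subjects
clearances = np.random.default_rng(2).lognormal(mean=1.0, sigma=0.3, size=40)
low, high = bootstrap_ci(clearances, np.mean)
print(f"95% bootstrap CI for mean clearance: ({low:.2f}, {high:.2f})")
```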

Furthermore, when alternative approaches are applied in computing parameter estimates, the question to be addressed here is: Do these other approaches yield similar parameter and random-effects estimates and conclusions? An example of addressing this second point would be estimating the parameters of a population pharmacokinetic (PPK) model by the standard maximum likelihood approach and then confirming the estimates by constructing the profile likelihood plot (i.e., mapping the objective function), by using the bootstrap (4, 9) to estimate 95% confidence intervals, or by using the jackknife method (7, 26, 27) and the bootstrap to estimate standard errors of the estimates (4, 9). When the relative standard errors are small and alternative approaches produce similar results, then we conclude the model is reliable. [Pg.236]

M. H. Quenouille introduced the jackknife (JKK) in 1949 (12), and it was later popularized in 1958 by Tukey, who first used the term (13). Quenouille's motivation was to construct an estimator of bias that would have broad applicability. The JKK has been applied to bias correction, the estimation of variance, and standard errors of variables (4, 12-16). Thus, for pharmacometrics it has the potential for improving models and has been applied in the assessment of PMM reliability (17). The JKK may not be employed as a method for model validation. [Pg.402]

For large data sets, the delete-1 jackknife may be impractical since it may require fitting hundreds of data sets. A modification of the delete-1 jackknife is the delete-10% jackknife, in which 10 different jackknife data sets are created, each having a unique 10% of the data removed. Only 10 data sets are modeled using this jackknife modification. All other calculations are as before, but n now becomes the number of data sets, not the number of subjects. The use of the jackknife has largely been supplanted by the bootstrap, since the jackknife has been criticized as producing standard errors with poor statistical behavior when the estimator is nonsmooth (e.g., the median), which may not be a valid criticism for pharmacokinetic parameters. But whether one is better than the other at estimating standard errors of continuous functions is debatable and a matter of preference. [Pg.244]
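A sketch of the delete-10% modification follows, under the assumption that the data can be treated as a flat array of independent units (in a population analysis the deleted unit would be the subject, not the individual observation); the estimator and names are illustrative.

```python
import numpy as np

def delete_10pct_jackknife(data, estimator, seed=0):
    """Delete-10% jackknife: split the data into 10 disjoint ~10% blocks,
    refit with each block removed, and form pseudovalues. Here n in the
    pseudovalue and standard-error formulas is the number of jackknife
    data sets (10), not the number of subjects."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    blocks = np.array_split(idx, 10)        # 10 unique, non-overlapping chunks
    full = estimator(data)
    m = len(blocks)
    pseudo = np.array([m * full - (m - 1) * estimator(np.delete(data, b))
                       for b in blocks])
    se = pseudo.std(ddof=1) / np.sqrt(m)
    return pseudo.mean(), se

# Illustration with a simple estimator on synthetic data
data = np.random.default_rng(4).normal(10.0, 2.0, 500)
est, se = delete_10pct_jackknife(data, np.mean)
print(f"Jackknife estimate: {est:.3f} +/- {se:.3f}")
```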

The square root of the jackknife variance is the jackknife standard error of the estimate. This method is called the delete-1 jackknife because a single observation is removed at a time. A modification of this method, called the delete-n jackknife, is to delete chunks of data at a time and then create the pseudovalues after removal of these chunks. [Pg.354]
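In standard notation (assumed here, since the excerpt's own formula did not survive extraction), with $\hat{\theta}$ the estimate from the full sample and $\hat{\theta}_{(i)}$ the estimate computed with observation $i$ deleted, the delete-1 jackknife quantities are

$$\tilde{\theta}_i = n\hat{\theta} - (n-1)\,\hat{\theta}_{(i)}, \qquad
\widehat{\operatorname{Var}}_{\text{jack}} = \frac{1}{n(n-1)} \sum_{i=1}^{n} \left( \tilde{\theta}_i - \bar{\tilde{\theta}} \right)^2, \qquad
\operatorname{SE}_{\text{jack}} = \sqrt{\widehat{\operatorname{Var}}_{\text{jack}}},$$

where $\bar{\tilde{\theta}}$ is the mean of the pseudovalues $\tilde{\theta}_i$.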

Because the values of the derivatives of f(O) are unknown and there are also higher-order bias terms, it is not obvious from Eq. (4.29) how to correct the bias. At least, it is reassuring to realize that the bias decreases with 1/M, i.e., for sufficiently large time series, the estimate f(O) is reasonable. We will see later on that one can indeed introduce an estimator that reduces the bias to the order of 1/M². This is made possible by the jackknife resampling method, which will also help us estimate the statistical error. [Pg.90]
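The bias-cancelling construction alluded to here can be demonstrated directly. In the following sketch the plain divide-by-n variance, whose O(1/n) bias the jackknife removes, is an assumed stand-in for the time-series estimator; all names are illustrative.

```python
import numpy as np

def jackknife_bias_corrected(data, estimator):
    """Jackknife bias correction: n*theta_hat - (n-1)*mean(theta_hat_(i))
    cancels the O(1/n) bias term, leaving a residual bias of order 1/n^2
    (1/M^2 in the notation of the passage)."""
    n = len(data)
    theta = estimator(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return n * theta - (n - 1) * loo.mean()

# The divide-by-n variance is biased low by a factor (n-1)/n; the
# jackknife-corrected value reproduces the unbiased estimate.
sample = np.random.default_rng(3).normal(0.0, 2.0, 25)
print(np.var(sample), jackknife_bias_corrected(sample, np.var))
```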

