
Jackknifing technique

Techniques to use for evaluations have been discussed by Cox and Tikvart (42), Hanna (43), and Weil et al. (44). Hanna (45) shows how resampling of evaluation data allows the bootstrap and jackknife techniques to be used to place error bounds on estimates. [Pg.334]

Very often a test population of data is not available or would be prohibitively expensive to obtain. When a test population cannot be obtained, internal validation must be considered. The methods of internal PM model validation include data splitting, resampling techniques (cross-validation and bootstrapping) (9, 26-30), and the posterior predictive check (PPC) (31-33). Of note, the jackknife is not considered a model validation technique; it may only be used to correct for bias in parameter estimates and to compute the uncertainty associated with parameter estimation. Cross-validation, bootstrapping, and the posterior predictive check are addressed in detail in Chapter 15. [Pg.237]
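As a concrete illustration of that distinction, the delete-one jackknife can be sketched in a few lines of Python. This is an illustrative sketch, not code from the cited references: it corrects the bias of an estimator and yields a standard error for it, but it does not validate the model itself.

```python
import math

def jackknife(data, estimator):
    """Delete-one jackknife: bias-corrected estimate and standard error."""
    n = len(data)
    theta_full = estimator(data)
    # Re-estimate with each observation left out in turn
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    loo_mean = sum(loo) / n
    bias = (n - 1) * (loo_mean - theta_full)  # jackknife bias estimate
    se = math.sqrt((n - 1) / n * sum((t - loo_mean) ** 2 for t in loo))
    return theta_full - bias, se

# Example: the plug-in variance (divide by n) is biased; the jackknife
# correction recovers the unbiased (divide by n - 1) estimate exactly.
data = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.0]
plug_in_var = lambda x: sum((v - sum(x) / len(x)) ** 2 for v in x) / len(x)
corrected, se = jackknife(data, plug_in_var)
```

For the plug-in variance the correction is exact; for a general nonlinear estimator the jackknife removes only the leading O(1/n) term of the bias.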

The reliability of the parameter estimates can be checked using a nonparametric approach, the jackknife technique (20, 34). The nonlinearity of the statistical model and ill-conditioning of a given problem can produce numerical difficulties and force the estimation algorithm into a false minimum. [Pg.393]

The precision of the primary parameters can be estimated from the final fit of the multiexponential function to the data, but the estimates are of doubtful validity if the model is severely nonlinear (35). The precision of the secondary parameters (in this case variability) is likely to be even less reliable. Consequently, the results of statistical tests carried out with precision estimated from the final fit could easily be misleading, hence the need to assess the reliability of model estimates. A possible way of reducing bias in parameter estimates and of calculating realistic variances for them is to subject the data to the jackknife technique (36, 37). The technique requires little by way of assumption or analysis. A naive Student t approximation for the standardized jackknife estimator (34) or the bootstrap (31, 38, 39) (see Chapter 15 of this text) can be used. [Pg.393]
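The naive Student t approximation mentioned above can be sketched via jackknife pseudo-values. The function below is an illustrative helper, assuming the caller supplies a two-sided critical value t_crit (e.g., 2.365 for 7 degrees of freedom at the 95% level):

```python
import math
import statistics

def jackknife_t_interval(data, estimator, t_crit):
    """Naive Student t interval from jackknife pseudo-values."""
    n = len(data)
    theta = estimator(data)
    # Pseudo-values are treated as approximately i.i.d. observations
    pseudo = [n * theta - (n - 1) * estimator(data[:i] + data[i + 1:])
              for i in range(n)]
    est = statistics.mean(pseudo)                 # jackknife estimate
    se = statistics.stdev(pseudo) / math.sqrt(n)  # its standard error
    return est - t_crit * se, est + t_crit * se

# For the sample mean the pseudo-values reduce to the data themselves,
# so the interval coincides with the ordinary t interval for the mean.
data = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.0]
lo, hi = jackknife_t_interval(data, lambda x: sum(x) / len(x), 2.365)
```

The pseudo-value formulation is equivalent to the bias-and-variance form of the jackknife; its advantage here is that treating pseudo-values as near-independent observations motivates the Student t approximation directly.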

Efron B (1982) The Jackknife, the Bootstrap and Other Resampling Plans. Society for Industrial and Applied Mathematics, Philadelphia, PA [Pg.199]

Although this approach is still used, it is undesirable for statistical reasons: error calculations underestimate the true uncertainty associated with the equations (17, 21). A better approach is to use the equations developed for one set of lakes to infer chemistry values from counts of taxa from a second set of lakes (i.e., cross-validation). The extra time and effort required to develop the additional data for the test set is a major limitation to this approach. Computer-intensive techniques, such as jackknifing or bootstrapping, can produce error estimates from the original training set (53), without having to collect data for additional lakes. [Pg.30]
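The idea can be sketched with leave-one-out cross-validation of a simple one-variable calibration (illustrative code, not the transfer functions of Ref. 53): each observation is predicted by a model fitted without it, so the resulting error estimate comes from the training set alone yet is not optimistically biased.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def loo_rmse(xs, ys):
    """Leave-one-out prediction error: refit without each site, predict it."""
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append(ys[i] - (a + b * xs[i]))
    return (sum(e * e for e in errs) / len(errs)) ** 0.5
```

Because every prediction comes from a model that never saw that observation, loo_rmse avoids the optimism of the apparent (training-set) error while using no data beyond the original set.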

Two non-parametric methods for hypothesis testing with PCA and PLS are cross-validation and the jackknife estimate of variance. Both methods are described in some detail in the sections describing the PCA and PLS algorithms. Cross-validation is used to assess the predictive property of a PCA or a PLS model. The distribution function of the cross-validation test statistic cvd-sd under the null hypothesis is not well known. However, for PLS, the distribution of cvd-sd has been determined empirically by computer simulation [24] for some particular types of experimental designs. In particular, the discriminant analysis (or ANOVA-like) PLS analysis has been investigated in some detail, as has the situation where Y is one-dimensional. This simulation study is referred to for detailed information. However, some tables of the critical values of cvd-sd at the 5% level are given in Appendix C. [Pg.312]

When a model is used for descriptive purposes, goodness-of-fit, reliability, and stability, the components of model evaluation, must be assessed. Model evaluation should be done in a manner consistent with the intended application of the PM model. The reliability of the analysis results can be checked by carefully examining diagnostic plots, key parameter estimates, standard errors, case deletion diagnostics (7-9), and/or sensitivity analysis as may seem appropriate. Confidence intervals (standard errors) for parameters may be checked using nonparametric techniques, such as the jackknife and bootstrapping, or the profile likelihood method. The stability of the model, that is, whether the covariates in the PM model are those that should be retained, can be checked using the bootstrap (9). [Pg.226]
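A nonparametric bootstrap check of a parameter's confidence interval, of the kind referred to above, can be sketched as a percentile interval (the function name and defaults are illustrative):

```python
import random

def bootstrap_percentile_ci(data, estimator, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for an estimator."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(data)
    # Resample the data with replacement and re-estimate each time
    stats = sorted(estimator([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    return (stats[int(n_boot * alpha / 2)],
            stats[int(n_boot * (1 - alpha / 2)) - 1])

data = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.0]
lo, hi = bootstrap_percentile_ci(data, lambda x: sum(x) / len(x))
```

Comparing such an interval with the asymptotic (standard-error-based) interval from the fitting software is one simple way to flag unreliable uncertainty estimates.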

Jackknife (JKK), cross-validation, and the bootstrap are the methods referred to as resampling techniques. Though not strictly classified as a resampling technique, the posterior predictive check is also covered in this chapter, as it has several characteristics similar to those of resampling methods. [Pg.401]

The most important statistical parameters r, s, and F and the 95% confidence intervals of the regression coefficients are calculated by Eqs. (20) to (23) (for details on Eqs. (20) to (23), see Refs. 39 to 42). For more details on linear (multiple) regression analysis and the calculation of different statistical parameters, as well as other validation techniques (e.g., the jackknife method and bootstrapping), see Refs. 33 and 39 to 42.
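Although Eqs. (20) to (23) are not reproduced here, the same three quantities can be sketched for the simple one-variable case (illustrative code; the cited equations also cover the multiple-regression case):

```python
def regression_stats(xs, ys):
    """r, s, and F for a simple linear regression y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    r = sxy / (sxx * syy) ** 0.5             # correlation coefficient
    s = (ss_res / (n - 2)) ** 0.5            # standard error of estimate
    F = (syy - ss_res) / (ss_res / (n - 2))  # F statistic (1, n - 2 df)
    return r, s, F
```

For the one-variable case these quantities are linked by the identity F = r²(n - 2)/(1 - r²), which provides a quick internal consistency check on reported statistics.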

