
Jackknife method

A second possibility is to use some estimate of the variance of the loadings. This can be done by the jackknife method due to Quenouille and Tukey (see [37]) or by Efron's bootstrap method [38] (the colourful terminology stems from the expressions "jack of all trades and master of none" and "lifting yourself up by your own bootstraps"). The use of the bootstrap to estimate the variance of the loadings in PCA has been described [39] and will not be elaborated upon further. The jackknife method is used partly because it is a natural side-product of cross-validation and therefore computationally non-demanding, and partly because the jackknife estimate of variance is used later on in conjunction with PLS. [Pg.329]

The jackknife method is based on an idea similar to cross-validation. The calculation of the statistical model is repeated g times, holding out 1/g-th of the data each time. In the end, each element has been held out once and once only (exactly as in cross-validation). Thus, g estimates of each parameter are obtained, one for each calculation round. It has been proposed that the quantity... [Pg.329]
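The scheme can be sketched as follows (a minimal illustration in which the statistic is simply the mean, the g held-out segments are formed by taking every g-th element, and the delete-a-group jackknife variance formula is used; the data values are purely hypothetical):

```python
import statistics

def grouped_jackknife(data, estimator, g):
    """Repeat the calculation g times, holding out 1/g-th of the data
    each time; every element is held out once and once only."""
    estimates = []
    for i in range(g):
        held_in = [x for j, x in enumerate(data) if j % g != i]
        estimates.append(estimator(held_in))
    theta_bar = statistics.mean(estimates)
    # delete-a-group jackknife estimate of the variance of the estimator
    var = (g - 1) / g * sum((t - theta_bar) ** 2 for t in estimates)
    return estimates, var

estimates, var = grouped_jackknife(
    [2.1, 2.5, 1.9, 2.4, 2.2, 2.0, 2.3, 2.6], statistics.mean, g=4)
```

The g per-round estimates are exactly the side-product of a g-fold cross-validation run, which is why the text calls the jackknife computationally non-demanding in that setting.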

Furthermore, when alternative approaches are applied in computing parameter estimates, the question to be addressed here is: "Do these other approaches yield similar parameter and random-effects estimates and conclusions?" An example of addressing this second point would be estimating the parameters of a population pharmacokinetic (PPK) model by the standard maximum likelihood approach and then confirming the estimates by either constructing the profile likelihood plot (i.e., mapping the objective function), using the bootstrap (4, 9) to estimate 95% confidence intervals, or using the jackknife method (7, 26, 27) and bootstrap to estimate standard errors of the estimates (4, 9). When the relative standard errors are small and alternative approaches produce similar results, we conclude that the model is reliable. [Pg.236]

When the data set is not too large, one commonly prefers the leave-one-out cross-validation (jackknife) method. This means that one data point is set aside for validation and the remaining data points are used for training. This process is repeated until each data point has been validated once. In other words, for a data set consisting of n points, n validation models must be built. [Pg.333]
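A minimal sketch of this leave-one-out loop, using the sample mean as a stand-in for whatever model is actually being trained:

```python
def loocv_error(data):
    """Leave-one-out cross-validation: each of the n points is validated
    once against a model (here simply the mean) fitted to the other n-1."""
    n = len(data)
    sq_errors = []
    for i in range(n):                      # n models, one per point
        train = data[:i] + data[i + 1:]
        prediction = sum(train) / (n - 1)
        sq_errors.append((data[i] - prediction) ** 2)
    return sum(sq_errors) / n               # mean squared prediction error
```

For n points the loop builds n models, which is why leave-one-out is usually reserved for small data sets.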

The most important statistical parameters r, s, and F and the 95% confidence intervals of the regression coefficients are calculated by Eqs. (20) to (23) (for details on Eqs. (20) to (23), see Refs. 39 to 42). For more details on linear (multiple) regression analysis and the calculation of different statistical parameters, as well as other validation techniques (e.g., the jackknife method and bootstrapping), see Refs. 33 and 39-42. [Pg.546]

Dorfman DD, Berbaum KS, Metz CE (1992) Receiver operating characteristic rating analysis: generalization to the population of readers and patients with the jackknife method. Invest Radiol 27:723-731 [Pg.103]

A final mention of data on real systems is worthwhile, although this paper is not concerned with comparisons with experimental data. It is nonetheless interesting to note that by comparing the experimental data on S. cerevisiae with the theoretical distribution of Eq. 1, using the jackknife method [16,17], it is possible to locate the k parameter in the 95% confidence interval [0.84, 0.93]. Moreover, an analysis of the same data using Bayes factors [18,19] leads to rejection of the hypothesis that the network is precisely critical, since the probability that k = 1 given the data is smaller than 10 under a broad range of prior distributions. The interested reader is referred to [15] for further details. [Pg.37]

In a sense, the jackknife method is complementary to the binning method. Instead of calculating the average of an observable O in a single bin, the jackknife uses the data of all bins except one. We therefore define the ith individual jackknife average by taking all data, reduced by those in the ith bin ... [Pg.90]
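For equally sized bins, this leave-one-bin-out construction and the resulting jackknife error bar can be sketched as follows (a minimal illustration; each entry of `bins` is assumed to be the average of equally many measurements):

```python
def jackknife_bins(bins):
    """bins: per-bin averages of equally many measurements.
    The ith jackknife average uses the data of all bins except bin i."""
    M = len(bins)
    total = sum(bins)
    jk = [(total - b) / (M - 1) for b in bins]   # leave-one-bin-out averages
    mean = sum(jk) / M
    # jackknife error bar: note the (M - 1)/M prefactor, because the
    # M jackknife averages are strongly correlated with one another
    err = ((M - 1) / M * sum((x - mean) ** 2 for x in jk)) ** 0.5
    return mean, err
```

The (M - 1)/M prefactor, rather than the naive 1/(M - 1), compensates for the fact that any two jackknife averages share all but two bins of data.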

Error bars for the numerical value of 1/⟨Knm⟩ were calculated using the jackknife method and found to be negligible. [Pg.293]

The second method, the leave-one-at-a-time or jackknife procedure, repeats the whole LDA procedure as many times as there are objects; each time a single object alone forms the evaluation set. [Pg.116]
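The leave-one-at-a-time loop can be sketched generically as follows; a nearest-class-mean rule stands in for the LDA classifier of the text, and the one-dimensional data are purely illustrative:

```python
def loo_error_rate(xs, ys):
    """Leave-one-at-a-time validation: refit on all objects but one,
    classify the held-out object, and count the misclassifications.
    A nearest-class-mean rule stands in for LDA here."""
    n = len(xs)
    wrong = 0
    for i in range(n):
        means = {}
        for cls in set(ys):                 # fit on everything except object i
            pts = [xs[j] for j in range(n) if j != i and ys[j] == cls]
            means[cls] = sum(pts) / len(pts)
        predicted = min(means, key=lambda c: abs(xs[i] - means[c]))
        wrong += predicted != ys[i]
    return wrong / n
```

Because each object is classified by a model that never saw it, the resulting error rate is a less optimistic estimate than the resubstitution error.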

Two non-parametric methods for hypothesis testing with PCA and PLS are cross-validation and the jackknife estimate of variance. Both methods are described in some detail in the sections describing the PCA and PLS algorithms. Cross-validation is used to assess the predictive property of a PCA or a PLS model. The distribution function of the cross-validation test statistic cvd-sd under the null hypothesis is not well known. However, for PLS, the distribution of cvd-sd has been determined empirically by computer simulation [24] for some particular types of experimental designs. In particular, the discriminant-analysis (or ANOVA-like) PLS analysis has been investigated in some detail, as has the situation where Y is one-dimensional. This simulation study is referred to for detailed information. However, some tables of the critical values of cvd-sd at the 5% level are given in Appendix C. [Pg.312]

For a more realistic estimate of the future error one splits the total data set into a training part and a prediction part. With the training set the discriminant functions are calculated, and with the objects of the prediction or validation set the error rate is then determined. If one has insufficient samples for this splitting, other methods of cross-validation are useful, especially the holdout method of Lachenbruch [1975], which is also called "jackknifing" or "leaving one out". The last name explains the procedure: for every class of objects the discriminant function is developed using all the class mem-... [Pg.186]

Efron B (1981) Nonparametric estimates of standard error: the jackknife, the bootstrap and other methods. Biometrika 68:589-599 [Pg.753]

Wu CFJ. Jackknife, bootstrap and other resampling methods in regression analysis (with discussion). Ann Stat 1986;14:1261-95. [Pg.407]

When a model is used for descriptive purposes, goodness-of-fit, reliability, and stability, the components of model evaluation, must be assessed. Model evaluation should be done in a manner consistent with the intended application of the PM model. The reliability of the analysis results can be checked by carefully examining diagnostic plots, key parameter estimates, standard errors, case deletion diagnostics (7-9), and/or sensitivity analysis as may seem appropriate. Confidence intervals (standard errors) for parameters may be checked using nonparametric techniques, such as the jackknife and bootstrapping, or the profile likelihood method. Model stability, i.e., whether the covariates in the PM model are those that should be tested for inclusion in the model, can be checked using the bootstrap (9). [Pg.226]
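A nonparametric bootstrap standard error of the kind mentioned here can be sketched as follows (a generic illustration, not tied to any particular PM software; the number of replicates `B`, the seed, and the data values are arbitrary choices):

```python
import random
import statistics

def bootstrap_se(data, estimator, B=1000, seed=0):
    """Nonparametric bootstrap: resample the data with replacement B times
    and take the spread of the re-estimates as the standard error."""
    rng = random.Random(seed)
    replicates = [estimator([rng.choice(data) for _ in data])
                  for _ in range(B)]
    return statistics.stdev(replicates)
```

Sorting the replicates and reading off the 2.5th and 97.5th percentiles gives the percentile-style 95% confidence interval referred to in the text.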

Very often a test population of data is not available or would be prohibitively expensive to obtain. When a test population cannot be obtained, internal validation must be considered. The methods of internal PM model validation include data splitting, resampling techniques (cross-validation and bootstrapping) (9, 26-30), and the posterior predictive check (PPC) (31-33). Of note, the jackknife is not considered a model validation technique; it may only be used to correct for bias in parameter estimates and to compute the uncertainty associated with parameter estimation. Cross-validation, bootstrapping, and the posterior predictive check are addressed in detail in Chapter 15. [Pg.237]

B. Efron, Bootstrap methods: another look at the jackknife. Ann Stat 7:1-26 (1979). [Pg.244]

The jackknife (JKK), cross-validation, and the bootstrap are the methods referred to as resampling techniques. Though not strictly classified as a resampling technique, the posterior predictive check is also covered in this chapter, as it has several characteristics similar to those of resampling methods. [Pg.401]

M. H. Quenouille introduced the jackknife (JKK) in 1949 (12), and it was later popularized by Tukey in 1958, who first used the term (13). Quenouille's motivation was to construct an estimator of bias that would have broad applicability. The JKK has been applied to bias correction and to the estimation of the variance and standard error of variables (4, 12-16). Thus, for pharmacometrics it has the potential for improving models and has been applied in the assessment of PMM reliability (17). The JKK may not be employed as a method for model validation. [Pg.402]
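Quenouille's bias correction and the jackknife standard error can be sketched together as follows (a minimal leave-one-out illustration; the plug-in variance is chosen as the target estimator only because its bias happens to be corrected exactly):

```python
def jackknife_bias_corrected(data, estimator):
    """Quenouille's bias-corrected estimate plus the jackknife standard
    error, both from the n leave-one-out re-estimates."""
    n = len(data)
    theta_full = estimator(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    loo_mean = sum(loo) / n
    bias = (n - 1) * (loo_mean - theta_full)       # jackknife bias estimate
    theta_jack = theta_full - bias                 # = n*theta - (n-1)*loo_mean
    se = ((n - 1) / n * sum((t - loo_mean) ** 2 for t in loo)) ** 0.5
    return theta_jack, bias, se

def plug_in_var(d):
    """Plug-in variance with divisor n -- biased downward by sigma^2/n."""
    m = sum(d) / len(d)
    return sum((x - m) ** 2 for x in d) / len(d)
```

Applied to `plug_in_var`, the correction recovers the unbiased (divisor n - 1) sample variance exactly, a classical check of the method.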

Cross-validation is an internal resampling method much like the older jackknife and bootstrap methods [Efron 1982, Efron Gong 1983, Efron Tibshirani 1993, Wehrens et al. 2000]. The principle of cross-validation goes back to Stone [1974] and Geisser [1974], and the basic idea is simple ... [Pg.148]

Duchesne C, MacGregor JF. Jackknife and bootstrap methods in the identification of dynamic models. Journal of Process Control 2001;11:553-564. [Pg.355]

