
Jackknife models

Two non-parametric methods for hypothesis testing with PCA and PLS are cross-validation and the jackknife estimate of variance. Both methods are described in some detail in the sections on the PCA and PLS algorithms. Cross-validation is used to assess the predictive ability of a PCA or PLS model. The distribution function of the cross-validation test statistic cvd-sd under the null hypothesis is not well known. However, for PLS, the distribution of cvd-sd has been determined empirically by computer simulation [24] for some particular types of experimental designs. In particular, discriminant (ANOVA-like) PLS analysis has been investigated in some detail, as has the situation where Y is one-dimensional. The reader is referred to this simulation study for detailed information; however, some tables of the critical values of cvd-sd at the 5% level are given in Appendix C. [Pg.312]
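As an illustration of the leave-out scheme (though not of the cvd-sd statistic itself, whose definition is not reproduced in the excerpt), the following minimal sketch cross-validates a PLS model using the familiar PRESS/Q2 criterion. It assumes scikit-learn and NumPy; the data, fold count, and component count are illustrative, not from the source.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))                    # synthetic data, illustrative only
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=40)

press = 0.0
for train, test in KFold(n_splits=7, shuffle=True, random_state=0).split(X):
    model = PLSRegression(n_components=2).fit(X[train], y[train])
    press += np.sum((y[test] - model.predict(X[test]).ravel()) ** 2)

q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)   # cross-validated R^2 (Q2)
print(f"PRESS = {press:.2f}, Q2 = {q2:.3f}")
```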

The jackknife estimate of variance can be used to assess the significance of the weight and loading coefficients in a PLS model. This is a valuable source of information when interpretation of the parameters is warranted. The weights with jackknifed standard deviations from the BHT example are given in Table 6.5. The weights of the first PLS dimension suggest that the behavioural effect of these doses of BHT is to suppress most aspects of a rat's... [Pg.313]

The jackknife method is based on an idea similar to cross-validation. The calculation of the statistical model is repeated g times, holding out 1/g of the data each time. In the end, each element has been held out once and only once (exactly as in cross-validation). Thus, a number of estimates of each parameter are obtained, one from each calculation round. It has been proposed that the quantity... [Pg.329]
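The excerpt's proposed quantity is truncated above. The sketch below implements the hold-out scheme it describes and uses the standard grouped-jackknife variance formula, which is an assumption about the missing expression, not a quote. NumPy only; `estimator` is any function mapping data to a parameter estimate.

```python
import numpy as np

def grouped_jackknife(data, estimator, g):
    """Refit g times, each time holding out 1/g of the data."""
    n = len(data)
    groups = np.array_split(np.arange(n), g)      # each element held out exactly once
    estimates = np.array([
        estimator(data[np.setdiff1d(np.arange(n), held_out)])
        for held_out in groups
    ])
    mean = estimates.mean()
    var = (g - 1) / g * np.sum((estimates - mean) ** 2)  # grouped-jackknife variance
    return mean, var

data = np.random.default_rng(1).normal(loc=5.0, size=60)
print(grouped_jackknife(data, np.mean, g=6))
```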

A variety of procedures are available to assess a model's true expected performance: split-sample validation, cross-validation, jackknifing, and bootstrapping. [Pg.420]

When a model is used for descriptive purposes, goodness-of-fit, reliability, and stability, the components of model evaluation, must be assessed. Model evaluation should be done in a manner consistent with the intended application of the PM model. The reliability of the analysis results can be checked by carefully examining diagnostic plots, key parameter estimates, standard errors, case deletion diagnostics (7-9), and/or sensitivity analysis, as may seem appropriate. Confidence intervals (standard errors) for parameters may be checked using nonparametric techniques, such as the jackknife and bootstrapping, or the profile likelihood method. Model stability, that is, whether the covariates in the PM model are those that should be tested for inclusion in the model, can be checked using the bootstrap (9). [Pg.226]

Furthermore, when alternative approaches are applied in computing parameter estimates, the question to be addressed is: do these other approaches yield similar parameter and random-effects estimates and conclusions? An example of addressing this second point would be estimating the parameters of a population pharmacokinetic (PPK) model by the standard maximum likelihood approach and then confirming the estimates by constructing the profile likelihood plot (i.e., mapping the objective function), using the bootstrap (4, 9) to estimate 95% confidence intervals, or using the jackknife method (7, 26, 27) and the bootstrap to estimate standard errors of the estimates (4, 9). When the relative standard errors are small and the alternative approaches produce similar results, we conclude that the model is reliable. [Pg.236]
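A minimal sketch of the bootstrap confirmation step mentioned above: resample with replacement, refit, and read off standard errors and 95% percentile confidence intervals. A straight-line fit stands in for the PPK model; data and model are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

boot = np.array([
    np.polyfit(x[idx], y[idx], 1)                 # refit on each resampled data set
    for idx in (rng.integers(0, x.size, x.size) for _ in range(1000))
])
se = boot.std(axis=0, ddof=1)                     # bootstrap standard errors
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0) # percentile confidence limits
print("SE (slope, intercept):", se)
print("95% CI for slope:", (lo[0], hi[0]))
```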

Very often a test population of data is not available or would be prohibitively expensive to obtain. When a test population cannot be obtained, internal validation must be considered. The methods of internal PM model validation include data splitting, resampling techniques (cross-validation and bootstrapping) (9, 26-30), and the posterior predictive check (PPC) (31-33). Of note, the jackknife is not considered a model validation technique; it may only be used to correct for bias in parameter estimates and to compute the uncertainty associated with parameter estimation. Cross-validation, bootstrapping, and the posterior predictive check are addressed in detail in Chapter 15. [Pg.237]

The reliability of the parameter estimates can be checked using a nonparametric technique, the jackknife (20, 34). The nonlinearity of the statistical model and ill-conditioning of a given problem can produce numerical difficulties and force the estimation algorithm into a false minimum. [Pg.393]

The precision of the primary parameters can be estimated from the final fit of the multiexponential function to the data, but the estimates are of doubtful validity if the model is severely nonlinear (35). The precision of the secondary parameters (in this case, variability) is likely to be even less reliable. Consequently, the results of statistical tests carried out with precision estimated from the final fit could easily be misleading; hence the need to assess the reliability of model estimates. A possible way of reducing bias in parameter estimates and of calculating realistic variances for them is to subject the data to the jackknife technique (36, 37). The technique requires little by way of assumption or analysis. A naive Student t approximation for the standardized jackknife estimator (34) or the bootstrap (31, 38, 39) (see Chapter 15 of this text) can be used. [Pg.393]

M. H. Quenouille introduced the jackknife (JKK) in 1949 (12), and it was later popularized by Tukey in 1958, who first used the term (13). Quenouille's motivation was to construct an estimator of bias that would have broad applicability. The JKK has been applied to bias correction and to the estimation of variances and standard errors (4, 12-16). Thus, for pharmacometrics it has the potential for improving models and has been applied in the assessment of PMM reliability (17). The JKK should not be employed as a method for model validation. [Pg.402]

When the data set is not too large, the leave-one-out cross-validation (jackknife) method is commonly preferred. One data point is set aside for validation and the remaining data points are used for training; this process is repeated until each data point has been used for validation exactly once. In other words, for a data set consisting of n points, n validation models must be fitted. [Pg.333]
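A minimal sketch of this leave-one-out loop, assuming NumPy; a quadratic polynomial stands in for whatever model is being validated.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 5, 15))
y = x**2 - x + rng.normal(scale=0.5, size=x.size)

sq_errors = []
for i in range(x.size):                           # one model per data point
    mask = np.arange(x.size) != i
    coeffs = np.polyfit(x[mask], y[mask], 2)      # train on the remaining n-1 points
    sq_errors.append((y[i] - np.polyval(coeffs, x[i])) ** 2)

print("LOO-CV RMSE:", np.sqrt(np.mean(sq_errors)))
```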

Duchesne, C. and MacGregor, J. F., Jackknife and bootstrap methods in the identification of dynamic models, Journal of Process Control, 2001, 11, 553-564. [Pg.355]

As a last comment, caution should be exercised when fitting small sets of data to both structural and residual variance models. It is commonplace in the literature to fit individual data and then apply a residual variance model to the data. Residual variance models based on small samples are not very robust, which can easily be seen if the data are jackknifed or bootstrapped. One way to overcome this is to assume a common residual variance model for all observations, instead of a residual variance model for each subject; this assumption is not such a leap of faith. For GLS, first fit each subject and then pool the residuals. Use the pooled residuals to estimate the residual variance model parameters, and then iterate in this manner until convergence. For ELS, things are a bit trickier but still doable. [Pg.135]
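A sketch of the pooled-residual GLS iteration just described, under stated assumptions: a monoexponential structural model, a power-of-the-mean residual variance model, and synthetic data. None of these choices come from the source; the point is the loop structure of fit each subject, pool residuals across all subjects, re-estimate the single common variance model, and repeat.

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(t, a, k):
    return a * np.exp(-k * t)

rng = np.random.default_rng(4)
t = np.linspace(0.25, 12, 10)
subjects = [monoexp(t, 100 * rng.uniform(0.8, 1.2), 0.3 * rng.uniform(0.8, 1.2))
            * (1 + 0.1 * rng.normal(size=t.size)) for _ in range(6)]

theta = 0.0                                       # power parameter; 0 -> unweighted OLS
for _ in range(5):                                # iterate until (roughly) converged
    preds, resids = [], []
    for y in subjects:
        p, _ = curve_fit(monoexp, t, y, p0=(100.0, 0.3))   # preliminary fit
        w = np.maximum(monoexp(t, *p), 1e-8) ** theta      # common model: sd ~ pred**theta
        p, _ = curve_fit(monoexp, t, y, p0=p, sigma=w)     # weighted (GLS) refit
        preds.append(monoexp(t, *p))
        resids.append(y - monoexp(t, *p))
    preds, resids = np.concatenate(preds), np.concatenate(resids)
    # pooled residuals re-estimate the one common variance model:
    # log|resid| ~ const + theta * log(pred)
    theta = np.polyfit(np.log(preds), np.log(np.abs(resids) + 1e-12), 1)[0]

print("pooled power parameter theta:", round(float(theta), 3))
```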

Related to the bootstrap is the jackknife approach (see the book Appendix for further details and background), of which there are two major variants. The first, called the delete-1 approach, removes one subject at a time from the data set to create n new jackknife data sets. The model is fit to each data set and the parameter pseudovalues are calculated as P_i = n·θ_hat − (n − 1)·θ_hat(−i), where θ_hat is the estimate from the full data set and θ_hat(−i) is the estimate with the ith subject removed.

For large data sets the delete-1 jackknife may be impractical, since it may require fitting hundreds of data sets. A modification is the delete-10% jackknife, in which 10 different jackknife data sets are created, each having a unique 10% of the data removed. Only 10 data sets are modeled using this modification; all other calculations are as before, but n now becomes the number of data sets, not the number of subjects. The use of the jackknife has largely been supplanted by the bootstrap, since the jackknife has been criticized as producing standard errors with poor statistical behavior when the estimator is nonsmooth (e.g., the median), which may not be a valid criticism for pharmacokinetic parameters. Whether one is better than the other at estimating standard errors of continuous functions is debatable and a matter of preference. [Pg.244]
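A minimal sketch of the delete-1 variant using the pseudovalue formula above. The "model fit" here is just the sample mean of a synthetic clearance vector, an assumption standing in for a full population model fit; NumPy only.

```python
import numpy as np

def jackknife_delete1(values, estimator):
    n = len(values)
    theta_full = estimator(values)
    theta_loo = np.array([estimator(np.delete(values, i)) for i in range(n)])
    pseudo = n * theta_full - (n - 1) * theta_loo  # pseudovalues P_i
    est = pseudo.mean()                            # bias-corrected estimate
    se = pseudo.std(ddof=1) / np.sqrt(n)           # jackknife standard error
    return est, se

cl = np.random.default_rng(5).lognormal(mean=1.0, sigma=0.3, size=25)
est, se = jackknife_delete1(cl, np.mean)
print(f"jackknife estimate = {est:.3f}, SE = {se:.3f}")
```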

An area related to model validation is influence analysis, which deals with how stable the model parameters are to influential observations (either individual concentration values or individual subjects), and model robustness, which deals with how stable the model parameters are to perturbations in the input data. Influence analysis has been dealt with in previous chapters. The basic idea is to generate a series of new data sets, each consisting of the original data set with one unique subject or block of data removed, exactly as jackknife data sets are generated. The model is refit to each of the new data sets, and the change in the parameter estimates with each new data set is determined. Ideally, no subject should show... [Pg.256]

Figure 9.13 Index plots of structural model parameter estimates expressed as percent change from baseline using the delete-1 jackknife. Each patient was assigned an index number ranging from 1 to 78 and then singularly removed from the data set. The delete-1 data set was then used to fit the model in Eq. (9.14) using FOCE-I.
Gibiansky, E., Gibiansky, L., and Bramer, S. Comparison of NONMEM, bootstrap, jackknife, and profiling parameter estimates and confidence intervals for the aripiprazole population pharmacokinetic model. Presented at the American Association of Pharmaceutical Scientists Annual Meeting, Boston, MA, 2001. [Pg.370]

The reader is directed to Appendix II for a review of matrices and the application of matrix algebra. Once that is completed, we will look at examples of Studentized and jackknifed residuals applied to data from simple linear regression models, and then discuss rescaling of residuals as it applies to model leverage and outliers. [Pg.309]

Table F presents corrected jackknife residual values, which essentially are Bonferroni corrections on the jackknife residuals. For example, let α = 0.05, let k = the number of b_i values in the model excluding b_0 (say k = 1), and let n = 20. In this case, Table F shows that a jackknife residual greater than 3.54 in absolute value, |r(i)| > 3.54, would be considered an outlier.
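The quoted cutoff can be approximately reproduced as a Bonferroni-adjusted Student t quantile. The degrees-of-freedom convention used below (n − k − 2, the residual df of the deleted fit) is an assumption about how Table F is constructed; it gives ≈ 3.54 for α = 0.05, k = 1, n = 20. SciPy is assumed.

```python
from scipy import stats

alpha, k, n = 0.05, 1, 20
# two-sided Bonferroni correction over n residuals, df assumed n - k - 2
crit = stats.t.ppf(1 - alpha / (2 * n), df=n - k - 2)
print(f"critical |jackknife residual| = {crit:.2f}")   # ~3.54
```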
The bottom right panel shows an alternative representation of the amphiphilic comb-tail diblock copolymer. This continuous monophilic/amphiphilic model looks like a jackknife: (a) conventional linear AB diblock copolymer; (b) and (c) rotation of the shorter section B around the crankshaft connecting the A and B sections; and (d) conformationally asymmetric AC diblock copolymer with A and C units of differing segmental volume. [Pg.425]

As mentioned above, there are 2024 possible combinations of three descriptors, so we use a GA to identify the inputs that are likely to yield the greatest predictive accuracy. Use of the GA requires selecting a particular measure of predictive accuracy to decide which models to keep at each cycle. Because we are interested primarily in cross-validated predictions, a cross-validated correlation coefficient is a natural choice. However, the structurally based partitioning scheme is less straightforward to automate than a jackknife one. Consequently, for the GNN we used the Pearson linear correlation coefficient for the jackknife cross-validated outputs (r_jck) and subsequently tested each selected combination of descriptors with the structurally based cross-validation scheme (r_cv). We performed five GNN trials, from each of which we saved the best 20 models. Of these 100 models, 46 were unique, and each of these was subjected to 10 trials with the structurally based cross-validation scheme. [Pg.22]

