
Bootstrap bias estimates from

The precision of the primary parameters can be estimated from the final fit of the multiexponential function to the data, but these estimates are of doubtful validity if the model is severely nonlinear (35). The precision of the secondary parameters (in this case variability) is likely to be even less reliable. Consequently, the results of statistical tests carried out with precision estimated from the final fit could easily be misleading; hence the need to assess the reliability of model estimates. A possible way of reducing bias in parameter estimates, and of calculating realistic variances for them, is to subject the data to the jackknife technique (36, 37). The technique requires little by way of assumption or analysis. A naive Student t approximation for the standardized jackknife estimator (34) or the bootstrap (31, 38, 39) (see Chapter 15 of this text) can be used. [Pg.393]
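To make the jackknife concrete, here is a minimal sketch (not the chapter's code; NumPy is assumed, and the names jackknife and estimator are hypothetical). Each observation is left out in turn, the estimate is recomputed, and the spread of the leave-one-out replicates supplies both a bias correction and a variance estimate.

```python
import numpy as np

def jackknife(data, estimator):
    """Jackknife bias correction and variance for a scalar estimator."""
    n = len(data)
    theta_hat = estimator(data)
    # Leave-one-out replicates of the estimate
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    theta_bar = loo.mean()
    bias = (n - 1) * (theta_bar - theta_hat)            # jackknife bias estimate
    var = (n - 1) / n * np.sum((loo - theta_bar) ** 2)  # jackknife variance
    return theta_hat - bias, var                        # bias-corrected estimate

# Example: the variance MLE (divides by n) is biased low; the jackknife
# correction pulls it toward the unbiased value.
x = np.random.default_rng(0).normal(size=30)
corrected, var = jackknife(x, lambda s: s.var())
```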

Bootstrapping is a resampling-with-replacement method that has the advantage of using the entire data set. It has been demonstrated to be useful in PMM validation (1, 3, 22) and has the same advantage as other internal validation methods in that it obviates the need for collecting data from a test population. Bootstrapping has been applied to population pharmacokinetic (PPK) model development, stability checking and evaluation, and bias estimation (1-3, 25). [Pg.406]
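As a minimal sketch of the resampling-with-replacement idea (NumPy assumed; the function and variable names are hypothetical, not from the cited PPK work): each bootstrap data set has the same size as the original and is drawn with replacement, so the entire observed data set is reused.

```python
import numpy as np

def bootstrap_replicates(data, estimator, n_boot=1000, seed=0):
    """Nonparametric bootstrap: resample with replacement, re-estimate."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # n draws with replacement
        reps[b] = estimator(data[idx])
    return reps

# Example: bootstrap standard error of the median on toy data
x = np.random.default_rng(1).exponential(size=50)
se = bootstrap_replicates(x, np.median).std(ddof=1)
```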

Bias corrections are sometimes applied to MLEs (which often have some bias) or to other estimates (as explained in the following section, [mean] bias occurs when the mean of the sampling distribution does not equal the parameter to be estimated). A simple bootstrap approach can be used to correct the bias of any estimate (Efron and Tibshirani 1993). A particularly important situation where it is not conventional to use the true MLE is in estimating the variance of a normal distribution. The conventional formula for the sample variance can be written as s^2 = SSR/(n - 1), where SSR denotes the sum of squared residuals (observed values minus the mean value); this is an unbiased estimator of the variance, whether the data are from a normal distribution... [Pg.35]
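The following is a minimal sketch of that simple bootstrap bias correction (assumptions: NumPy; the names are hypothetical). The bias is estimated as the mean of the bootstrap replicates minus the original estimate, and then subtracted off; the variance MLE, which divides by n, illustrates the effect.

```python
import numpy as np

def bootstrap_bias_correct(data, estimator, n_boot=2000, seed=0):
    """Bias-correct an estimate: bias ~ mean(bootstrap replicates) - estimate."""
    rng = np.random.default_rng(seed)
    n = len(data)
    theta_hat = estimator(data)
    reps = np.array([estimator(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    bias = reps.mean() - theta_hat
    return theta_hat - bias          # equals 2*theta_hat - mean(reps)

# The variance MLE (divides by n) is biased low; the corrected value
# lands near the unbiased SSR/(n - 1) form.
x = np.random.default_rng(2).normal(size=25)
print(x.var(), bootstrap_bias_correct(x, np.var), x.var(ddof=1))
```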

It is intuitive that the predictability of the dependent variables within the training data set from which a model was estimated will be optimistic when compared with prediction into an external data set. In such a case, the prediction errors will have a downward bias. Therefore, a method that estimates predictability for external data is needed, and this can be accomplished via the bootstrap. [Pg.410]
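One standard way to do this is Efron's optimism bootstrap, sketched below under stated assumptions (NumPy; a straight-line least-squares fit stands in for the real model; all names are hypothetical). The model is refit on each resample, and the average gap between its error on the full data and on the resample estimates the optimism, which is added back to the apparent error.

```python
import numpy as np

def optimism_corrected_error(x, y, n_boot=500, seed=0):
    """Optimism bootstrap for the prediction error of a straight-line fit."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def mse(xs, ys, coef):
        return np.mean((ys - np.polyval(coef, xs)) ** 2)

    apparent = mse(x, y, np.polyfit(x, y, 1))   # error on the training data
    optimism = 0.0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        coef = np.polyfit(x[idx], y[idx], 1)    # refit on the resample
        # Apparent error is biased down: full-data error minus resample error
        optimism += mse(x, y, coef) - mse(x[idx], y[idx], coef)
    return apparent + optimism / n_boot         # bias-corrected prediction error

# Toy usage: noisy straight line
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, size=30)
y = 2.0 * x + rng.normal(scale=3.0, size=30)
print(optimism_corrected_error(x, y))
```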

Another internal technique used to validate models, and one that is quite commonly seen, is the bootstrap and its various modifications, which have been discussed elsewhere in this book. The nonparametric bootstrap, the most common approach, is to generate a series of data sets of size equal to the original data set by resampling with replacement from the observed data set. The final model is fit to each data set and the distribution of the parameter estimates examined for bias and precision. The parametric bootstrap fixes the parameter estimates under the final model and simulates a series of data sets of size equal to the original data set. The final model is fit to each simulated data set and the distribution of the parameter estimates examined. Neither is a validation approach per se, as the bootstrap only provides information on how well model parameters were estimated. [Pg.255]
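A sketch of the parametric flavor (NumPy assumed; fit and simulate are hypothetical stand-ins for the actual modeling code, e.g. a population PK model): the final-model estimates are fixed, data sets of the original size are simulated from them, and the model is refit to each so the replicate estimates can be examined for bias and precision.

```python
import numpy as np

def parametric_bootstrap(fit, simulate, data, n_boot=500, seed=0):
    """Simulate data sets from the final model's estimates and refit."""
    rng = np.random.default_rng(seed)
    theta_hat = fit(data)                           # final-model estimates
    reps = np.array([fit(simulate(theta_hat, len(data), rng))
                     for _ in range(n_boot)])
    return theta_hat, reps

# Toy model: a normal mean and standard deviation
fit = lambda d: np.array([d.mean(), d.std(ddof=1)])
simulate = lambda th, n, rng: rng.normal(th[0], th[1], size=n)
x = np.random.default_rng(3).normal(5.0, 2.0, size=40)
theta, reps = parametric_bootstrap(fit, simulate, x)
bias = reps.mean(axis=0) - theta                    # examine bias and precision
```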

Standard deviation, MSE, and bias of all methods. The small k chosen for two-fold CV and for the split sample with p = ... is due to the reduced training set size. For v-fold CV, a significant decrease in prediction error, bias, and MSE is seen as v increases from 2 to 10. Tenfold CV has a slightly decreased error estimate compared to LOOCV, as well as a smaller standard deviation, bias, and MSE; however, the LOOCV k is smaller than that of 10-fold CV. Repeated 5-fold CV decreases the standard deviation and MSE relative to 5-fold CV; however, values for the bias and k are slightly larger. In comparison to 10-fold CV, the 0.632+ bootstrap has a smaller standard deviation and MSE, with a larger prediction error, bias, and k. [Pg.235]
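For reference, the 0.632 estimator mentioned above blends the apparent (training) error with the out-of-bag, i.e. leave-one-out bootstrap, error as 0.368 x apparent + 0.632 x out-of-bag; the 0.632+ variant of Efron and Tibshirani further adjusts the weights by the relative overfitting rate. A hedged sketch (NumPy; fit_predict_err is a hypothetical callable that trains on one index set and returns the error on another):

```python
import numpy as np

def err_632(n, fit_predict_err, n_boot=200, seed=0):
    """Plain 0.632 bootstrap error: 0.368*apparent + 0.632*out-of-bag."""
    rng = np.random.default_rng(seed)
    all_idx = np.arange(n)
    apparent = fit_predict_err(all_idx, all_idx)   # train and test on everything
    oob_errs = []
    for _ in range(n_boot):
        boot = rng.integers(0, n, size=n)          # resample with replacement
        oob = np.setdiff1d(all_idx, boot)          # cases the resample missed
        if oob.size:
            oob_errs.append(fit_predict_err(boot, oob))
    # The 0.632+ variant reweights the two terms using the relative
    # overfitting rate (Efron and Tibshirani 1997); omitted here.
    return 0.368 * apparent + 0.632 * np.mean(oob_errs)

# Toy usage: "model" = training mean, squared-error loss
data = np.random.default_rng(5).normal(size=40)
def fit_predict_err(train_idx, test_idx):
    mu = data[train_idx].mean()
    return np.mean((data[test_idx] - mu) ** 2)
print(err_632(len(data), fit_predict_err))
```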

Fig. 8.5 PLS validation plots showing, for each predicted variable (i.e., sensory descriptor), the root mean squared error of prediction (RMSEP) over the first five model dimensions. RMSEP values were obtained from a leave-one-out bootstrapping algorithm, and both the cross-validated estimate (black solid line) and the bias-adjusted cross-validation estimate (red dotted line) are shown [38].
