
Bootstrap standard estimates from

The precision of the primary parameters can be estimated from the final fit of the multiexponential function to the data, but such estimates are of doubtful validity if the model is severely nonlinear (35). The precision of the secondary parameters (in this case variability) is likely to be even less reliable. Consequently, the results of statistical tests carried out with precision estimated from the final fit could easily be misleading; hence the need to assess the reliability of model estimates. A possible way of reducing bias in parameter estimates and of calculating realistic variances for them is to subject the data to the jackknife technique (36, 37). The technique requires little by way of assumption or analysis. A naive Student t approximation for the standardized jackknife estimator (34) or the bootstrap (31, 38, 39) (see Chapter 15 of this text) can be used. [Pg.393]
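As a concrete illustration of the leave-one-out jackknife described above, the following Python sketch computes the bias-reduced estimate and the jackknife standard error for an arbitrary scalar statistic. The data and the coefficient-of-variation statistic are hypothetical stand-ins, not the multiexponential-fit parameters of the source.

```python
import numpy as np

def jackknife(data, statistic):
    """Leave-one-out jackknife: bias-reduced estimate and standard error.

    `statistic` is any function mapping a 1-D sample to a scalar
    parameter estimate (illustrative stand-in for a fitted parameter).
    """
    data = np.asarray(data)
    n = len(data)
    theta_hat = statistic(data)                            # full-sample estimate
    # Leave-one-out replicates theta_(-i)
    loo = np.array([statistic(np.delete(data, i)) for i in range(n)])
    # Pseudovalues: n*theta_hat - (n-1)*theta_(-i)
    pseudo = n * theta_hat - (n - 1) * loo
    theta_jack = pseudo.mean()                             # bias-reduced estimate
    se_jack = pseudo.std(ddof=1) / np.sqrt(n)              # jackknife standard error
    return theta_jack, se_jack

# Example: jackknife the coefficient of variation of a small sample
rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.4, size=12)
cv = lambda s: s.std(ddof=1) / s.mean()
est, se = jackknife(x, cv)
print(f"jackknife estimate {est:.3f}, SE {se:.3f}")
# A naive Student t interval is est +/- t_{n-1, 0.975} * se
```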

Estimate the SE of the parameter of interest by the sample standard deviation from B bootstrap samples ... [Pg.408]
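A minimal sketch of that step, assuming a generic scalar statistic: draw B resamples from the empirical distribution and take the sample standard deviation of the B bootstrap replicates as the SE estimate.

```python
import numpy as np

def bootstrap_se(data, statistic, B=2000, seed=0):
    """Bootstrap SE: sample standard deviation of B bootstrap replicates."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    reps = np.empty(B)
    for b in range(B):
        resample = rng.choice(data, size=n, replace=True)  # draw from F-hat
        reps[b] = statistic(resample)
    return reps.std(ddof=1)                                # SE = SD of replicates

x = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=20)
print(bootstrap_se(x, np.median))  # bootstrap SE of the sample median
```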

An approach that is sometimes helpful, particularly for recent pesticide risk assessments, is to use the parameter values that result in best fit (in the sense of LS), comparing the fitted cdf to the cdf of the empirical distribution. In some cases, such as when fitting a log-normal distribution, formulae from linear regression can be used after transformations are applied to linearize the cdf. In other cases, the residual SS is minimized using numerical optimization, i.e., one uses nonlinear regression. This approach seems reasonable for point estimation. However, the statistical assumptions that would often be invoked to justify LS regression will not be met in this application. Therefore the use of any additional regression results (beyond the point estimates) is questionable. If there is a need to provide standard errors or confidence intervals for the estimates, bootstrap procedures are recommended. [Pg.43]
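A hedged sketch of this approach for the log-normal case: the probit transform linearizes the cdf, so ordinary LS regression yields the point estimates, and percentile bootstrap intervals stand in for the regression standard errors that the passage argues are not justified. The plotting positions and function names below are illustrative choices, not taken from the source.

```python
import numpy as np
from scipy import stats

def lognormal_ls_fit(x):
    """Fit a log-normal by LS on the linearized cdf (probit plot).

    With F(x) = Phi((ln x - mu)/sigma), Phi^{-1}(F) is linear in ln(x),
    so a straight-line fit recovers mu and sigma.
    """
    x = np.sort(np.asarray(x))
    n = len(x)
    p = (np.arange(1, n + 1) - 0.5) / n        # empirical cdf plotting positions
    z = stats.norm.ppf(p)                      # probit transform linearizes the cdf
    slope, intercept = np.polyfit(np.log(x), z, 1)
    return -intercept / slope, 1.0 / slope     # (mu, sigma)

def bootstrap_ci(x, B=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CIs for (mu, sigma), used instead of
    regression-based standard errors."""
    rng = np.random.default_rng(seed)
    reps = np.array([lognormal_ls_fit(rng.choice(x, size=len(x), replace=True))
                     for _ in range(B)])
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return lo, hi

x = np.random.default_rng(2).lognormal(mean=0.5, sigma=0.8, size=40)
print(lognormal_ls_fit(x))
print(bootstrap_ci(x))
```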

The smoothed bootstrap has been proposed to deal with the discreteness of the empirical distribution function (F) when sample sizes are small (n < 15). For this approach the empirical distribution function is first smoothed, and bootstrap samples are then drawn from the smoothed distribution, for example from a kernel density estimate. However, proper selection of the smoothing parameter (h) is important so that oversmoothing or undersmoothing does not occur. It is difficult to know the most appropriate value for h, and once a value is assigned it influences the variability, making it impossible to characterize the variability terms of the model. There are few studies where the smoothed bootstrap has been applied (21, 27, 28). In one such study the improvement in the correlation coefficient over the standard nonparametric bootstrap was modest (21). Therefore, the value and behavior of the smoothed bootstrap are not clear. [Pg.407]
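One common implementation of the smoothed bootstrap, sketched below under the assumption of a Gaussian kernel: drawing from a Gaussian kernel density estimate with bandwidth h is equivalent to resampling the data and adding N(0, h^2) noise. Silverman's normal-reference rule is used here only as an illustrative default for h; the passage's caution about choosing h still applies.

```python
import numpy as np

def smoothed_bootstrap(data, statistic, h, B=2000, seed=0):
    """Smoothed bootstrap with a Gaussian kernel of bandwidth h.

    Too large an h oversmooths; h -> 0 recovers the standard
    nonparametric bootstrap.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    reps = np.empty(B)
    for b in range(B):
        resample = rng.choice(data, size=n, replace=True)
        reps[b] = statistic(resample + rng.normal(0.0, h, size=n))  # add kernel noise
    return reps

x = np.random.default_rng(3).normal(size=12)          # small sample (n < 15)
h = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)         # Silverman's rule, one default
print(smoothed_bootstrap(x, np.median, h).std(ddof=1))  # smoothed-bootstrap SE
```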

The bootstrap is a very useful procedure when one wishes to estimate the standard error (SE) of a parameter (θ) from an unknown probability distribution (F). The original introduction of the bootstrap was for the purpose of estimating the... [Pg.408]

The confidence intervals were constructed from bootstrap runs that included 108 runs with a failed covariance step; that is, NONMEM was unable to generate standard errors of parameter estimates. Arguments could be made to include or exclude these runs in the analysis. Excluding these runs did not result in a noticeable change in the results (i.e., changes in the confidence bounds <0.0005). Note also that a successful implementation of the NONMEM covariance step has no influence on the estimation of the geometric mean parameters. In retrospect, the analysis plan should have prespecified whether such runs would be included, for the sake of rigor. [Pg.437]
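The include/exclude comparison can be reproduced in outline. The sketch below uses simulated bootstrap results and an illustrative covariance-success flag (not NONMEM output) to show that percentile confidence bounds depend only on the point estimates from each run, which is why a failed covariance step need not disqualify a run.

```python
import numpy as np

# Hypothetical layout: one row per bootstrap run, with that run's parameter
# estimate and a flag for whether the covariance step succeeded.
rng = np.random.default_rng(4)
n_runs = 1000
estimates = rng.lognormal(mean=np.log(2.0), sigma=0.15, size=n_runs)
cov_ok = rng.random(n_runs) > 0.108                # roughly 108 failed runs per 1000

def percentile_ci(x, alpha=0.05):
    return np.percentile(x, [100 * alpha / 2, 100 * (1 - alpha / 2)])

ci_all = percentile_ci(estimates)                  # include failed-covariance runs
ci_ok = percentile_ci(estimates[cov_ok])           # exclude them
print("all runs:", ci_all, " successful only:", ci_ok)
# Percentile CIs use only the point estimates, so the absence of standard
# errors from a failed covariance step does not affect a run's contribution.
```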

Standard deviation, MSE, and bias of all methods. The small k chosen for 2-fold CV and split sample with p = is due to the reduced training set size. For v-fold CV, a significant decrease in prediction error, bias, and MSE is seen as v increases from 2 to 10. Tenfold CV has a slightly decreased error estimate compared to LOOCV, as well as a smaller standard deviation, bias, and MSE; however, the LOOCV k is smaller than that of 10-fold CV. Repeated 5-fold CV decreases the standard deviation and MSE over 5-fold CV; however, values for the bias and k are slightly larger. In comparison to 10-fold CV, the 0.632+ bootstrap has a smaller standard deviation and MSE, with a larger prediction error, bias, and k. [Pg.235]
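An illustrative comparison of the cross-validation schemes in this passage (the 0.632+ bootstrap is omitted, as it has no off-the-shelf scikit-learn implementation); the data set and classifier below are placeholders, not those of the cited study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (KFold, LeaveOneOut, RepeatedKFold,
                                     cross_val_score)

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

schemes = {
    "2-fold CV":       KFold(n_splits=2, shuffle=True, random_state=0),
    "5-fold CV":       KFold(n_splits=5, shuffle=True, random_state=0),
    "10-fold CV":      KFold(n_splits=10, shuffle=True, random_state=0),
    "repeated 5-fold": RepeatedKFold(n_splits=5, n_repeats=10, random_state=0),
    "LOOCV":           LeaveOneOut(),
}
for name, cv in schemes.items():
    err = 1.0 - cross_val_score(model, X, y, cv=cv)   # per-split misclassification
    print(f"{name:>15}: mean error {err.mean():.3f}, SD {err.std(ddof=1):.3f}")
```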

Analytical Methods Committee. 2001. The Bootstrap: A Simple Approach to Estimating Standard Errors and Confidence Intervals when Theory Fails. Also obtainable from www.rsc.org. (Shows how to write a Minitab macro for the bootstrap calculation.)... [Pg.179]
