
Parametric bootstrap

Vertzoni et al. (30) recently clarified the applicability of the similarity factor, the difference factor, and the Rescigno index in the comparison of cumulative data sets. Although all of these indices should be used with caution (inclusion of too many data points in the plateau region inflates the apparent similarity of the profiles, and the cutoff time per percentage dissolved is chosen empirically rather than on theoretical grounds), all can be useful for comparing two cumulative data sets. When the measurement error is low, i.e., the data have low variability, mean profiles can be used and any one of these indices may be applied; selection depends on the nature of the difference one wishes to estimate and on the existence of a reference data set. When the data are more variable, index evaluation must be done on a confidence interval basis, and selection of the appropriate index depends on the number of replications per data set in addition to the type of difference one wishes to estimate. When a large number of replications per data set is available (e.g., 12), construction of nonparametric or bootstrap confidence intervals for the similarity factor appears to be the most reliable of the three methods, provided that the plateau level is 100%. With a restricted number of replications per data set (e.g., three), any of the three indices can be used, provided either nonparametric or bootstrap confidence intervals are determined (30). [Pg.237]
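The similarity factor f2 and a nonparametric bootstrap confidence interval for it can be sketched as follows. This is a minimal illustration, not the procedure of Vertzoni et al.; the function names and the default replicate count are assumptions.

```python
import numpy as np

def f2_similarity(ref, test):
    """Similarity factor f2 for two mean dissolution profiles (% dissolved).

    f2 = 50 * log10(100 / sqrt(1 + mean squared difference)); identical
    profiles give f2 = 100, and f2 >= 50 is conventionally taken as similar.
    """
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

def bootstrap_f2_ci(ref_reps, test_reps, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile CI for f2.

    ref_reps, test_reps: arrays of shape (n_replicates, n_timepoints);
    replicates are resampled with replacement within each data set.
    """
    rng = np.random.default_rng(seed)
    ref_reps, test_reps = np.asarray(ref_reps), np.asarray(test_reps)
    stats = []
    for _ in range(n_boot):
        r = ref_reps[rng.integers(0, len(ref_reps), len(ref_reps))]
        t = test_reps[rng.integers(0, len(test_reps), len(test_reps))]
        stats.append(f2_similarity(r.mean(axis=0), t.mean(axis=0)))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Including many plateau time points drives the mean squared difference toward zero and f2 toward 100, which is exactly the caution raised above.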

Efron, B. (1981) Nonparametric estimates of standard error: the jackknife, the bootstrap and other methods. Biometrika 68, 589-599. [Pg.112]

There are often data sets used to estimate distributions of model inputs for which a portion of the data are missing because attempts at measurement fell below the detection limit of the measurement instrument. Such data sets are said to be censored. Commonly used methods for dealing with them are statistically biased; an example is replacing non-detected values with one half of the detection limit. Such methods produce biased estimates of the mean and provide no insight into the population distribution from which the measured data are a sample. Statistical methods can be used to make inferences regarding both the observed and unobserved (censored) portions of an empirical data set. For example, maximum likelihood estimation can be used to fit parametric distributions to censored data sets, including the portion of the distribution that is below one or more detection limits. Asymptotically unbiased estimates of statistics, such as the mean, can then be obtained from the fitted distribution. Bootstrap simulation can be used to estimate uncertainty in the statistics of the fitted distribution (e.g. Zhao and Frey, 2004). Imputation methods, such as... [Pg.50]
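Maximum likelihood estimation for a left-censored sample can be sketched as below, assuming a normal distribution and a single detection limit. Non-detects contribute the probability mass below the detection limit, Phi((DL - mu)/sigma), to the likelihood. The function name and optimizer choice are illustrative assumptions, not a specific published implementation.

```python
import numpy as np
from scipy import stats, optimize

def fit_normal_censored(detects, n_censored, detection_limit):
    """ML fit of a normal distribution to a left-censored sample.

    detects: observed values at or above the detection limit.
    n_censored: number of non-detects (known only to be < detection_limit).
    """
    detects = np.asarray(detects, float)

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # parameterize on log scale to keep sigma > 0
        ll = stats.norm.logpdf(detects, mu, sigma).sum()
        # each non-detect contributes log P(X < DL)
        ll += n_censored * stats.norm.logcdf(detection_limit, mu, sigma)
        return -ll

    x0 = [detects.mean(), np.log(detects.std() + 1e-6)]
    res = optimize.minimize(neg_loglik, x0, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])
```

Unlike substituting DL/2 for non-detects, this uses the censored observations' full probability contribution, so the mean and standard deviation estimates are asymptotically unbiased.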

In a 1988 paper, Lodder and Hieftje used the quantile-BEAST (bootstrap error-adjusted single-sample technique) [77] to assess powder blends. In the study, four benzoic acid derivatives and mixtures were analyzed. The active varied between 0 and 25%. The individual benzoic acid derivatives were classified into clusters using the nonparametric standard deviations (SDs), analogous to SDs in parametric statistics. Acetylsalicylic acid was added to the formulations at concentrations of 1 to 20%. All uncontaminated samples were correctly identified. Simulated solid dosage forms containing ratios of the two polymorphs were prepared. They were scanned from 1100 to 2500 nm. The CVs ranged from 0.1 to 0.9%. [Pg.94]

The interpercentile interval can be determined by parametric, nonparametric, and bootstrap statistical techniques. ... [Pg.435]

Originally, a simple nonparametric method for the determination of percentiles was recommended by the IFCC. However, the newer bootstrap method is currently the best method available for the determination of reference limits. The more complex parametric method is seldom necessary, but it is also presented here owing to its popularity and frequent misapplication. When the results obtained by these methods are compared, the estimates of the percentiles are usually very similar. Detailed descriptions of these methods are given later in this chapter. [Pg.435]
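The bootstrap approach to reference limits can be sketched as follows. This is a simplified illustration, not the IFCC procedure itself; the replicate count and the use of the mean of the bootstrap percentile estimates as the point estimate are assumptions.

```python
import numpy as np

def bootstrap_reference_interval(values, n_boot=2000, seed=0):
    """Bootstrap estimate of the central 95% reference interval.

    For each bootstrap replicate, resample the reference values with
    replacement and record the 2.5th and 97.5th percentiles; the point
    estimates of the reference limits are the means over replicates.
    """
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)
    lowers, uppers = [], []
    for _ in range(n_boot):
        sample = rng.choice(values, size=len(values), replace=True)
        lowers.append(np.percentile(sample, 2.5))
        uppers.append(np.percentile(sample, 97.5))
    return np.mean(lowers), np.mean(uppers)
```

The same loop also yields confidence intervals for the limits themselves (percentiles of `lowers` and `uppers`), which the simple nonparametric rank method provides only for adequately large samples.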

Nonparametric, parametric, and bootstrap methods are used to determine reference intervals. [Pg.437]

The smoothed bootstrap has been proposed to deal with the discreteness of the empirical distribution function (F) at small sample sizes (n < 15). In this approach one first smooths the empirical distribution function and then draws bootstrap samples from the smoothed estimate, for example from a kernel density estimate. The proper selection of the smoothing parameter (h) is clearly important so that oversmoothing or undersmoothing does not occur. It is difficult to know the most appropriate value for h, and once a value is assigned it influences the variability, making it impossible to characterize the variability terms of the model. There are few studies in which the smoothed bootstrap has been applied (21,27,28). In one such study, the improvement in the correlation coefficient compared with the standard nonparametric bootstrap was modest (21). The value and behavior of the smoothed bootstrap are therefore not clear. [Pg.407]
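With a Gaussian kernel, sampling from the kernel density estimate reduces to resampling with replacement and adding kernel noise of bandwidth h, as this sketch shows (the function name is illustrative):

```python
import numpy as np

def smoothed_bootstrap(data, n_boot, h, rng=None):
    """Smoothed bootstrap: resample with replacement, then jitter each draw
    with Gaussian kernel noise of bandwidth h.

    This is equivalent to drawing from a Gaussian kernel density estimate of
    the data; h = 0 recovers the standard nonparametric bootstrap.
    Returns an array of shape (n_boot, len(data)).
    """
    rng = np.random.default_rng(rng)
    data = np.asarray(data, float)
    n = len(data)
    idx = rng.integers(0, n, size=(n_boot, n))
    return data[idx] + h * rng.standard_normal((n_boot, n))
```

The added noise term makes the dependence on h explicit: any h > 0 inflates the variance of each bootstrap sample by h squared, which is the extra variability discussed above.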

For the smoothed bootstrap, no shape is assumed for the distribution. If, however, one is willing to assume that F is continuous and smooth, the natural next step is to assume that it has a parametric form. If one assumes that F has a parametric form such as the Gaussian distribution, then the appropriate estimator for F is a Gaussian distribution fitted to the data. [Pg.407]

For the parametric bootstrap, instead of resampling with replacement from the data, one constructs B samples of size n by drawing from the parametric estimate Fpar, where Fpar is the fitted parametric form of F. The procedures of interest are then applied to the B samples in the same manner as for the nonparametric bootstrap. [Pg.407]

However, in parametric problems the bootstrap adds little or nothing to the theory or application, and it is hard to justify replacing the usual formula-based parameter estimates with bootstrap estimates. Consequently, it is uncommon to see the parametric bootstrap used in real problems. When applied to population pharmacometric (PPM) modeling, a weakness of the parametric bootstrap is that it assumes the model is known with a high degree of certainty. This is seldom true. [Pg.408]

How to Bootstrap. First, the number of subjects in a multistudy data set for the purposes presented needs to be kept constant to maintain the correct statistical interpretations of bootstrap, that is, correctly representing the underlying empirical distribution of the study populations. Second, the nonparametric bootstrap, as opposed to some other more parametric alternatives, was considered more suitable in order to minimize the dependence on having assumed the correct structural model. [Pg.428]

One method to estimate standard errors is the nonparametric bootstrap (see the book Appendix for further details and background). With this method, subjects are repeatedly sampled with replacement, creating a new data set of the same size as the original data set. For example, suppose the data set had 100 subjects numbered 1, 2,..., 100. The first bootstrap data set may... [Pg.243]
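Resampling subjects rather than individual observations keeps each subject's repeated measurements together. A minimal sketch, assuming the data are in a long-format table with an "ID" column (the column name and re-numbering convention are assumptions):

```python
import numpy as np
import pandas as pd

def bootstrap_subjects(df, subject_col="ID", rng=None):
    """Resample subjects (not rows) with replacement.

    All observations belonging to a drawn subject are kept together, and
    subjects are re-numbered so that a subject drawn twice appears as two
    distinct subjects in the bootstrap data set.
    """
    rng = np.random.default_rng(rng)
    ids = df[subject_col].unique()
    drawn = rng.choice(ids, size=len(ids), replace=True)
    pieces = []
    for new_id, old_id in enumerate(drawn, start=1):
        block = df[df[subject_col] == old_id].copy()
        block[subject_col] = new_id
        pieces.append(block)
    return pd.concat(pieces, ignore_index=True)
```

Fitting the model to many such data sets and taking the standard deviation of each parameter estimate across fits gives the bootstrap standard errors.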

Another internal technique used to validate models, one that is quite commonly seen, is the bootstrap and its various modifications, which have been discussed elsewhere in this book. The nonparametric bootstrap, the most common approach, generates a series of data sets of size equal to the original data set by resampling with replacement from the observed data set. The final model is fit to each data set and the distribution of the parameter estimates is examined for bias and precision. The parametric bootstrap fixes the parameter estimates under the final model and simulates a series of data sets of size equal to the original data set, and the final model is fit to each simulated data set. Neither is a validation approach per se, as each only provides information on how well the model parameters were estimated. [Pg.255]

Sometimes, the distribution of a statistic must be derived under asymptotic or best-case conditions, which assume an infinite number of observations, like the sampling distribution for a regression parameter, which assumes a normal distribution. However, the asymptotic assumption of normality is not always valid. Further, sometimes the distribution of the statistic may not be known at all. For example, what is the sampling distribution for the ratio of the largest to smallest value in some distribution? Parametric theory is not entirely forthcoming with an answer. The bootstrap and jackknife, two types of computer-intensive analysis methods, can be used to assess the precision of a sample-derived statistic when its sampling distribution is unknown or when asymptotic theory may not be appropriate. [Pg.354]
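The max/min ratio mentioned above is exactly the kind of statistic the bootstrap handles without any distributional theory, as in this sketch (the function name and percentile-interval choice are assumptions):

```python
import numpy as np

def bootstrap_ratio_ci(data, n_boot=5000, alpha=0.05, seed=0):
    """Bootstrap percentile CI for the max/min ratio, a statistic with no
    standard parametric sampling distribution.

    Requires strictly positive data so the ratio is well defined.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    ratios = []
    for _ in range(n_boot):
        s = rng.choice(data, size=len(data), replace=True)
        ratios.append(s.max() / s.min())
    return np.percentile(ratios, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

Note that a bootstrap sample can never exceed the original extremes, so the bootstrap distribution of this ratio is bounded above by the observed ratio; this boundedness is a known limitation of the bootstrap for extreme-value statistics.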

Table 2 Percent coverage of parametric and bootstrap confidence intervals using various distributions and sample sizes.
Dilleen, M., Heimann, G., and Hirsch, I. Non-parametric estimators of a monotonic dose-response curve and bootstrap confidence intervals. Statistics in Medicine 2003; 22: 869-882. [Pg.368]

To our knowledge, Torrence and Compo [19] were the first to establish significance tests for wavelet spectral measures. They assumed a reasonable background spectrum for the null hypothesis and tested for every point in the time/scale plane separately (i.e. pointwise) whether the power exceeded a certain critical value corresponding to the chosen significance level. Since the critical values of the background model are difficult to access analytically [11], they need to be estimated based on a parametric bootstrap ... [Pg.336]

Huelsenbeck, J. P., Hillis, D. M., and Jones, R. (1996). Parametric bootstrapping in molecular phylogenetics. In Molecular Zoology: Advances, Strategies, and Protocols, J. D. Ferraris and S. R. Palumbi, Eds. (New York: Wiley-Liss), pp. 19-45. [Pg.357]

Obtain posterior distributions of all fixed and random parameters, which typically requires a full Bayesian method as with POPKAN or BUGS, but can be accomplished with NONMEM under certain conditions and assumptions. Alternatively, posterior distributions can be simulated using a parametric bootstrap. [Pg.338]

Both nonparametric and parametric bootstrap approaches can be pursued, depending on whether we are willing to assume that we know the true form of the distribution of the observed sample (the parametric case). The parametric bootstrap is particularly useful when the sample statistic of interest is highly complex (as one might expect when trying to bootstrap a pharmacokinetic parameter derived from a nonlinear mixed effect model) or when we happen to know the distribution, since the additional assumption of a known distribution adds power to the estimate. [Pg.340]

Bootstrap resampling provides two approaches: nonparametric and parametric bootstrap samples. In both we begin with a rule or procedure fitted to n independent... [Pg.236]

