Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Bootstrap nonparametric

Porter PS, Rao ST, Ku J-Y, Poirot RL, Dakins M (1997) Small sample properties of nonparametric bootstrap t confidence intervals. J Air Waste Manage Assoc 47:1197-1203. Powell R, Hergt J, Woodhead J (2002) Improving isochron calculations with robust statistics and the bootstrap. Chem Geol 185:191-204... [Pg.652]

Vertzoni et al. (30) recently clarified the applicability of the similarity factor, the difference factor, and the Rescigno index in the comparison of cumulative data sets. Although all these indices should be used with caution (because inclusion of too many data points in the plateau region will make the profiles appear more similar, and because the cutoff time per percentage dissolved is chosen empirically rather than on theoretical grounds), all can be useful for comparing two cumulative data sets. When the measurement error is low, i.e., the data have low variability, mean profiles can be used and any one of these indices could be applied; selection depends on the nature of the difference one wishes to estimate and the existence of a reference data set. When data are more variable, index evaluation must be done on a confidence interval basis, and selection of the appropriate index depends on the number of replications per data set in addition to the type of difference one wishes to estimate. When a large number of replications per data set is available (e.g., 12), construction of nonparametric or bootstrap confidence intervals of the similarity factor appears to be the most reliable of the three methods, provided that the plateau level is 100%. With a restricted number of replications per data set (e.g., three), any of the three indices can be used, provided either nonparametric or bootstrap confidence intervals are determined (30). [Pg.237]

A nonparametric approach can involve the use of synoptic data sets. In a synoptic data set, each unit is represented by a vector of measurements instead of a single measurement. For example, for synoptic data useful for pesticide fate, assessment could take the form of multiple physical-chemical measurements recorded for each of a sample of water bodies. The multivariate empirical distribution assigns equal probability (1/n) to each of n measurement vectors. Bootstrap evaluation of statistical error can involve sampling sets of n measurement vectors (with replacement). Dependencies are accounted for in such an approach because the variable combinations allowed are precisely those observed in the data, and correlations (or other dependency measures) are fixed equal to sample values. [Pg.46]
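The row-resampling idea described above can be sketched in a few lines. This is a minimal illustration, not code from the source: the synoptic data matrix, the variable choice, and the statistic (a correlation between the first two columns) are all hypothetical, but resampling whole measurement vectors is exactly what preserves the observed dependencies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synoptic data set: each row is one water body, columns are
# physical-chemical measurements (e.g., pH, temperature, dissolved organic C).
data = rng.normal(loc=[7.0, 15.0, 5.0], scale=[0.5, 3.0, 1.0], size=(50, 3))

def bootstrap_corr_se(data, n_boot=2000, rng=None):
    """Bootstrap SE of the correlation between the first two variables by
    resampling entire measurement vectors (rows) with replacement, so the
    dependencies among variables are fixed at their sample values."""
    rng = rng or np.random.default_rng()
    n = data.shape[0]
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # sample n row indices with replacement
        sample = data[idx]                 # each resampled unit keeps its full vector
        stats[b] = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]
    return stats.std(ddof=1)

se = bootstrap_corr_se(data, rng=rng)
```

Because only observed vectors can appear in a resample, no impossible variable combinations are ever generated.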

One of the most dependably accurate methods for deriving 95% confidence intervals for cost-effectiveness ratios is the nonparametric bootstrap method. In this method, one resamples from the study sample and computes cost-effectiveness ratios in each of the multiple samples. To do so requires one to (1) draw a sample of size n with replacement from the empirical distribution and use it to compute a cost-effectiveness ratio; (2) repeat this sampling and calculation of the ratio (by convention, at least 1000 times for confidence intervals); (3) order the repeated estimates of the ratio from lowest (best) to highest (worst); and (4) identify a 95% confidence interval from this rank-ordered distribution. The percentile method is one of the simplest means of identifying a confidence interval, but it may not be as accurate as other methods. When using 1,000... [Pg.51]
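Steps (1)-(4) above can be sketched as follows. The paired cost/effect data and their distributions are hypothetical stand-ins; the percentile CI is taken from the sorted replicates exactly as the excerpt describes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired (cost, effect) observations for n = 120 patients.
costs = rng.gamma(shape=4.0, scale=250.0, size=120)
effects = rng.normal(loc=0.5, scale=0.1, size=120)

def ratio_percentile_ci(costs, effects, n_boot=1000, alpha=0.05, rng=None):
    rng = rng or np.random.default_rng()
    n = len(costs)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)            # (1) resample with replacement
        ratios[b] = costs[idx].mean() / effects[idx].mean()  # ratio in each resample
    ratios.sort()                                   # (3) order lowest to highest
    lo = ratios[round(n_boot * alpha / 2) - 1]      # (4) e.g., 25th of 1000
    hi = ratios[round(n_boot * (1 - alpha / 2)) - 1]  #     and 975th of 1000
    return lo, hi

lo, hi = ratio_percentile_ci(costs, effects, rng=rng)
```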

Efron B (1981) Nonparametric estimates of standard error: The jackknife, the bootstrap and other methods. Biometrika 68:589-599 [Pg.753]

The BEST develops an estimate of the total sample population using a small set of known samples. A point estimate of the center of this known population is also calculated. When a new sample is analyzed, its spectrum is projected into the same hyperspace as the known samples. A vector is then formed in hyperspace to connect the center of the population estimate to the new sample spectral point. A hypercylinder is formed about this vector to contain a number of estimated-population spectral points. The density of these points in both directions along the central axis of the hypercylinder is used to construct an asymmetric nonparametric confidence interval. The use of a central 68% confidence interval produces bootstrap distances analogous to standard deviations. [Pg.46]

In a 1988 paper, Lodder and Hieftje used the quantile-BEAST (bootstrap error-adjusted single-sample technique) [77] to assess powder blends. In the study, four benzoic acid derivatives and mixtures were analyzed. The active varied between 0 and 25%. The individual benzoic acid derivatives were classified into clusters using the nonparametric standard deviations (SDs), analogous to SDs in parametric statistics. Acetylsalicylic acid was added to the formulations at concentrations of 1 to 20%. All uncontaminated samples were correctly identified. Simulated solid dosage forms containing ratios of the two polymorphs were prepared. They were scanned from 1100 to 2500 nm. The CVs ranged from 0.1 to 0.9%. [Pg.94]

Figure 14-9 DoD plot for comparison of two drug assays: nonparametric analysis. A histogram shows the relative frequency of N = 65 differences with demarcated 2.5 and 97.5 percentiles determined nonparametrically. The 90% CIs of the percentiles are shown. These were derived by the bootstrap technique.
Linnet K. Nonparametric estimation of reference intervals by simple and bootstrap-based procedures. Clin Chem 2000;46:867-9. [Pg.406]

The interpercentile interval can be determined by parametric, nonparametric, and bootstrap statistical techniques. ... [Pg.435]

Originally a simple nonparametric method for determination of percentiles was recommended by the IFCC. However, the newer bootstrap method is currently the best method available for determination of reference limits. The more complex parametric method is seldom necessary, but it will also be presented here owing to its popularity and frequent misapplication. When we compare the results obtained by these methods, we usually find that the estimates of the percentiles are very similar. Detailed descriptions of these methods are given later in this chapter. [Pg.435]

Nonparametric, parametric, and bootstrap methods are used to determine reference intervals. [Pg.437]

Bootstrap-based methods are reliable for estimating reference intervals. The following version using the rank-based nonparametric method is simple and reliable ... [Pg.442]
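A minimal sketch of a rank-based bootstrap estimate of reference limits, under assumptions not in the source: the reference sample here is simulated Gaussian data, and the limits are taken as the average of the rank-based 2.5th/97.5th percentile estimates over the resamples.

```python
import numpy as np

rng = np.random.default_rng(7)
values = rng.normal(100.0, 10.0, size=240)   # hypothetical reference sample

def bootstrap_reference_interval(x, n_boot=2000, rng=None):
    """Estimate the 2.5th and 97.5th percentiles (the reference limits) by
    averaging the rank-based percentile estimates over bootstrap resamples."""
    rng = rng or np.random.default_rng()
    n = len(x)
    lows, highs = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        sample = x[rng.integers(0, n, size=n)]   # resample with replacement
        lows[b] = np.percentile(sample, 2.5)     # rank-based lower limit
        highs[b] = np.percentile(sample, 97.5)   # rank-based upper limit
    return lows.mean(), highs.mean()

lo, hi = bootstrap_reference_interval(values, rng=rng)
```

Averaging over resamples smooths the raw rank-based estimates, which is one reason the bootstrap version is considered more reliable than the simple nonparametric method.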

When a model is used for descriptive purposes, goodness-of-fit, reliability, and stability, the components of model evaluation, must be assessed. Model evaluation should be done in a manner consistent with the intended application of the PM model. The reliability of the analysis results can be checked by carefully examining diagnostic plots, key parameter estimates, standard errors, case deletion diagnostics (7-9), and/or sensitivity analysis as may seem appropriate. Confidence intervals (standard errors) for parameters may be checked using nonparametric techniques, such as the jackknife and bootstrapping, or the profile likelihood method. Model stability, that is, whether the covariates in the PM model are those that should be tested for inclusion in the model, can be checked using the bootstrap (9). [Pg.226]

Step 2. Generate 100 bootstrap samples, each having the same sample size as the original data set, using nonparametric bootstrap. [Pg.392]

The nonparametric maximum likelihood (NPML) method is a nonparametric bootstrap because F̂, the empirical distribution, is the nonparametric estimate of F (14). The NPML concept states that given a set of unknown terms and a set of data related to the unknowns, the best estimate of the unknowns consists of the values that render the set of data most probable. A schematic representation of the bootstrap is presented in Figure 15.2. [Pg.406]

For the parametric bootstrap, instead of resampling with replacement from the data, one constructs B samples of size n by drawing from the parametric estimate F̂par, where F̂par is the parametric estimate of the distribution F. The procedures of interest are then applied to the B samples in the same manner as for the nonparametric bootstrap. [Pg.407]
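The contrast with the nonparametric case can be made concrete. In this sketch (all choices hypothetical, not from the source) the assumed parametric family is lognormal, F̂par is obtained from log-scale moments, and the statistic bootstrapped is the median.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=0.4, size=60)   # hypothetical observed sample

# Fit the assumed parametric model: here a lognormal via log-scale moments,
# giving the parametric estimate F_par-hat of the distribution F.
mu_hat = np.log(x).mean()
sigma_hat = np.log(x).std(ddof=1)

def parametric_bootstrap_se(mu, sigma, n, n_boot=1000, rng=None):
    """Draw B samples of size n from the fitted distribution (rather than
    resampling the data) and compute the statistic on each, exactly as in
    the nonparametric bootstrap."""
    rng = rng or np.random.default_rng()
    medians = np.array([np.median(rng.lognormal(mu, sigma, size=n))
                        for _ in range(n_boot)])
    return medians.std(ddof=1)

se = parametric_bootstrap_se(mu_hat, sigma_hat, len(x), rng=rng)
```

Swapping `rng.lognormal(...)` for resampling `x` with replacement recovers the nonparametric version; everything downstream is identical.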

To compute the nonparametric bootstrap SE (SEb), the following process must... [Pg.408]

One hundred nonparametric bootstrap data sets were generated. [Pg.411]

Note: 505 bootstrap samples were generated for convenience. For the standard nonparametric bootstrap, 200 replicates is adequate. [Pg.415]

The standard nonparametric bootstrap was used, as there were insufficient replicates to employ the percentile method. [Pg.416]

How to Bootstrap. First, the number of subjects in a multistudy data set for the purposes presented needs to be kept constant to maintain the correct statistical interpretations of bootstrap, that is, correctly representing the underlying empirical distribution of the study populations. Second, the nonparametric bootstrap, as opposed to some other more parametric alternatives, was considered more suitable in order to minimize the dependence on having assumed the correct structural model. [Pg.428]

The CI is [-0.144, -0.108] and does not contain zero, supporting the conclusion that the two elimination rate constants differ. An alternative approach would be to replace the Wald-based confidence intervals with those produced using the nonparametric bootstrap technique. With this technique the data set is sampled with replacement at the subject level many times, and the model is fit to each of these resampled data sets, generating an empirical distribution for each model parameter. Confidence intervals can then be constructed for the model parameters based on the percentiles of their empirical distributions. [Pg.734]
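Subject-level resampling can be sketched as below. Everything here is a hypothetical stand-in: the "model fit" is reduced to a grand mean of a rate-constant-like quantity so the structure is visible, whereas in practice each iteration would refit the full nonlinear mixed effects model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: one array of repeated observations per subject.
subjects = [rng.normal(loc=0.12, scale=0.02, size=6) for _ in range(24)]

def subject_level_bootstrap_ci(subjects, n_boot=1000, alpha=0.05, rng=None):
    """Resample whole subjects (not individual observations) with
    replacement, 'refit' to each resampled data set, and take percentiles
    of the resulting empirical distribution as the CI."""
    rng = rng or np.random.default_rng()
    n = len(subjects)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)               # sample subject indices
        pooled = np.concatenate([subjects[i] for i in idx])
        estimates[b] = pooled.mean()                   # stand-in for a model fit
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

lo, hi = subject_level_bootstrap_ci(subjects, rng=rng)
```

Resampling at the subject level keeps each subject's observations together, which is what preserves the within-subject correlation structure.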

The nonparametric bootstrap using FO-approximation on 1000 bootstrapped data sets and... [Pg.245]

Another internal technique used to validate models, and one that is quite commonly seen, is the bootstrap and its various modifications, which have been discussed elsewhere in this book. The nonparametric bootstrap, the most common approach, generates a series of data sets of size equal to the original data set by resampling with replacement from the observed data set. The final model is fit to each data set and the distribution of the parameter estimates examined for bias and precision. The parametric bootstrap fixes the parameter estimates under the final model and simulates a series of data sets of size equal to the original data set, to which the final model is then fit. Neither is a validation approach per se, as it only provides information on how well model parameters were estimated. [Pg.255]

Consider the univariate case where a random variable X is measured n-times and some statistic f(x) is calculated from the sample vector X. In its most basic form, the nonparametric bootstrap is done as follows ... [Pg.355]
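The basic univariate procedure just described can be sketched directly. The sample here (exponential, n = 30) and the default statistic (the mean) are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=30)   # X measured n times (hypothetical)

def bootstrap_se(x, statistic=np.mean, n_boot=1000, rng=None):
    """Most basic nonparametric bootstrap: draw B resamples of size n with
    replacement, compute f(x*) on each, and report the SD of the B
    replicates as the bootstrap standard error of f(x)."""
    rng = rng or np.random.default_rng()
    n = len(x)
    reps = np.array([statistic(x[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

se = bootstrap_se(x, rng=rng)
```

Any statistic f(x) can be substituted, e.g., `bootstrap_se(x, statistic=np.median)`, without changing the resampling machinery.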

So, for example, if p = 0.60 and B = 1000, then the lower and upper 95% CI shifts from the 25th and 975th observations to the 73rd and 993rd observations, respectively. The nonlinear transformation of the Z-distribution affects the upper and lower values differentially. The bias-corrected method offers the same advantages as the percentile method but gives better coverage when the bootstrap distribution is asymmetrical. The bias-corrected method is not a true nonparametric method because it makes use of a monotonic transformation that results in a normal distribution centered on f(x). If the bias-correction constant is zero, the bias-corrected method gives the same results as the percentile method. [Pg.356]
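The shift from the 25th/975th to the 73rd/993rd observations follows from the bias-corrected formulas, sketched below with hypothetical data (a skewed lognormal sample, statistic = median). With p = 0.60, z0 = Φ⁻¹(0.60) ≈ 0.2533, and Φ(2·z0 ± 1.96) gives ≈ 0.073 and ≈ 0.993, i.e., the 73rd and 993rd of 1000 sorted replicates.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(11)
x = rng.lognormal(0.0, 0.8, size=40)      # skewed sample (hypothetical)
theta_hat = np.median(x)

# Sorted bootstrap replicates of the statistic.
B = 1000
reps = np.sort([np.median(x[rng.integers(0, len(x), size=len(x))])
                for _ in range(B)])

nd = NormalDist()
p = np.mean(reps < theta_hat)             # fraction of replicates below f(x)
z0 = nd.inv_cdf(p)                        # bias-correction constant; 0 if p = 0.5
alpha = 0.05
lo_frac = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))       # adjusted lower percentile
hi_frac = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))   # adjusted upper percentile
lo = reps[int(lo_frac * (B - 1))]
hi = reps[int(hi_frac * (B - 1))]
```

When z0 = 0 the two fractions reduce to α/2 and 1 − α/2, recovering the plain percentile interval, as the text notes.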

Several procedures are available that evaluate the phylogenetic signal in the data and the robustness of trees (Swofford et al., 1996; Li, 1997). The most popular of the former class are tests of data signal versus randomized data (skewness and permutation tests). The latter class includes tests of tree support from resampling of observed data (nonparametric bootstrap). The likelihood ratio test provides a means of evaluating both the substitution model and the tree. [Pg.346]

PAUP performs the nonparametric bootstrap for distance, MP, and ML, using all options available for tree building with these methods. When a bootstrap or jackknife with MP is under way, MAXTREES should be set between 10 and no more than... [Pg.352]

Both nonparametric and parametric bootstrap approaches can be pursued depending on whether we are willing to assume we know the true form of the distribution of the observed sample (parametric case). The parametric bootstrap is particularly useful when the sample statistic of interest is highly complex (as one might expect when trying to bootstrap a pharmacokinetic parameter derived from a nonlinear mixed effect model) or when we happen to know the distribution, since the additional assumption of a known distribution adds power to the estimate. [Pg.340]

The nonparametric bootstrap is useful when distributions cannot be assumed as true or when the sampled statistic is based on few observations. In this setting, an observed data set, for example, X1, ..., Xn, where X could be vector-valued (i.e., concentrations at fixed sampling times), can be summarized in the usual way by a mean, median, and variance. An approximate sampling distribution can be obtained by drawing a sample of the same size as the original sample from the original data with replacement, for example, X1(i), ..., Xn(i), where i is the index of the bootstrap sample... [Pg.340]


See other pages where Bootstrap nonparametric is mentioned: [Pg.2792]    [Pg.54]    [Pg.100]    [Pg.372]    [Pg.372]    [Pg.442]    [Pg.249]    [Pg.405]    [Pg.477]    [Pg.834]    [Pg.836]    [Pg.355]    [Pg.356]    [Pg.357]    [Pg.360]    [Pg.362]    [Pg.348]    [Pg.76]   
See also in source #XX -- [Pg.405]







Bootstrap analysis nonparametric

Bootstrapping

Nonparametric

Nonparametric bootstrap, statistical
