Sampling error nominal data

Quantitative methodology uses large or relatively large samples of subjects (as a rule, students) and tests or questionnaires that the subjects answer. Results are treated by statistical analysis, using a variety of parametric methods (when the data are continuous, at the interval or ratio scale) or nonparametric methods (when the data are categorical, at the nominal or ordinal scale) (30). Data are usually processed with standard commercial statistical packages. Tests and questionnaires have to satisfy the criteria for content and construct validity (analogous to the absence of systematic errors in measurement) and for reliability (which controls for random errors) (31). [Pg.79]
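To make the parametric/nonparametric distinction concrete, here is a minimal sketch (with hypothetical scores, assuming scipy is available) that applies a parametric two-sample t-test to interval-scale data and a nonparametric Mann-Whitney U test to ordinal data:

```python
# Minimal sketch: choosing a test by scale of measurement.
# All data below are hypothetical illustrations.
from scipy import stats

# Interval-scale data (e.g., test scores): a parametric two-sample t-test.
group_a = [72.5, 68.0, 75.3, 70.1, 69.8, 74.2]
group_b = [65.4, 63.2, 67.9, 61.0, 66.5, 64.8]
t_stat, p_parametric = stats.ttest_ind(group_a, group_b)

# Ordinal-scale data (e.g., Likert ratings 1-5): a nonparametric
# Mann-Whitney U test avoids assuming equal spacing between categories.
ratings_a = [4, 5, 3, 4, 4, 5]
ratings_b = [2, 3, 2, 3, 1, 2]
u_stat, p_nonparametric = stats.mannwhitneyu(ratings_a, ratings_b)

print(f"t-test p = {p_parametric:.4f}, Mann-Whitney p = {p_nonparametric:.4f}")
```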

In the fixed sample clinical trial approach, one analysis is performed once all of the data have been collected. The chosen nominal significance level (the Type I error rate) will have been stated in the study protocol and/or the statistical analysis plan. This value is likely to be 0.05: as we have seen, a finding is typically declared statistically significant at the 5% level. In a group sequential clinical trial, the plan is to conduct at least one interim analysis, and possibly several. This procedure will also be described in the trial's study protocol and/or the statistical analysis plan. For example, suppose the plan is to perform a maximum of five analyses (the fifth would have been the only analysis conducted had the trial adopted a fixed sample approach) and to enroll 1,000 subjects in the trial. The first interim analysis would be conducted after data had been collected for the first fifth of the total sample size, i.e., after 200 subjects. If this analysis provided compelling evidence to terminate the trial, it would be terminated at that point. If compelling evidence to terminate the trial was not obtained, the trial would proceed to the point where two-fifths of the total sample size had been recruited, at which point the second interim analysis would be conducted. All of the accumulated data collected to this point, i.e., the data from all 400 subjects, would be used in this analysis. [Pg.182]
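The reason the nominal significance level cannot simply be reused at every interim look can be illustrated by simulation. The following Monte Carlo sketch (hypothetical, not from the source) generates trials under the null hypothesis and tests at five equally spaced looks with an unadjusted alpha of 0.05 at each:

```python
# Monte Carlo sketch: unadjusted interim analyses inflate the overall Type I error.
# Hypothetical simulation under the null (no treatment effect); numpy/scipy assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm, looks, alpha = 10_000, 500, 5, 0.05
rejected = 0

for _ in range(n_trials):
    # Both arms drawn from the same distribution, so any "significant"
    # result is a false positive.
    treatment = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    for k in range(1, looks + 1):
        n_k = k * n_per_arm // looks  # accumulated subjects per arm at look k
        _, p = stats.ttest_ind(treatment[:n_k], control[:n_k])
        if p < alpha:                 # naive, unadjusted boundary at every look
            rejected += 1
            break

print(f"Overall Type I error with 5 unadjusted looks: {rejected / n_trials:.3f}")
# Typically about 0.14, well above the nominal 0.05, which is why group
# sequential designs spend the alpha across looks with adjusted boundaries
# (e.g., O'Brien-Fleming or Pocock).
```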

The goodness of fit of different equations to the experimental data points is assessed from the results of statistical analysis, or simply by considering the standard error of the estimate. It should also be considered that the adequacy of the calibration function for the determination of correct MW values depends on the quality of the narrow-MWD standards. Their nominal MWs are determined by independent absolute methods and are affected by experimental errors, which may differ between samples of different MW or from different producers. A check of the quality of the narrow standards may be obtained by calculating the percent MW deviation of each standard from the calibration curve ... [Pg.254]
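As an illustration of this check, a minimal sketch with hypothetical standards and a conventional third-order polynomial calibration of log M against elution volume (both are assumptions for illustration, not values from the source):

```python
# Sketch: percent MW deviation of narrow standards from a fitted calibration curve.
# Standards, elution volumes, and the polynomial order are hypothetical.
import numpy as np

elution_volume = np.array([18.2, 19.5, 20.9, 22.4, 23.8, 25.1])    # mL
nominal_mw = np.array([2.0e6, 4.7e5, 1.1e5, 2.8e4, 9.1e3, 2.5e3])  # g/mol

# Conventional SEC calibration: fit log10(M) as a polynomial in elution volume.
coeffs = np.polyfit(elution_volume, np.log10(nominal_mw), deg=3)
mw_from_curve = 10 ** np.polyval(coeffs, elution_volume)

# Percent deviation of each standard's nominal MW from the calibration curve.
deviation_pct = 100.0 * (nominal_mw - mw_from_curve) / mw_from_curve
for v, d in zip(elution_volume, deviation_pct):
    print(f"V = {v:5.1f} mL: deviation = {d:+6.2f} %")
```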

Reproducibility It is always necessary to confirm that the (m, T, t) data measured are characteristic of the reaction of interest only, and that any conclusions reached from them are based on reliable, reproducible observations. Comparisons between successive, nominally identical experiments (and for different prepared samples of the reactant) enable the accuracy of the methods used to be assessed and meaningful error limits to be attached to reported parameters, e.g., the activation energy (Ea). Repeated experiments also allow the identification (and reconsideration) of the occasional inconsistent result (was it caused by an unrepresentative sample, an equipment malfunction, or a programming error?). Experiments specifically performed to establish reproducibility are not mentioned in every report, but it is always possible for unanticipated errors to occur. An appropriate... [Pg.163]
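One conventional way to attach error limits to a parameter such as Ea from repeated, nominally identical runs is a t-distribution confidence interval. A minimal sketch with hypothetical replicate values:

```python
# Sketch: mean and 95% confidence interval for an activation energy from
# repeated, nominally identical experiments (replicate values are hypothetical).
import numpy as np
from scipy import stats

ea_replicates = np.array([148.2, 151.7, 149.5, 153.1, 150.4])  # kJ/mol

mean = ea_replicates.mean()
sem = stats.sem(ea_replicates)                     # standard error of the mean
n = len(ea_replicates)
half_width = stats.t.ppf(0.975, df=n - 1) * sem    # two-sided 95% interval

print(f"Ea = {mean:.1f} +/- {half_width:.1f} kJ/mol (95% CI, n = {n})")
# A replicate falling far outside this interval flags the "occasional
# inconsistent result" worth reconsidering (sample, equipment, or data handling).
```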

They concluded that the Type I error rates for the EBE-based methods were near the nominal level (α = 0.05) in most cases. The Type I error rates for the NL-based methods were also near the nominal level in most cases, but were smaller under sparse data conditions and with small sample sizes. The LRT consistently inflated the Type I error rate and, not surprisingly, was the most powerful of the methods examined. This latter result can be rationalized by noting that the inflated Type I error rate acts to inflate statistical power at nonzero effect sizes. They concluded that the LRT was too liberal for sparse data, that the NL-based methods were too conservative, and that the EBE-based methods were the most reliable for covariate selection. [Pg.240]
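For reference, the LRT in this setting compares the objective function values (-2 log-likelihood) of nested models against a chi-squared critical value. A schematic sketch, with hypothetical objective function values:

```python
# Sketch: likelihood ratio test for including one covariate in a nested model.
# The objective function values (-2 log-likelihood) below are hypothetical.
from scipy import stats

ofv_reduced = 2453.8   # model without the covariate
ofv_full = 2448.1      # model with the covariate (one extra parameter)

lrt_statistic = ofv_reduced - ofv_full         # difference in -2LL
df = 1                                         # one added parameter
p_value = stats.chi2.sf(lrt_statistic, df)     # nominal p-value
critical = stats.chi2.ppf(0.95, df)            # 3.84 at alpha = 0.05

print(f"LRT = {lrt_statistic:.2f}, critical = {critical:.2f}, p = {p_value:.4f}")
# The chi-squared reference assumes the asymptotic distribution holds; as noted
# above, with sparse data or a misspecified residual error model the actual
# Type I error rate can be substantially inflated.
```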

In summary, the Type I error rate from using the LRT to test for the inclusion of a covariate in a model was inflated when the data were heteroscedastic and an inappropriate estimation method was used. Type I error rates with FOCE-I were in general near nominal values under most conditions studied, suggesting that in most cases FOCE-I should be the estimation method of choice. In contrast, Type I error rates with FO-approximation and FOCE were very dependent on and sensitive to many factors, including the number of samples per subject, the number of subjects, and how the residual error was defined. The combination of high residual variability with sparse sampling was a particularly disastrous combination using... [Pg.271]

The details of the assessment of stability data are under intense discussion within the scientific community. A majority of laboratories evaluate data with acceptance criteria relative to the nominal concentration of the spiked sample. The rationale is that it is not feasible to apply more stringent criteria to stability evaluations than to the assay acceptance criterion itself. Another common approach is to compare data against a baseline (or day-zero) concentration of a bulk preparation of stability samples, established by repeated analysis either during the accuracy and precision evaluations or by other means. This evaluation eliminates any systematic errors that may have occurred in the preparation of the stability samples. A more statistically sound method of stability data evaluation is to use confidence intervals or to perform trend analysis on the data [24]. In this case, when the observed concentration or response of the stability sample falls below the lower confidence limit (set a priori), the data indicate a lack of analyte stability under the conditions evaluated. [Pg.102]
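A minimal sketch of the confidence-interval approach (hypothetical day-zero replicates and stability results; the confidence level would be set a priori in the analysis plan):

```python
# Sketch: confidence-interval-based stability evaluation.
# Day-zero replicate concentrations and stability results are hypothetical.
import numpy as np
from scipy import stats

day_zero = np.array([98.4, 101.2, 99.7, 100.5, 97.9, 100.1])  # ng/mL, repeated analysis

mean = day_zero.mean()
sem = stats.sem(day_zero)
# One-sided 95% lower confidence bound on the day-zero (baseline) concentration.
lower_bound = mean - stats.t.ppf(0.95, df=len(day_zero) - 1) * sem

stability_results = {"24 h benchtop": 99.1, "3 freeze-thaw cycles": 92.3}  # ng/mL
for condition, conc in stability_results.items():
    verdict = "stable" if conc >= lower_bound else "instability indicated"
    print(f"{condition}: {conc:.1f} ng/mL vs. lower bound {lower_bound:.1f} -> {verdict}")
```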

For infinite-dilution operation, the carrier gas flows directly to the column, which is immersed in a thermostated oil bath (to achieve more precise temperature control than in a conventional GLC oven). The output of the column is monitored with a flame ionization detector or, alternatively, a thermal conductivity detector. Helium is used today as the carrier gas (nitrogen in earlier work). From the difference between the retention time of the injected solvent sample and that of a non-interacting (marker) gas, the thermodynamic equilibrium behavior can be obtained (for the equations, see below). Most experiments to date have been made with packed columns, but capillary columns have been used as well. The experimental conditions must be chosen so that true thermodynamic data are obtained, i.e., equilibrium bulk absorption conditions. Errors caused by unsuitable gas flow rates, unsuitable polymer loading on the solid support, support surface effects, interactions between the injected sample and the solid support in packed columns, unsuitable injected sample size, carrier gas effects, and imprecise knowledge of the actual amount of polymer in the column determine whether the data are truly measured under thermodynamic equilibrium conditions, and they have to be eliminated. The sizeable pressure drop through the column must be measured and accounted for. [Pg.165]
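The basic data-reduction step implied here converts the retention-time difference into a net retention volume, corrected for the column pressure drop with the James-Martin compressibility factor, and then into the specific retention volume per gram of polymer. A sketch with hypothetical input values:

```python
# Sketch: specific retention volume from IGC raw data (all inputs hypothetical).
# V_N = J * F_c * (t_R - t_M);  Vg0 = 273.15 * V_N / (T * w)

t_r = 412.0       # s, retention time of the injected solvent probe
t_m = 35.0        # s, retention time of the non-interacting marker gas
f_c = 0.25        # cm^3/s, carrier gas flow rate at column temperature
p_in = 1.60e5     # Pa, column inlet pressure
p_out = 1.01e5    # Pa, column outlet pressure
temp = 393.15     # K, column temperature
w_polymer = 0.85  # g, mass of polymer stationary phase in the column

# James-Martin compressibility correction for the pressure drop across the column.
ratio = p_in / p_out
j_factor = 1.5 * (ratio**2 - 1.0) / (ratio**3 - 1.0)

v_net = j_factor * f_c * (t_r - t_m)        # cm^3, net retention volume
v_g0 = 273.15 * v_net / (temp * w_polymer)  # cm^3/g, specific retention volume at 0 degC

print(f"J = {j_factor:.3f}, V_N = {v_net:.1f} cm^3, Vg0 = {v_g0:.1f} cm^3/g")
```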

Fig. 16. Temperature dependence of the Hall carrier number V/ecR_H for Y1-xPrxBa2Cu3O7 single crystals with different x values. The fact that the data for two pairs of x values (x = 0 and x = 0.08; x = 0.42 and x = 0.51) are out of sequence in the V/ecR_H vs. T plot, but not in the cot(θH) vs. T² plot (see fig. 17), could be a result of error in measuring the thickness of these crystals. The x = 0.9 value represents the nominal composition, while the x values of the superconducting samples were estimated by comparing the Tc values of the crystals with those of high-quality polycrystalline samples. From Maple et al. (1994).
