Big Chemical Encyclopedia


Sampling statistical criterion

The most reliable method of diagnosing UTI is by quantitative urine culture. Urine in the bladder is normally sterile, making it statistically possible to differentiate contamination of the urine from infection by quantifying the number of bacteria present in a urine sample. This criterion is based on a properly collected midstream clean-catch urine specimen. Patients with infection usually have greater than 10^5 bacteria/mL of urine. It should be emphasized that as many as one-third of women with symptomatic infection have less than 10^5 bacteria/mL. A significant portion of patients with UTIs, either symptomatic or asymptomatic, also have less than 10^5 bacteria/mL of urine. [Pg.2085]

However, in practice, even if the structural condition is unchanged, the damage feature Dp can differ from zero owing to the nonlinear and time-variant behavior of a bridge, measurement noise, and the errors introduced by the AR and ARX processes. Therefore, damage diagnosis is performed statistically, on the basis of a large number of data samples. The criterion for evaluating the probability distribution of Dp for the pair U and D is comparison with the distribution function for U and N. [Pg.209]

The acceptance criterion for recovery data is 98-102% or 95-105% for drug preparations. In biological samples, the recovery should be within 10%, and the range of the investigated concentrations should be within 20% of the target concentrations. For trace-level analysis, the acceptance criteria are 70-120% (below 1 ppm), 80-120% (above 100 ppb), and 60-100% (below 100 ppb) [2]. For impurities, the acceptance criteria are 20% (for impurity levels <0.5%) and 10% (for impurity levels >0.5%) [30]. The AOAC (cited in Ref. [11]) describes recovery acceptance criteria at different concentrations, as detailed in Table 2. A statistically valid test, such as the t-test, the Doerffel test, or the Wilcoxon test, can be used to establish whether there is a significant difference between the result of the accuracy study and the true value [29]. [Pg.252]
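The recovery check described above can be sketched as a small helper. The acceptance window (98-102% for drug preparations) comes from the text; the function name and example values are hypothetical.

```python
# Hypothetical helper: compute percent recovery and check it against an
# acceptance range, e.g. the 98-102% window cited for drug preparations.
def recovery_ok(measured, true_value, low=98.0, high=102.0):
    recovery = 100.0 * measured / true_value
    return low <= recovery <= high, recovery

ok, rec = recovery_ok(measured=99.1, true_value=100.0)
print(ok, round(rec, 1))  # True 99.1
```

For trace-level work the same helper would simply be called with the wider windows quoted in the text for the relevant concentration range.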

The criterion of mean-unbiasedness is occasionally overemphasized. For example, the bias of an MLE may be mentioned in such a way as to suggest that it is an important drawback, without mention of other statistical performance criteria. Particularly for small samples, precision may be a more important consideration than bias for the purpose of obtaining an estimate that is likely to be close to the true value. It can happen that an attempt to correct bias lowers precision. An insistence that all estimators be UB would conflict with another valuable criterion, namely parameter invariance (Casella and Berger 1990). Consider the estimation of variance. As remarked in Sokal and Rohlf (1995), the familiar sample variance (usually denoted s²) is UB for the population variance (σ²). However, the sample standard deviation (s = √s²) is not UB for the corresponding parameter σ. That unbiasedness cannot be preserved under all transformations of a parameter follows simply from the fact that the mean of a nonlinearly transformed variable does not generally equal the transformation applied to the mean of the original variable. It would rarely be reasonable to argue that bias is important on one scale and unimportant on any other scale. [Pg.38]
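The s² versus s point above is easy to demonstrate numerically. The following Monte Carlo sketch (population parameters and sample size are arbitrary choices for illustration) draws many small samples from a normal population and averages both estimators: the mean of s² lands near σ², while the mean of s falls noticeably below σ.

```python
import random
import statistics

# Monte Carlo illustration (hypothetical parameters): draw many small
# normal samples and compare the average of the unbiased sample variance
# s^2 with the average of the sample standard deviation s.
random.seed(0)
mu, sigma, n, trials = 0.0, 2.0, 5, 20000

mean_var = 0.0
mean_sd = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    s2 = statistics.variance(sample)  # divides by n-1: unbiased for sigma^2
    mean_var += s2
    mean_sd += s2 ** 0.5
mean_var /= trials
mean_sd /= trials

print(round(mean_var, 2))  # close to sigma^2 = 4
print(round(mean_sd, 2))   # noticeably below sigma = 2
```

With n = 5 the downward bias of s is on the order of 6%, which is exactly the kind of transformation effect the passage describes.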

The statistical prediction errors for the unknowns are compared with the maximum statistical prediction error found during model validation in order to assess the reliability of the prediction. Prediction samples whose statistical prediction errors are significantly larger than this criterion are investigated further. In the model validation, the maximum error observed is 0.025 for component A (Figure 5.11a) and 0.019 for component B (Figure 5.11b). For unknown 1, the statistical prediction errors are within this range. For the other unknowns, the statistical prediction errors are much larger, so the predicted concentrations should not be considered valid. [Pg.287]
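The screening logic described above can be sketched as follows. The validation maxima (0.025 for component A, 0.019 for component B) come from the text; the function name and the unknown-sample error values are hypothetical.

```python
# Hypothetical sketch: flag prediction samples whose statistical prediction
# error exceeds the maximum error observed in model validation
# (0.025 for component A and 0.019 for component B, as in the text).
max_val_error = {"A": 0.025, "B": 0.019}

def prediction_valid(errors, limits=max_val_error):
    return all(errors[c] <= limits[c] for c in limits)

unknown_1 = {"A": 0.021, "B": 0.015}  # within the validation range
unknown_2 = {"A": 0.110, "B": 0.094}  # much larger: investigate further

print(prediction_valid(unknown_1))  # True
print(prediction_valid(unknown_2))  # False
```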

The basic criterion for successful validation was that a method should come within 25% of the "true value" at the 95% confidence level. To meet this criterion, the protocol for experimental testing and method validation was established with a firm statistical basis. A statistical protocol provided methods of data analysis that allowed the accuracy criterion to be evaluated with statistical parameters estimated from the laboratory test data. It also gave a means to evaluate precision and bias, independently and in combination, to determine the accuracy of sampling and analytical methods. The substances studied in the second phase of the study are summarized in Table I. [Pg.5]

For our validations, CVp is a pooled estimate calculated from the particular type of statistical data set (36 samples) described earlier in the Statistical Experimental Design section of this report. A statistical procedure is given in Hald for determining an upper confidence limit for the coefficient of variation. This general theory had to be adapted for application to a pooled CVp estimate. For this design, and under the stated assumptions, there is a one-to-one correspondence between values of CVp and upper confidence limits for CVp. Therefore, the confidence limit criterion given above is equivalent to another criterion based on the relationship of CVp to its critical value. The... [Pg.508]
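The pooled estimate itself can be sketched as below. The 6-level by 6-replicate layout mirrors the 36-sample design mentioned in the text, but the data are synthetic and the Hald-based upper confidence limit procedure is not reproduced here.

```python
import random
import statistics

# Hypothetical sketch of a pooled CV estimate from a 6-level x 6-replicate
# (36-sample) design like the one described in the text. The upper
# confidence limit procedure from Hald is NOT reproduced here.
random.seed(1)
levels = [10.0, 20.0, 50.0, 100.0, 200.0, 500.0]
groups = [[random.gauss(m, 0.05 * m) for _ in range(6)] for m in levels]

# per-level CV, then pool by root-mean-square across levels
cvs = [statistics.stdev(g) / statistics.fmean(g) for g in groups]
cv_pooled = (sum(cv * cv for cv in cvs) / len(cvs)) ** 0.5
print(round(cv_pooled, 3))
```

The root-mean-square pooling shown is one common convention; the report's exact pooling formula may differ.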

We have presented a statistical experimental design and a protocol for evaluating laboratory data to determine whether the sampling and analytical method tested meets a defined accuracy criterion. Accuracy is defined relative to a single measurement from the test method rather than for a mean of several replicate test results. Accuracy here is the difference between the test result and the "true value," and thus must combine the two sources of measurement error ... [Pg.512]
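One common rough way to combine the two error sources into a single-measurement accuracy figure, assuming approximately normal errors, is |bias| + 1.96·CV, since about 95% of single results then fall within that distance of the true value. This is an illustrative assumption, not necessarily the protocol's exact statistic.

```python
# Rough illustration (an assumption, not the protocol's exact statistic):
# for approximately normal measurement error, roughly 95% of single test
# results fall within |bias| + 1.96*CV of the true value, so this sum is
# one simple accuracy figure combining both error sources.
def approx_accuracy(bias, cv):
    return abs(bias) + 1.96 * cv

# e.g. 3% bias and 8% CV:
acc = approx_accuracy(0.03, 0.08)
print(round(acc, 3))  # 0.187 -> within a 25% accuracy requirement
```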

Note that when more than 85% of the drug is dissolved from both products within 15 minutes, dissolution profiles may be accepted as similar without further mathematical evaluation. For the sake of completeness, one should add that some concerns have been raised regarding the assessment of similarity using the direct comparison of the f1 and f2 point estimates with the similarity limits [140-142]. Attempts have been made to place the use of the similarity factor f2 as a criterion for assessing similarity between dissolution profiles in a statistical context using a bootstrap method [141], since its sampling distribution is unknown. [Pg.112]
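The similarity factor f2 referred to above has a standard closed form (Moore and Flanner): f2 = 50·log10(100 / √(1 + mean squared difference between profiles)). The dissolution profiles below are hypothetical percent-dissolved values at common time points.

```python
import math

# Similarity factor f2 for two dissolution profiles sampled at the same
# time points: f2 = 50 * log10(100 / sqrt(1 + mean squared difference)).
def f2(reference, test):
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

ref = [20.0, 40.0, 60.0, 80.0, 90.0]  # hypothetical % dissolved
tst = [18.0, 38.0, 57.0, 78.0, 88.0]
value = f2(ref, tst)
print(round(value, 1))  # f2 >= 50 is conventionally read as "similar"
```

Because f2 is a single point estimate, the bootstrap approach mentioned in the text resamples the individual dosage-unit profiles to attach a confidence interval to this value.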

In Sections 2 to 4, we review the technology of synthetic oligonucleotide microarrays and describe some of the popular statistical methods that are used to discover genes with differential expression in simple comparative experiments. A novel Bayesian procedure is introduced in Section 5 to analyze differential expression that addresses some of the limitations of current procedures. We proceed, in Section 6, by discussing the issue of sample size and describe two approaches to sample size determination in screening experiments with microarrays. The first approach is based on the concept of reproducibility, and the second approach uses a Bayesian decision-theoretic criterion to trade off information gain and experimental costs. We conclude, in Section 7, with a discussion of some of the open problems in the design and analysis of microarray experiments that need further research. [Pg.116]

Statistical analysis, as applied to production or other processes in which quantities of materials are continuously being tested or measured, is known as quality control. In this statistical method, some measurable attribute of the processed material is used as a criterion of the quality of the product. Random samples are drawn from the production line in succeeding time intervals, and the means of small groups of these samples are compared with some standard. Statistical methods, particularly the t-test, provide a method of determining when the measured mean differs from the control value by an amount greater than would be expected by chance. [Pg.772]
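The t-test comparison described above can be sketched as follows. The measurement data, target value, and sample size are hypothetical; the critical value 2.262 is the standard two-sided 5% point for 9 degrees of freedom.

```python
import statistics

# Minimal one-sample t-test for quality control: does the mean of a small
# sample of measurements differ from the control (target) value by more
# than would be expected by chance? Data and target are hypothetical.
sample = [50.4, 50.2, 50.6, 50.3, 50.5, 50.1, 50.4, 50.3, 50.5, 50.2]
target = 50.0

n = len(sample)
mean = statistics.fmean(sample)
s = statistics.stdev(sample)            # n-1 in the denominator
t = (mean - target) / (s / n ** 0.5)

t_crit = 2.262  # two-sided 5% critical value, 9 degrees of freedom
print(round(t, 2), abs(t) > t_crit)  # 7.0 True -> flag the process
```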

Once a suitable HPLC separation mode has been selected, the experimental conditions should be adjusted to suit the objective of the separation. To proceed in this way, either empirical or systematic (statistical or predictive) HPLC method development strategies can be used. Any method development necessitates a convenient measure of the quality of separation. The separation of two sample compounds is most often measured either by resolution or by the peak separation function (see Section 1.1.2). The resolution is an especially useful criterion of separation, as its definition, Eq. (1.3), can be transformed into another expression relating directly to the experimental conditions of separation: Rs = (√N/4) · [(α − 1)/α] · [k/(1 + k)]. [Pg.53]
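The resolution expression in terms of experimental conditions can be evaluated directly, assuming the familiar Purnell-type form with plate number N, selectivity α, and retention factor k; the numerical inputs below are hypothetical.

```python
import math

# Resolution in terms of experimental conditions, assuming the familiar
# Purnell-type form: Rs = (sqrt(N)/4) * ((alpha - 1)/alpha) * (k/(1 + k)),
# with plate number N, selectivity alpha, and retention factor k.
def resolution(N, alpha, k):
    return (math.sqrt(N) / 4.0) * ((alpha - 1.0) / alpha) * (k / (1.0 + k))

rs = resolution(N=10_000, alpha=1.1, k=2.0)
print(round(rs, 2))  # 1.52 -> baseline-resolved (Rs >= 1.5)
```

The three factors show which handles matter: N enters only as a square root, while small gains in selectivity α have a disproportionately large effect on Rs.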


