
Non-sampling errors

Non-sampling errors can be categorized into laboratory error and data management error, with laboratory error further subdivided into measurement, data interpretation, sample management, laboratory procedure and methodology errors. [Pg.7]

Clearly, every project team should be concerned with the effect of total error on data relevancy and validity and should make every effort to minimize it. Nevertheless, the cumulative effect of various sampling and non-sampling errors may erode the data validity or relevancy to a point that the data set becomes unusable for project... [Pg.7]

Why do we need to establish the data quality? Why can we not simply trust the field sampling crew and the laboratory to produce data of the required quality according to the SAP's specifications? Unfortunately, the world of field and laboratory operations is not ideal, and errors may go unnoticed and uncorrected. As various sampling and non-sampling errors erode the collected data quality, the quantity of usable data is... [Pg.266]

If a data set is insufficiently complete due to various sampling and non-sampling errors, the test may show that more samples would be needed in order to achieve the project objectives with a stated level of confidence. In this case, additional samples may be collected to fill the data gaps. [Pg.294]
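
As a rough illustration of such a test, the normal-approximation formula n = (z·s/E)² gives the number of samples needed to estimate a mean to within a margin E at a stated confidence level. The sketch below is not from the cited source; the standard deviation, margin, and confidence values are purely illustrative assumptions.

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(std_dev: float, margin: float, confidence: float = 0.95) -> int:
    """Number of samples needed to estimate a mean to within +/- `margin`
    at the stated confidence level, assuming an approximately normal
    sampling distribution (n = (z * s / E)**2)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided critical value
    return ceil((z * std_dev / margin) ** 2)

# Hypothetical example: contaminant data with s = 12 mg/kg, target margin of 5 mg/kg
n_needed = required_sample_size(std_dev=12.0, margin=5.0, confidence=0.95)
print(n_needed)  # if the existing data set is smaller, additional samples fill the gap
```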

Military and ammunition sites contaminated with explosives can cover substantial areas (Gerth et al. 2005). Soil contamination in these sites is often heterogeneous. Explosives are relatively non-volatile, and have low aqueous solubility. Sampling from sites within a few decimetres of one another can result in concentration differences of up to one hundredfold (Jenkins et al. 1996). For example, the coefficients of variation across samples taken from 11 abandoned sites in the USA were 248% for TNT and 137% for Hexogen (Crockett et al. 1998). As a result, sampling error greatly exceeds measurement error. Thus to obtain representative results... [Pg.45]
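
For reference, the coefficient of variation quoted above is simply the standard deviation expressed as a percentage of the mean. The short sketch below uses hypothetical concentration values, not the data of Crockett et al. (1998).

```python
from statistics import mean, stdev

def coefficient_of_variation(concentrations):
    """Percent relative standard deviation across field samples."""
    return 100.0 * stdev(concentrations) / mean(concentrations)

# Hypothetical TNT concentrations (mg/kg) from closely spaced sampling points
tnt = [0.5, 2.1, 48.0, 3.7, 110.0, 0.9, 12.4]
print(f"CV = {coefficient_of_variation(tnt):.0f}%")  # CV well above 100%: field variability dominates
```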

Figure 8. Comparison scores for iris data in the NASK BioBase. 180 (N) genuine and 32,220 (N(N−1)) impostor transactions were used. The average genuine transaction score was 0.22, the average impostor transaction score 0.48, the minimal impostor transaction score 0.37, and the maximal genuine transaction score 0.32. A threshold of 0.35 results in no sample false match and no sample false non-match errors.
In this paper we described a new method of iris texture coding and feature extraction, together with its application to remote access security. We introduced an iris coding method based on the Zak-Gabor coefficient sequence and a methodology for optimal iris feature selection. The proposed approach leads to zero false match and zero false non-match sample errors. We also showed how these ideas can be implemented in a secure remote access framework. [Pg.276]
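
The sample false match and false non-match counts reported in Figure 8 can be checked directly from the score lists once a threshold is fixed. The sketch below assumes, as in the figure, that scores are dissimilarities (genuine comparisons score low); the individual score values are made up for illustration.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Sample false match / false non-match rates for a dissimilarity score,
    where a comparison is declared a match when score < threshold."""
    fmr = sum(s < threshold for s in impostor_scores) / len(impostor_scores)
    fnmr = sum(s >= threshold for s in genuine_scores) / len(genuine_scores)
    return fmr, fnmr

# Illustrative scores consistent with the figure (genuine low, impostor high)
genuine = [0.18, 0.22, 0.25, 0.30, 0.32]
impostor = [0.37, 0.42, 0.48, 0.55, 0.61]
print(error_rates(genuine, impostor, threshold=0.35))  # -> (0.0, 0.0)
```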

Probability distribution models can be used to represent frequency distributions of variability or uncertainty distributions. When the data set represents variability for a model parameter, there can be uncertainty in any non-parametric statistic associated with the empirical data. For situations in which the data are a random, representative sample from an unbiased measurement or estimation technique, the uncertainty in a statistic could arise because of random sampling error (and thus be dependent on factors such as the sample size and range of variability within the data) and random measurement or estimation errors. The observed data can be corrected to remove the effect of known random measurement error to produce an error-free data set (Zheng & Frey, 2005). [Pg.27]
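
One common way to quantify the random sampling error in a statistic computed from such a data set is a percentile bootstrap. The following sketch is a generic illustration, not the correction procedure of Zheng and Frey (2005); the data values are hypothetical.

```python
import random

def bootstrap_ci(data, statistic, n_boot=10_000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a statistic,
    reflecting random sampling error in the empirical data set."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(statistic([rng.choice(data) for _ in range(n)]) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical variability data (e.g. measured concentrations)
sample = [3.1, 4.7, 2.9, 5.5, 4.1, 6.8, 3.3, 4.9, 5.2, 2.7]
print(bootstrap_ci(sample, lambda xs: sum(xs) / len(xs)))  # uncertainty in the mean
```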

We have also pursued electrochemical back-plating of the copper sample to reduce the copper ion concentration and leave in solution impurities such as thorium and uranium, which should not plate out at the half-cell potential of copper. Theoretically, the amount of sample that can be processed in this manner is not limited. All materials, including any non-sample electrodes, must not add contamination and must be of extreme purity. Also, the amount of copper remaining in solution must be back-plated to <10 μg/ml, and if a sulfate system is used, which is useful in support of further developing the predictive rejection-rate information, then the sulfate ion should be <10 mmol as well. This approach hinges on the rejection rate remaining sufficiently high as to not introduce an undue amount of error. We have measured rejection rates as low as 10², but even at 10² this would only represent a 1% error in the assay result. [Pg.160]

Non-counting errors are generally ignored, especially sampling... [Pg.180]

Micrometer measurements of thickness were made on the solidified PE samples. Errors due to polymer contraction on solidification were small; although solidification generally results in a net volume change of the solid, the samples were not constrained in any dimension, so contraction occurred along the length and width of the specimen as well as the thickness. The portion of the contraction resulting in a decrease in sample thickness was observed to be non-uniform across the face of the sample; micrometer measurements on this face were taken as true melt thickness. [Pg.14]

Repeated measures ANOVA carried out on the data revealed that differences between the conditions were unlikely to have arisen due to sampling error (F = 8.39, p = 0.004). An overall effect size of 0.48 (partial eta squared) showed that almost 50 per cent of the variation in error scores can be accounted for by differing fatigue levels. Post-hoc comparisons revealed that mean reaction time was significantly slower before the non-rested simulation than at baseline (M = 0.44, t = 2.53, p = 0.032). [Pg.306]
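
Partial eta squared relates directly to the reported F statistic through η²p = F·df_effect / (F·df_effect + df_error). The sketch below uses hypothetical degrees of freedom (df_effect = 1, df_error = 9), chosen only because they reproduce an effect size of about 0.48 from F = 8.39; the cited study's actual degrees of freedom are not given here.

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Convert a reported F statistic into a partial eta-squared effect size:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Hypothetical degrees of freedom; with df_effect = 1 and df_error = 9
# the reported F = 8.39 gives an effect size of about 0.48.
print(round(partial_eta_squared(8.39, 1, 9), 2))  # -> 0.48
```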

Analytical procedures may be either destructive or non-destructive. Indirect or destructive methods require a significant alteration to the sample so that the additives can be removed from the plastic material for subsequent detection. Direct or non-destructive methods involve minimal sample preparation, which greatly speeds up the analytical procedure. Quantitative analytical procedures are either continuous or discontinuous. In non-continuous routine analytical procedures a small amount of substance is analysed which is considered to be representative of the whole sample. Errors are introduced by the sampling procedures, and no measure of continuous changes in the composition of the material is obtained. Continuous monitoring needs more resources but can give a better and more extensive measure of the composition of the material and its variation in time (cf. Chp. 7). [Pg.598]

One should experiment with the various options available for reducing sampling errors at endpoints where atoms are being introduced into or removed from the system. A variety of choices exist for this (bond shrinking, alternative λ scaling, a shift function in the non-bonded potential, etc.). Some combination of these choices will probably allow more efficient calculation of the free energy.
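
As one concrete example of modifying the non-bonded potential near an endpoint, a Beutler-style soft-core Lennard-Jones term removes the singularity at r → 0 as an atom is decoupled. The sketch below is a generic illustration, not the specific scheme of the cited source; the ε, σ, and α parameter values are arbitrary assumptions.

```python
def softcore_lj(r: float, lam: float, epsilon: float = 0.65, sigma: float = 0.32,
                alpha: float = 0.5) -> float:
    """Beutler-style soft-core Lennard-Jones potential (kJ/mol, nm).
    At lam = 0 this reduces to the ordinary LJ potential; as lam -> 1 the
    interaction is switched off while remaining finite at small r, which
    tames the endpoint singularity that inflates sampling error."""
    r6 = (r / sigma) ** 6
    denom = alpha * lam + r6
    return 4.0 * epsilon * (1.0 - lam) * (1.0 / denom ** 2 - 1.0 / denom)

# The ordinary potential diverges as r -> 0; the soft-core form does not:
print(softcore_lj(0.05, lam=0.0))   # very large and positive (near-singular)
print(softcore_lj(0.05, lam=0.5))   # finite and well-behaved
```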

