Big Chemical Encyclopedia


Statistics measurement bias

Statistical Control. Statistical quality control (SQC) is the application of statistical techniques to analytical data. Statistical process control (SPC) is the real-time application of statistics to process or equipment performance. Applied to QC lab instrumentation or methods, SPC can demonstrate the stability and precision of the measurement technique. The SQC of lot data can be used to show the stability of the production process. Without such evidence of statistical control, the quality of the lab data is unknown and can result in production challenging adverse test results. Also, without control, measurement bias cannot be determined and the results derived from different labs cannot be compared (27). [Pg.367]

The result from a measurement on a RM is commonly expressed as the difference between the observed value and the certified value. This difference is called the measurement bias and, taking into account both the uncertainty of the RM and the uncertainty added during the measurement, it can be tested for (statistical) significance. ISO Guide 33... [Pg.9]
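
A minimal sketch of such a significance test, assuming the combined standard uncertainty is formed by adding the RM and measurement uncertainties in quadrature and using a coverage factor k = 2; the function name and example values are illustrative, not taken from ISO Guide 33:

```python
import math

def bias_significance(observed, certified, u_meas, u_cert, k=2.0):
    """Test whether the bias (observed - certified) is significant by
    comparing |bias| with k times the combined standard uncertainty."""
    bias = observed - certified
    u_comb = math.sqrt(u_meas**2 + u_cert**2)
    return bias, abs(bias) > k * u_comb

# Example: certified value 10.00 with u = 0.05, observed mean 10.12 with u = 0.04
bias, significant = bias_significance(10.12, 10.00, 0.04, 0.05)
```

Here the bias of 0.12 is just inside 2 times the combined uncertainty (about 0.128), so it would not be declared significant at that coverage.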

Where there is a statistically significant bias, routine measurement results can be corrected as shown in the following equation ... [Pg.85]
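
The equation itself is truncated in the excerpt; the usual correction is simply to subtract the established bias from each routine result. A hedged one-line sketch (names and values illustrative):

```python
def correct_for_bias(measured, bias):
    """Subtract a previously established, statistically significant bias
    from a routine measurement result."""
    return measured - bias

corrected = correct_for_bias(10.12, 0.12)  # approximately 10.0
```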

The evaluation of reproducibility results often focuses more on measuring bias in results than on determining differences in precision alone. Statistical equivalence is often used as a measure of acceptable interlaboratory results. An example of reproducibility criteria for an assay method could be that the assay results obtained in multiple laboratories will be statistically equivalent or the mean results will be within 2% of the value obtained by the primary testing laboratory. [Pg.753]

Reactivity worths of a range of sample sizes and compositions were measured in this series of criticals. For Pu, the statistically significant bias in the calculated value is 430%, independent of sample size in the range of a few grams to as much as 32 kg. For U, the ratio of calculated-to-experimental value is 1.5 for small samples. Analyses of a series of sodium void experiments performed in ZPR-III, Assembly 51, yield ratios of calculated-to-experimental values for the reactivity effect of voiding over a half and full core that are 0.95. [Pg.273]

The fuel centerline temperature data in LOFT large break experiments LP-02-6 and LP-LB-1 were analyzed to determine the bias at peak cladding temperature (PCT) in the cladding exterior surface-mounted thermocouples and the effect of the thermocouple cable on the thermal behavior of the cladding. A statistically determined bias of 11.4 K ± 16.2 K was found in the cladding thermocouples (measured less than actual PCT). The fin effect of the thermocouple cable was determined to be small and within the uncertainty of the data in the blowdown phase of the transients in which PCT occurred. The PCT in LOFT experiments LP-02-6 and LP-LB-1 was determined to be 1104.8 K and 1284.0 K, respectively. [Pg.445]

There are two types of measurement errors, systematic and random. The former are due to an inherent bias in the measurement procedure, resulting in a consistent deviation of the experimental measurement from its true value. An experimenter's skill and experience provide the only means of consistently detecting and avoiding systematic errors. By contrast, random or statistical errors are assumed to result from a large number of small disturbances. Such errors tend to have simple distributions subject to statistical characterization. [Pg.96]

Analytical chemists make a distinction between error and uncertainty. Error is the difference between a single measurement or result and its true value. In other words, error is a measure of bias. As discussed earlier, error can be divided into determinate and indeterminate sources. Although we can correct for determinate error, the indeterminate portion of the error remains. Statistical significance testing, which is discussed later in this chapter, provides a way to determine whether a bias resulting from determinate error might be present. [Pg.64]
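
One common significance test for such a bias is the one-sample t-test of replicate results against the accepted true value. A sketch using only the standard library; the data are illustrative, and the two-tailed critical value t(0.05, df = 4) = 2.776 is the tabulated one:

```python
import statistics

def t_statistic(data, true_value):
    """One-sample t statistic: is the mean biased relative to the true value?"""
    n = len(data)
    mean = statistics.fmean(data)
    s = statistics.stdev(data)
    return (mean - true_value) / (s / n**0.5)

replicates = [10.10, 10.14, 10.09, 10.15, 10.12]
t = t_statistic(replicates, 10.00)
# bias is significant if |t| exceeds the critical value 2.776 (alpha = 0.05, df = 4)
```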

In a performance-based approach to quality assurance, a laboratory is free to use its experience to determine the best way to gather and monitor quality assessment data. The quality assessment methods remain the same (duplicate samples, blanks, standards, and spike recoveries) since they provide the necessary information about precision and bias. What the laboratory can control, however, is the frequency with which quality assessment samples are analyzed, and the conditions indicating when an analytical system is no longer in a state of statistical control. Furthermore, a performance-based approach to quality assessment allows a laboratory to determine if an analytical system is in danger of drifting out of statistical control. Corrective measures are then taken before further problems develop. [Pg.714]

Preferably the transferring lab provides a sample which has already been analyzed, with the certainty of the results being known (41). This can be either a reference sample or a sample spiked to simulate the analyte. An alternative approach is to compare the test results with those made using a technique of known accuracy. Measurements of the sample are made at the extremes of the method as well as the midpoint. The cause of any observed bias, the statistical difference between the known sample value and the measured value, should be determined and eliminated (42). When properly transferred, the method allows for statistical comparison of the results between the labs to confirm the success of the transfer. [Pg.369]

Rectification accounts for systematic measurement error. During rectification, measurements that are systematically in error are identified and discarded. Rectification can be done either cyclically or simultaneously with reconciliation, and either intuitively or algorithmically. Simple methods such as data validation and complicated methods using various statistical tests can be used to identify the presence of large systematic (gross) errors in the measurements. Coupled with successive elimination and addition, the measurements with the errors can be identified and discarded. No method is completely reliable. Plant-performance analysts must recognize that rectification is approximate, at best. Frequently, systematic errors go unnoticed, and some bias is likely in the adjusted measurements. [Pg.2549]

The above assumes that the measurement statistics are known. This is rarely the case. Typically a normal distribution is assumed for the plant and the measurements. Since these distributions are used in the analysis of the data, an incorrect assumption will lead to further bias in the resultant troubleshooting, model, and parameter estimation conclusions. [Pg.2561]

This is a formidable analysis problem. The number and impact of uncertainties make normal plant-performance analysis difficult. Despite their limitations, however, the measurements must be used to understand the internal process. The measurements have limited quality, and they are sparse, suboptimal, and biased. The statistical distributions are unknown. Treatment methods may add bias to the conclusions. The result is the potential for many interpretations to describe the measurements equally well. [Pg.2562]

It is often assumed that the measurements taken with a calibrated device are accurate, and indeed they are if we take account of the variation that is present in every measuring system and bring the system under statistical control. Variation in measurement systems arises due to bias, repeatability, reproducibility, stability, and linearity. [Pg.408]

Definition and Uses of Standards. In the context of this paper, the term "standard" denotes a well-characterized material for which a physical parameter or concentration of chemical constituent has been determined with a known precision and accuracy. These standards can be used to check or determine (a) instrumental parameters such as wavelength accuracy, detection-system spectral responsivity, and stability (b) the instrument response to specific fluorescent species and (c) the accuracy of measurements made by specific instruments or measurement procedures (assess whether the analytical measurement process is in statistical control and whether it exhibits bias). Once the luminescence instrumentation has been calibrated, it can be used to measure the luminescence characteristics of chemical systems, including corrected excitation and emission spectra, quantum yields, decay times, emission anisotropies, energy transfer, and, with appropriate standards, the concentrations of chemical constituents in complex samples. [Pg.99]

Results from the analysis of the RM and the certified value and their uncertainties are compared using simple statistical tests (Ihnat 1993, 1998a). If the measured concentration value agrees with the certified value, the analyst can deduce with some confidence that the method is applicable to the analysis of materials of similar composition. If there is disagreement, the method as applied exhibits a bias, and the underlying causes of error should be sought and corrected, or their effects minimized. [Pg.217]

The objective of sediment and water sampling is to obtain reliable information about the behavior of agrochemicals applied to paddy fields. Errors or variability of results can occur randomly or be due to bias. The two major sources of variability are sediment body or water body variability and measurement variability. For the former, a statistical approach is required; the latter can be divided into sampling variability; handling, shipping and preparation variability; subsampling variability; laboratory analysis variability; and between-batch variability. [Pg.906]

The section following shows a statistical test (text for the Comp Meth MathCad Worksheet) for the efficient comparison of two analytical methods. This test requires that replicate measurements be made on two different samples using two different analytical methods. The test will determine whether there is a significant difference in the precision and accuracy for the two methods. It will also determine whether there is significant systematic error between the methods, and calculate the magnitude of that error (as bias). [Pg.187]
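
The MathCad worksheet itself is not reproduced here, but the two ingredients of such a comparison can be sketched in Python, assuming paired replicate measurements of the same samples by both methods (data and names are illustrative):

```python
import statistics

def f_ratio(a, b):
    """F statistic comparing the precisions (variances) of two methods."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return max(va, vb) / min(va, vb)

def paired_t(a, b):
    """Paired t statistic for the systematic error (bias) between two
    methods measuring the same set of samples."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    return statistics.fmean(d) / (statistics.stdev(d) / n**0.5)

method_a = [5.02, 5.05, 4.98, 5.01, 5.04]
method_b = [4.92, 4.96, 4.89, 4.90, 4.95]
F = f_ratio(method_a, method_b)   # compare with F(0.05, 4, 4) = 6.39
t = paired_t(method_a, method_b)  # compare with t(0.05, df = 4) = 2.776
```

With these illustrative data the precisions are statistically indistinguishable (F well below 6.39), while the paired t statistic far exceeds its critical value, indicating a systematic bias of about 0.1 units between the methods.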

A sample is selected by a random process to eliminate problems of bias in selection and/or to provide a basis for statistical interpretation of measurement data. There are three sampling processes which give rise to different types of random sample ... [Pg.30]
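
For a simple random sample, every member of the population has an equal chance of selection. A minimal sketch (the population and seed are illustrative):

```python
import random

population = list(range(100))        # e.g. 100 candidate sampling locations
rng = random.Random(42)              # fixed seed only for reproducibility
sample = rng.sample(population, 10)  # simple random sample, without selection bias
```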

Precision estimates are key method performance parameters and are also required in order to carry out other aspects of method validation, such as bias and ruggedness studies. Precision is also a component of measurement uncertainty, as detailed in Chapter 6. The statistics that are applied refer to random variation and therefore it is important that the measurements are made to comply with this requirement, e.g. if change of precision with concentration is being investigated, the samples should be measured in a random order. [Pg.82]

The behavior of the detection algorithm is illustrated by adding a bias to some of the measurements. Curves A, B, C, and D of Fig. 3 illustrate the absolute values of the innovation sequences, showing the simulated error at different times and for different measurements. These errors can be easily recognized in curve E when the chi-square test is applied to the whole innovation vector (n = 4 and a = 0.01). Finally, curves F,G,H, and I display the ratio between the critical value of the test statistic, r, and the chi-value that arises from the source when the variance of the ith innovation (suspected to be at fault) has been substantially increased. This ratio, which is approximately equal to 1 under no-fault conditions, rises sharply when the discarded innovation is the one at fault. [Pg.166]
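
A sketch of the global chi-square test on the innovation vector described above, with a deliberate bias added to one measurement; the innovation values are simulated, and the critical value chi2(alpha = 0.01, df = 4) is approximately 13.28:

```python
def chi_square_stat(innovations, variances):
    """Global chi-square statistic: sum of squared innovations,
    each weighted by its variance."""
    return sum(v**2 / s for v, s in zip(innovations, variances))

# four innovations with unit variances; a bias corrupts the third measurement
nu = [0.3, -0.5, 4.2, 0.1]
var = [1.0, 1.0, 1.0, 1.0]
gamma = chi_square_stat(nu, var)
fault_detected = gamma > 13.28  # chi2 critical value, alpha = 0.01, df = 4
```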

The bias is then subtracted from the appropriate measurement, the reconciliation repeated, and the statistic compared with that of the base case. If the comparison is favorable, then the correct variable has been identified to contain a bias. If not, the procedure is repeated for a new suspected measurement, until the correct one is identified. [Pg.174]

Each pi statistic is tested against the threshold value of 1.96, which corresponds to a 95% confidence level. The last principal component is found to be suspect. The contributions from each residual of the constraints to the principal components are given in Fig. 18. From this figure we see that the residuals of units 1 and 2 are the main contributors to all the principal components and in particular to p4. The flowrates involved in units 1 and 2 are fa, fa, fa, fa, fa. Because fa and ft are related to unit 3, which is not suspect, and fa participates in the unsuspected unit 4, we can conclude that the only biased measurements are fa, fa, as was simulated. [Pg.242]

If a large number of readings of the same quantity are taken, then the mean (average) value is likely to be close to the true value if there is no systematic bias (i.e., no systematic errors). Clearly, if we repeat a particular measurement several times, the random error associated with each measurement will mean that the value is sometimes above and sometimes below the true result, in a random way. Thus, these errors will cancel out, and the average or mean value should be a better estimate of the true value than is any single result. However, we still need to know how good an estimate our mean value is of the true result. Statistical methods lead to the concept of standard error (or standard deviation) around the mean value. [Pg.310]
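
These quantities are straightforward to compute; a sketch with illustrative repeat readings:

```python
import statistics

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.8]
mean = statistics.fmean(readings)                      # best estimate of the true value
sem = statistics.stdev(readings) / len(readings)**0.5  # standard error of the mean
```

The standard error shrinks as the square root of the number of readings, which is why averaging many readings gives a better estimate than any single one.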

There are three assumptions about sampling which are common to most of the statistical analysis techniques that are used in toxicology. These are that the sample is collected without bias, that each member of a sample is collected independently of the others, and that members of a sample are collected with replacement. Precluding bias, both intentional and unintentional, means that at the time of selection of a sample to measure, each portion of the population from which that selection is to be made has an equal chance of being selected. Ways of precluding bias are discussed in detail in the section on experimental design. [Pg.874]

II the difference approach, which typically utilises two-sided statistical tests (Hartmann et al., 1998), using either the null hypothesis (H0) or the alternative hypothesis (H1). The evaluation of the method's bias (trueness) is determined by assessing the 95% confidence interval (CI) of the overall average bias compared to the 0% relative bias value (or 100% recovery). If the CI brackets the 0% bias, then the claim that the method generates acceptable data in terms of trueness is accepted; otherwise it is rejected. For precision measurements, if the CI brackets the maximum RSDp at each concentration level of the validation standards, then the method is acceptable. Typically, RSDp is set at <3% (Bouabidi et al., 2010), ...

III the equivalence approach, which typically compares a statistical parameter's confidence interval against pre-defined acceptance limits (Schuirmann, 1987; Hartmann et al., 1995; Kringle et al., 2001; Hartmann et al., 1994). This approach assesses whether the true value of the parameter(s) is included within the respective acceptance limits at each concentration level of the validation standards. The 90% two-sided CI of the relative bias is determined at each concentration level and compared to the ±2% acceptance limits. For precision measurements, if the upper limit of the 95% CI of the RSDp is <3%, then the method is acceptable (Bouabidi et al., 2010), or ... [Pg.28]
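
A sketch of the equivalence check on the relative bias: accept only if the whole confidence interval falls inside the acceptance limits. The 2% limits follow the excerpt; the CI values and function name are illustrative:

```python
def bias_ci_within_limits(rel_bias, half_width, limit=2.0):
    """Equivalence test: accept if the entire CI of the relative bias (%)
    lies inside the +/- limit acceptance interval."""
    lo, hi = rel_bias - half_width, rel_bias + half_width
    return -limit <= lo and hi <= limit

# e.g. relative bias 0.8% with a 90% CI half-width of 0.9%: CI = [-0.1, 1.7]
accepted = bias_ci_within_limits(0.8, 0.9)
```

Note the contrast with the difference approach above: a tiny bias estimated very imprecisely can pass a significance test yet fail this equivalence test, because its wide CI spills over the acceptance limits.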


