
Data quality indicators

To encompass the seemingly incompatible qualitative and quantitative components of total error, we evaluate them under the umbrella of so-called data quality indicators (DQIs). DQIs are a group of quantitative and qualitative descriptors, namely precision, accuracy, representativeness, comparability, and completeness, collectively referred to as the PARCC parameters, used in interpreting the degree of acceptability or usability of data (EPA, 1997a). As descriptors of the overall environmental measurement system, which includes field and laboratory measurements and processes, the PARCC parameters enable us to determine the validity of the collected data. [Pg.8]

The quality of analytical data is assessed in terms of qualitative and quantitative data quality indicators, which are precision, accuracy, representativeness, comparability, and completeness, or the PARCC parameters. The PARCC parameters are the principal DQIs; the secondary DQIs are sensitivity, recovery, memory effects, limit of quantitation, repeatability, and reproducibility (EPA, 1998a). [Pg.38]

Table 2.2 Quantitative evaluation of data quality indicators...
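Of the PARCC parameters, precision and accuracy are the two that are routinely evaluated numerically: precision as the relative percent difference (RPD) between duplicate results, and accuracy as the percent recovery (%R) from a spiked sample. A minimal sketch of both calculations in Python (the function names and example values are illustrative, not taken from Table 2.2):

    def relative_percent_difference(dup1, dup2):
        """Precision of a duplicate pair, expressed as RPD (%)."""
        return abs(dup1 - dup2) / ((dup1 + dup2) / 2) * 100

    def percent_recovery(spiked_result, unspiked_result, spike_added):
        """Accuracy of a spiked-sample analysis, expressed as %R."""
        return (spiked_result - unspiked_result) / spike_added * 100

    print(relative_percent_difference(10.2, 9.6))   # precision check, ~6.1%
    print(percent_recovery(18.7, 9.9, 10.0))        # accuracy check, 88.0%

In practice, each measured RPD and %R is then compared against method- or project-specific acceptance limits.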
The MDL is one of the secondary data quality indicators. The EPA defines the MDL as the minimum concentration that can be measured and reported with 99 percent confidence that the analyte concentration is greater than zero (EPA, 1984a). [Pg.241]
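The standard EPA procedure for establishing an MDL (40 CFR Part 136, Appendix B) derives it from replicate low-level spikes: MDL = t(n-1, 0.99) x s, where s is the standard deviation of at least seven replicate measurements and t is the one-sided 99th-percentile Student's t value. A minimal sketch of this calculation (the replicate values are illustrative):

    import statistics
    from scipy import stats

    def mdl(replicates):
        """MDL = t(n-1, 0.99) * s for n low-level spike replicates."""
        n = len(replicates)                  # the procedure requires n >= 7
        s = statistics.stdev(replicates)     # sample standard deviation
        t = stats.t.ppf(0.99, df=n - 1)      # one-sided 99% Student's t
        return t * s

    print(mdl([1.8, 2.1, 1.9, 2.3, 2.0, 1.7, 2.2]))  # same units as the input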

The terms estimated quantitation limit (EQL) and practical quantitation limit (PQL) describe the limit of quantitation, another secondary data quality indicator, and are used interchangeably. In practice, the term commonly used by laboratories is the PQL. The EPA, however, prefers the term EQL and defines it as follows: the estimated quantitation limit (EQL) is the lowest concentration that can be reliably achieved within specified limits of precision and accuracy during routine laboratory operating conditions (EPA, 1996a). The PQL is defined similarly (EPA, 1985). [Pg.241]

Acceptance criteria for data quality indicators have been met... [Pg.270]

DOD    Department of Defense
DOT    Department of Transportation
DQA    Data Quality Assessment
DQAR   Data Quality Assessment Report
DQIs   Data Quality Indicators
DQOs   Data Quality Objectives
DRO    diesel range organics [Pg.348]

The question of data quality has not been solved, either by government or industry. CMA has recognized the importance of developing data quality indicators and has begun a cooperative program with government, academe and industry to accomplish this goal. [Pg.58]

These procedures were additions to the previously developed testing procedure. The results from tests 3 and 4 indicated that the three methods were equivalent in data quality, showing that the precautions being taken in the field were adequate to prevent chemical degradation prior to and during field laboratory measurements. [Pg.215]

These qualities are considered in most applications for transparent plastics, forming a basis for directly comparing the transparency of various grades and types of plastic. The data are of value when a material is considered for optical purposes. Many transparent plastics do not have water clarity, and, for this reason, the data should indicate whether... [Pg.329]

British panel. Note that the sensory attributes are to some extent different. Table 35.3 gives some information on the country of origin and the state of ripeness of the olives. Finally, Table 35.4 gives some physico-chemical data on the same samples that are related to the quality indices of olive oils: acid and peroxide levels, UV absorbance at 232 nm and 270 nm, and the difference between the absorbance at 270 nm and the average absorbance at 266 nm and 274 nm. [Pg.308]

The developments in the treatment of RA are tempered by the lack of evidence describing the long-term safety and efficacy of the BRMs. In addition, the cost associated with the medications can be a deterrent to use. Long-term data are needed to determine if patients receiving BRM therapy early in the course of disease have reduced disease activity, reduced joint deformities and disability, improved quality of life, and continued function as productive members of society. Cost analyses of long-term data may indicate that the increased expenses associated with BRMs are offset by the costs avoided for the treatment of advanced RA. [Pg.875]

Another external response to concerns about MCOs has been an increased interest in measuring the quality of care they deliver [35]. This interest has resulted in the development of numerous quality indicators. One example, HEDIS (Health Plan Employer Data and Information Set), is a standardized set of performance indicators used to compare health plans. Developed by the National Committee for Quality Assurance, HEDIS measures allow employers and employees to evaluate different plans. Only a small number of HEDIS indicators are related to medication use, but more drug-related indicators are likely to be added in the future. The use of quality indicators likely will increase as the measures become more refined and tested. [Pg.805]

Another indication that the use of reference materials has improved oceanographic data quality can be seen by examining the degree of agreement between measurements for deep water masses obtained where two separate cruises intersect. Lamb et al. (2002) examined this in detail for cruises in the Pacific Ocean and showed that the measurements of total DIC (for cruises where reference materials were available) typically agreed to within 2 μmol/kg (Fig. 2.3). This is in sharp contrast to the required adjustments to previous oceanic carbon data sets over the years. [Pg.41]

TNO has stated that the size, quality, completeness, and consistency of the database should be considered (Hakkert et al. 1996). Major aspects for the evaluation of the quality of the data supporting the NOAEL are (1) deviations from official guidelines that are not properly substantiated, (2) the number of animals used, (3) the number of dose levels tested, and (4) the adequacy of hematological, biochemical, and pathological examinations. Indications for doubts about the confidence in the database are (1) the absence of certain types of studies, (2) conflicting results between studies, and (3) doubts about the reliability of the route-to-route extrapolation. In contrast, consistency of results from different studies, consistency of animal and human data, and reliable mechanistic data are indicative of a high-confidence database. The default assessment factor for confidence in the database is 1. [Pg.286]

Given the data challenges discussed previously and the increasing use of streamlined methods, it is necessary to continuously improve the consistency and transparency of the information and the assumptions used in such tools to ensure the quality and the validity of the decisions made with the aid of LCA metrics. The inclusion of quality indicators (such as sensitivity and uncertainty analysis) will continue to be an important step in estimating the uncertainties involved in the inventory and impact models. Finally, there is a need to continuously perform peer review assessments by LCA experts, as the current LCA expertise in pharmaceuticals is very limited. When these requirements are fulfilled, LCA metrics are powerful tools to aid the decision making leading to more sustainable pharmaceutical processes. For further examples of FLASC scores and other LCA analyses being applied, see Section 10.4.1. [Pg.34]
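As an illustration of the uncertainty analysis mentioned above, a Monte Carlo propagation of inventory uncertainty into an impact score might look as follows. This is a minimal sketch: the lognormal parameters and the characterization factor are illustrative assumptions, not data from any actual LCA study.

    import random

    random.seed(0)
    N = 10_000
    cf = 2.5   # illustrative characterization factor (impact units per kg)

    # Inventory flow modeled as lognormal: median 1.0 kg, geometric SD ~1.2
    scores = sorted(cf * random.lognormvariate(0.0, 0.18) for _ in range(N))

    print(f"median impact: {scores[N // 2]:.2f}")
    print(f"95% interval:  [{scores[int(0.025 * N)]:.2f}, "
          f"{scores[int(0.975 * N)]:.2f}]")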

Laboratories using these methods for regulatory purposes are required to operate a formal quality control program. The minimum requirements of the program consist of an initial demonstration of laboratory capability and an ongoing analysis of spiked samples to evaluate and document data quality. The laboratory must maintain records to document the quality of data that is generated. Ongoing data quality checks are compared with established performance criteria to determine whether or not the results of analyses meet the demonstrated performance characteristics of the method. When results of spike sample analyses indicate atypical method performance, a quality control check standard must be analyzed to confirm that the measurements were performed in an in-control mode of operation. [Pg.86]
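A minimal sketch of such an ongoing check, comparing each spike recovery against an acceptance window (the 70-130% limits are an illustrative assumption; real limits are method-specific):

    ACCEPTANCE_LIMITS = (70.0, 130.0)   # %R window; assumed for illustration

    def spike_in_control(percent_recovery, limits=ACCEPTANCE_LIMITS):
        """True if the spike recovery falls inside the acceptance window."""
        low, high = limits
        return low <= percent_recovery <= high

    for run_id, recovery in [("S-101", 94.0), ("S-102", 141.0)]:
        if spike_in_control(recovery):
            print(f"{run_id}: %R = {recovery} -> in control")
        else:
            print(f"{run_id}: %R = {recovery} -> atypical; "
                  "analyze a QC check standard")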

This unit defines three different tests that are used to evaluate lipid systems. The first two, the iodine value (IV; see Basic Protocol 1) and the saponification value (SV; see Basic Protocol 2), are used to determine the level of unsaturation and the relative size (chain length) of the fatty acids in the system, respectively. The free fatty acid (FFA) analysis (see Basic Protocol 3) is self-explanatory. Each of these analyses provides a specific set of information about the lipid system. The IV and SV provide relative information; that is, the data obtained are compared with the same data from other, defined lipid systems. In mixed triacylglyceride systems there is no absolute IV that indicates the exact number of double bonds, nor an SV that indicates the exact chain length. The data from the FFA analysis are an absolute value; however, the meaning of the value is not absolute. As a quality indicator, ranges of FFA content are used, and the amount that can be tolerated is product- and/or process-dependent. [Pg.467]
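The arithmetic behind all three tests is simple titration stoichiometry. A minimal sketch using the conventional expressions (the variable names and example values are illustrative, not taken from the protocols in this unit):

    def iodine_value(blank_ml, sample_ml, normality, sample_g):
        """IV, g I2 absorbed per 100 g fat (12.69 = 126.9 g/mol I / 10)."""
        return (blank_ml - sample_ml) * normality * 12.69 / sample_g

    def saponification_value(blank_ml, sample_ml, normality, sample_g):
        """SV, mg KOH per g fat (56.1 = molar mass of KOH, g/mol)."""
        return (blank_ml - sample_ml) * normality * 56.1 / sample_g

    def ffa_percent_as_oleic(titre_ml, normality, sample_g):
        """FFA as % oleic acid (28.2 = 282 g/mol oleic acid / 10)."""
        return titre_ml * normality * 28.2 / sample_g

    print(iodine_value(47.3, 25.1, 0.1, 0.25))         # ~112.7 g I2/100 g
    print(saponification_value(45.0, 31.5, 0.5, 2.0))  # ~189.3 mg KOH/g
    print(ffa_percent_as_oleic(2.5, 0.25, 7.05))       # 2.5% FFA as oleic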

One of the stations, Zeppelin (ZEP), is located far north of the European mainland, on the Svalbard archipelago at 78°N. This far northern position creates many environmental drivers for the aerosol size distribution that are almost never seen at the more southern stations. Because the station lies north of the Arctic Circle, it spends a good part of the year in complete daylight (midnight sun) and another part in complete darkness (polar night). Although the data quality was not always optimal, some indication of the aerosol number size distributions can be obtained. [Pg.310]

What is the importance of the null and alternative hypotheses? They enable us to link the baseline and alternative condition statements to statistical testing and to numerically expressed probabilities. The application of a statistical test to the sample data during data quality assessment will enable us to decide, with a chosen level of confidence, whether the true mean concentration is above or below the action level. If a statistical test indicates that the null hypothesis is not overwhelmingly supported by the sample data at the chosen level of confidence, we will reject it and accept the alternative hypothesis as the true one. In this manner we make a choice between the baseline and the alternative condition. [Pg.26]
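A minimal sketch of such a test, using a one-sided, one-sample t-test against an action level (the measurements, action level, and 95% confidence level are illustrative assumptions):

    from scipy import stats

    samples = [48.0, 52.5, 45.1, 49.8, 46.2, 43.9]  # measured concentrations
    action_level = 50.0
    alpha = 0.05                                    # 95% confidence

    # Baseline (H0): true mean >= action level; alternative (Ha): mean < action level
    t_stat, p_value = stats.ttest_1samp(samples, popmean=action_level,
                                        alternative="less")
    if p_value < alpha:
        print("Reject H0: the mean concentration is below the action level")
    else:
        print("Retain H0: the data do not overwhelmingly contradict the baseline")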

A host of laboratory QC checks are encoded into data qualifier conventions designed to indicate specific QC deficiencies. Different laboratories may have different letters for identifying the same non-compliant events. The meaning of the data qualifier must be explicitly annotated in the reported data. During data evaluation in the assessment phase of the data collection process, the data user will determine the effect of these deficiencies on data quality and usability. [Pg.207]
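A minimal sketch of such an annotation, attaching an explicit qualifier key to a reported result (the letter codes shown follow common EPA usage, but any given laboratory's codes, and the example result, are illustrative assumptions):

    QUALIFIER_KEY = {
        "U": "Analyte not detected at or above the reporting limit",
        "J": "Estimated value; a QC criterion was not fully met",
        "B": "Analyte was also detected in the method blank",
    }

    result = {"analyte": "benzene", "value": 4.2, "units": "ug/L", "qualifier": "J"}
    print(f"{result['analyte']}: {result['value']} {result['units']} "
          f"[{result['qualifier']} = {QUALIFIER_KEY[result['qualifier']]}]")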

