Big Chemical Encyclopedia


Data accuracy

Measurement Selection The identification of which measurements to make is an often overlooked aspect of plant-performance analysis. The end use of the data interpretation must be understood (i.e., the purpose for which the data, the parameters, or the resultant model will be used). For example, building a mathematical model of the process to explore other regions of operation is one end use; another is using the data to troubleshoot an operating problem. The level of data accuracy, the amount of data, and the sophistication of the interpretation depend upon the accuracy with which the result of the analysis needs to be known. Daily measurements to a great extent, and special plant measurements to a lesser extent, are rarely planned with the end use in mind. The result is typically too little data of too low accuracy, or an inordinate amount with a resultant misuse of resources. [Pg.2560]

LG Boulu, GM Crippen. Voronoi binding site models: Calculation of binding models and influence of drug binding data accuracy. J Comput Chem 10(5):673-682, 1989. [Pg.367]

Accuracy of data The microprocessor should be capable of automatically acquiring accurate, repeatable data from equipment included in the program. The elimination of user input on filter settings, bandwidths and other measurement parameters would greatly improve the accuracy of acquired data. The specific requirements that determine data accuracy will vary depending on the type of data. For example, a vibration instrument should be able to average... [Pg.806]

Up to now (1971) only a limited number of reaction series have been completely worked out in our laboratories along the lines outlined in Sec. IV. In fact, there are rather few examples in the literature with a sufficient number of data points, accuracy, and temperature range to be worth a thorough statistical treatment. Hence, the examples collected in Table III are mostly from recent experimental work, and the previous ones (1) have been reexamined. When evaluating the results, the main attention should be paid to whether or not the isokinetic relationship holds, i.e., to the comparison of the standard deviations s0 and s00. The isokinetic temperature β is viewed as a mere formal quantity and is given no confidence interval. Comparison with previous treatments is mostly restricted to this value, which has generally and improperly been given too much attention. [Pg.476]

Prospective sources include encounter data (which may or may not be contained in EHRs), patient data input, and randomized, prospective clinical trials. Advantages of prospective sources to inform interactive software include the ability to control and monitor the circumstances of data collection; reduction (as a result of randomization) of sources of bias; potential minimization of missing data; the potential to modify the design of data collection; the ability to verify data accuracy; and the ability to validate and further test assumptions and modify existing programs. [Pg.581]

First, with respect to the type of basis functions used in G, smoothness is by no means restrictive. As is intuitively clear and proven in practice, weird nonsmooth basis functions have to be excluded from consideration, but beyond that, all normal bases are able to create smooth approximations of the available data. Accuracy is not a constraint either: given enough basis functions, arbitrary accuracy for the prediction on the data is possible. [Pg.167]

CRMs were inserted into the approximately 3000 samples in the 76-element geochemical mapping pilot project in southwestern China. The RSD% of these determinations and the certified values are listed in Table 3. All the RSDs are less than 25%, and the analytical data are considered accurate. Data accuracy was also subsequently confirmed by the mapping results. [Pg.437]
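The acceptance check described above (RSD% of replicate determinations against a 25% threshold) can be sketched as follows; the replicate values are hypothetical illustrations, not data from Table 3:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (coefficient of variation) in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate determinations of one element in a CRM (ppm)
replicates = [10.2, 9.8, 10.5, 9.9, 10.1]
rsd = rsd_percent(replicates)
acceptable = rsd < 25.0  # acceptance criterion quoted in the passage above
```

With these numbers the RSD is about 2.7%, well inside the 25% criterion.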

All current SOPs should be available in the work area in which they are used. Each person who may need specific SOPs for his/her work should also have them, perhaps in a file near his/her desk. In addition, there should be a location in which master SOPs for all activities are filed and all SOPs should also be archived so that past revisions are accessible. All obsolete SOPs, however, should be removed and filed away from the work area and clearly identified as obsolete. The decision to revise an SOP must be based on sound observations and protocols that point to improved data accuracy and integrity. Such decisions can be based on a new procedure, a new piece of equipment, etc. SOPs are dynamic documents and should be considered for revision on a regular basis with input from the technicians and scientists doing the work. [Pg.31]

The majority of extreme data were received from the universities. It was evident that the switch-on, inject, switch-off approach was applied (likely by the students) without critical evaluation of the data produced by the computer's software. The least scattered M values were obtained in the industrial laboratories, in which evidently skilled operators performed the measurements. Better data accuracy was obtained for polyamides [154] and for oligomeric polyepoxides [155] than for the unproblematic poly(dimethyl siloxane)s and even for the simplest polymer, polystyrene, likely because only experts measured those difficult samples. [Pg.476]

In general, the data accuracy was surprisingly good. For example, while Deaton and Frost (1946, p. 13) specified that their pure ethane contained 2.1% propane and 0.8% methane, the effects of those impurities may have counterbalanced each other; they were insufficient to cause the data to fall outside the line formed by other ethane data sets. On the other hand, the simple hydrate data of Hammerschmidt (1934) for propane and isobutane all appear to be outliers on such semilogarithmic plots, because they are at temperatures much too far above the upper quadruple (Q2) point. Obvious outlying data were excluded from this work; less obvious outliers may be determined by inspection of the plots and subsequent numerical comparisons. The data, followed by the semilogarithmic plots... [Pg.358]

The mean optical density of this background is then subtracted from that of the peak. Peak intensities measured from single crystals are currently claimed accurate to a level of 3% for optical densities in the range O.D. < 2.5. For fiber diffraction data, accuracy is not as good. [Pg.95]
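The background correction in this passage is a simple mean subtraction; a minimal sketch, with hypothetical optical-density readings:

```python
import statistics

def net_peak_density(peak_od, background_ods):
    """Subtract the mean background optical density from the peak reading."""
    return peak_od - statistics.mean(background_ods)

# Hypothetical readings: one peak and three nearby background measurements
net = net_peak_density(1.80, [0.21, 0.19, 0.20])  # 1.80 - 0.20 = 1.60
```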

To clarify the intent of 21 CFR Part 211.68, CPG 7132a.07, I/O Checking, was published in 1982. According to this CPG, a computer's I/Os must be tested for data accuracy as part of the computer system qualification and, after qualification, as part of the computer system's ongoing verification program. [Pg.16]

Systems must be in place to ensure data accuracy and integrity, with sufficient staff for data review. [Pg.180]

Third, maximum operational capacity data are practically nonexistent at high liquid rates. Efficiency measurements are usually performed at or close to total reflux (liquid-to-vapor mass ratios of about unity) in order to prevent pinching from impairing data accuracy. In order to obtain data at high liquid rates, liquid-to-vapor mass ratios of the order of 2 to 3 or more are usually required. [Pg.476]

Impedance spectroscopy is most promising for electrochemical in situ characterization. Many papers have been devoted to impedance analysis of the AB5-type MH electrode [15-17]. Pellets prepared with different additives were used for electrochemical measurements and comparison. Experimental data are typically represented by one to three semicircles with a tail at low frequencies. These features can be ascribed to the complex structure of the MH electrode, both its chemical structure and its porosity [18, 19], and are also related to the contact between the binder and alloy particles [20]. The author thinks that this is independent of the electrolyte used, the mass of the electrode powder, and the electrode preparation procedure. However, in our case the data accuracy at high frequencies is lower than in the medium-frequency region; there, the dependence on the investigated parameters is small. In Figures 3-5, the electrochemical impedance data are shown as a function of applied potential (1 = -0.35 V, 2 = -0.50 V and 3 = -0.75 V). [Pg.283]

Phenotyping has been performed in some countries with coumarin (not available in all countries), despite some limitations in the accuracy of the data obtained with the analytical methods used (Pelkonen et al. 2000; Cok et al. 2001). The test assesses the amount of 7-hydroxycoumarin (free and conjugated) in urine after ingestion of 2-5 mg coumarin by the subjects. Nicotine has also been used as the probe drug for in vivo testing of CYP2A6 activity. [Pg.730]

Application of these procedures to future work will yield transformation rate data of known precision. Additional audits and protocols are necessary to establish data accuracy and validity. One of the shortcomings of previous experiments is that they provide only a value of the observable while neglecting these three attributes. Applying this methodology to chemical transformation data obtained with this system would yield results with known uncertainty for use in models of atmospheric chemistry and physics. Application of the general methodology that comprises the overall measurement process is important not only in the context of measured transformation rates but also in all experiments and programs where the collection of quality data is desired. [Pg.193]

Van der Waals potential functions for nonbonded interactions display an attractive and a repulsive region [93]. Attractive interactions are small, too small to lead to a detectable effect on nitrogen inversion barriers in the present state of data accuracy. The repulsive portion of the curve is, however, very steep. Thus the presence of bulky substituents leads to appreciable nonbonded repulsions, which are stronger in the pyramidal than in the planar state, where repulsions are partially relieved by the opening of the angle θ. As a consequence, the pyramidal state is destabilized with respect to the planar TS, and the inversion barrier is expected to decrease. [Pg.45]

Evaluation techniques and equipment are as varied as the individual catalytic processes themselves. The long-term goal of catalyst evaluation is to reduce the size of the testing equipment consistent with data that remain reliable and accurate with respect to the commercial process. Invariably, the farther removed in physical size the process simulation is from the commercial unit, the more likely that errors will be introduced which affect data accuracy, accuracy being defined against commercial observations. In addition, smaller equipment places less demand on the physical integrity of a catalyst particle; therefore, additional test methods have been developed to simulate these performance characteristics. Despite these very important limitations, laboratory reactors fully eight orders of magnitude (100 million times) smaller are routinely used in research laboratories by both catalyst manufacturers and petroleum refiners. [Pg.26]

Another important result should also be mentioned: the rupture of unstable films and the formation of black spots occur at the same critical thickness, around 30 nm (grey films), for films from aqueous surfactant solutions [54]. Fig. 3.13 plots the dependence of the most probable values of hcr and hcr,bi at which black spots form in the grey film. The dispersity of hcr values, 0.2 nm, can serve as an estimate of the data accuracy. [Pg.119]

The reliance that can be placed in a computer system is fundamentally determined by the integrity of the data it processes. It must be recognized that data accuracy is absolutely vital in the business context. However well an application works, it will be fundamentally undermined if the data it processes is dubious. Data load is a key task that must be adequately managed to satisfy business and regulatory needs. Loading of data can be broken down into five basic steps: data sourcing, data mapping, data collection, data entry, and data verification. [Pg.261]
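The five data-load steps named above can be sketched as a pipeline; the record fields, the legacy schema, and the verification check here are hypothetical illustrations, not any specific system's design:

```python
def source(raw_rows):
    """Data sourcing: pull records from the legacy export."""
    return list(raw_rows)

def map_fields(rows):
    """Data mapping: rename legacy fields to the target schema."""
    return [{"batch_id": r["lot"], "assay": r["result"]} for r in rows]

def collect(rows):
    """Data collection: gather mapped records into one load set."""
    return rows

def enter(rows, db):
    """Data entry: write records into the target store."""
    db.extend(rows)

def verify(rows, db):
    """Data verification: confirm every record landed unchanged."""
    return all(r in db for r in rows)

db = []
legacy = [{"lot": "A1", "result": 9.7}, {"lot": "A2", "result": 10.3}]
mapped = collect(map_fields(source(legacy)))
enter(mapped, db)
ok = verify(mapped, db)  # True only when entry preserved all records
```

Keeping verification as a distinct final step, rather than trusting the entry step, is what makes the load auditable for the regulatory needs the passage mentions.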

Individuals who perform data checking must be trained in data accuracy as a minimum requirement. Additional training may be necessary as appropriate to the level of checking being performed. [Pg.263]

Check data accuracy and analysis for custom user reports... [Pg.279]

System performance including any system failures — any problems experienced with the system s operation (e.g., user help desk inquiries, system availability, access control, data accuracy)... [Pg.312]

Electronic records are copied, possibly reprocessed, to make them accessible by a new computerized archive system. This can be a large and complex task but has the advantage that the new system is specifically designed for the purpose. This method, however, should not be used where the integrity of the original records being migrated can be disputed, unless data accuracy checks are implemented. Data load requirements are discussed in Chapter 11. [Pg.325]



© 2024 chempedia.info