Big Chemical Encyclopedia


How reliable are measurements

When scientists look at measurements, they want to know how accurate as well as how precise the measurements are. Accuracy refers to how close a measured value is to an accepted value. Precision refers to how close a series of measurements are to one another. Precise measurements might not be accurate, and accurate measurements might not be precise. When you make measurements, you want to aim for both precision and accuracy. [Pg.14]

When you calculate percent error, ignore any plus or minus signs because only the size of the error counts. [Pg.14]

Juan calculated the density of aluminum three times. [Pg.14]

Aluminum has a density of 2.70 g/cm³. Calculate the percent error for each trial. [Pg.15]

Calculate the error for each trial by subtracting Juan's measurement from the accepted value (2.70 g/cm³). [Pg.15]

Suppose someone is planning a bicycle trip from Baltimore, Maryland, to Washington, D.C. The actual mileage will be determined by where the rider starts and ends the trip and the route taken. While planning the trip, the rider does not need to know the actual mileage. All the rider needs is an estimate, which in this case would be about 39 miles. People need to know when an estimate is acceptable and when it is not. For example, you could use an estimate when buying material to sew curtains for a window. You would need more exact measurements when ordering custom shades for the same window. [Pg.36]

Consider the data in Table 2-3. Students were asked to find the density of an unknown white powder. Each student measured the volume and mass of three separate samples. They reported calculated densities for each trial and an average of the three calculations. The powder was sucrose, also called table sugar, which has a density of 1.59 g/cm³. Who collected the most accurate data? Student A's measurements are the most accurate because they are closest to the accepted value of 1.59 g/cm³. Which student collected the most precise data? Student C's measurements are the most precise because they are the closest to one another. [Pg.36]
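The accuracy-versus-precision comparison above can be sketched in a few lines of code. The three-trial data sets below are illustrative placeholders, not the actual Table 2-3 values; only the accepted density of sucrose (1.59 g/cm³) comes from the text. Accuracy is judged by closeness of the mean to the accepted value, precision by the spread of the trials.

```python
from statistics import mean, pstdev

ACCEPTED = 1.59  # g/cm^3, accepted density of sucrose

trials = {  # hypothetical three-trial data sets, not the real Table 2-3
    "Student A": [1.54, 1.60, 1.57],
    "Student B": [1.40, 1.68, 1.45],
    "Student C": [1.70, 1.69, 1.71],
}

for student, densities in trials.items():
    avg = mean(densities)
    accuracy_error = abs(avg - ACCEPTED)  # closeness to accepted value
    precision_spread = pstdev(densities)  # closeness of trials to one another
    print(f"{student}: mean={avg:.2f} g/cm^3, "
          f"|error|={accuracy_error:.2f}, spread={precision_spread:.3f}")
```

With these example numbers, Student A's mean lies closest to 1.59 g/cm³ (most accurate) while Student C's trials have the smallest spread (most precise), mirroring the pattern the excerpt describes.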

Recall that precise measurements may not be accurate. Looking at just the average of the densities can be misleading. Based solely on the average, Student B appears to have collected fairly reliable data. However, on closer inspection, Student B's data are neither accurate nor precise. The data are not close to the accepted value, and they are not close to one another. [Pg.37]

What factors could account for inaccurate or imprecise data? Perhaps Student A did not follow the procedure with consistency. He or she might not have read the graduated cylinder at eye level for each trial. Student C may have made the same slight error with each trial. Perhaps he or she included the mass of the filter paper used to protect the balance pan. Student B may have recorded the wrong data or made a mistake when dividing the mass by the volume. External conditions such as temperature and humidity also can affect the collection of data. [Pg.37]

Percent error The density values reported in Table 2-3 are experimental values, which are values measured during an experiment. The density of sucrose is an accepted value, which is a value that is considered true. To evaluate the accuracy of experimental data, you can calculate the difference between an experimental value and an accepted value. The difference is called an error. The errors for the data in Table 2-3 are listed in Table 2-4. [Pg.37]
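The error and percent-error calculations described above can be sketched as follows. The experimental density values are hypothetical examples; 1.59 g/cm³ is the accepted density of sucrose given in the text. As noted earlier, the sign of the error is dropped because only its size counts.

```python
ACCEPTED = 1.59  # g/cm^3, accepted density of sucrose

def percent_error(experimental: float, accepted: float) -> float:
    """Percent error = |experimental - accepted| / accepted * 100.
    The absolute value drops the sign: only the size of the error counts."""
    return abs(experimental - accepted) / accepted * 100

for value in (1.54, 1.60, 1.70):  # hypothetical trial densities
    err = value - ACCEPTED        # raw error (may be negative)
    print(f"experimental={value}: error={err:+.2f}, "
          f"percent error={percent_error(value, ACCEPTED):.1f}%")
```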


As there is a difference between the measurements and the values of the calculated function, we can safely assume that the fitted parameters are not perfect. They are our best estimates for the true parameters, and an obvious question is: how reliable are these fitted parameters? Are they tightly or loosely defined? As long as the assumption of random white noise applies, there are formulas that allow the computation of the standard deviation of the fitted parameters. While these answers should always be taken with a grain of salt, they do give an indication of how well defined the parameters are. [Pg.121]
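For the simplest case, a straight-line fit y = a + b·x, the standard deviations of the fitted parameters follow from the residual variance. The sketch below uses synthetic data points (assumed, not from the excerpt) and the standard least-squares formulas, under the same random-white-noise assumption mentioned above.

```python
import math

# synthetic data, roughly y = 2x with noise
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b = sxy / sxx        # fitted slope
a = ybar - b * xbar  # fitted intercept

residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in residuals) / (n - 2)  # residual variance

sd_b = math.sqrt(s2 / sxx)                        # std. dev. of slope
sd_a = math.sqrt(s2 * (1 / n + xbar ** 2 / sxx))  # std. dev. of intercept

print(f"a = {a:.3f} +/- {sd_a:.3f}, b = {b:.3f} +/- {sd_b:.3f}")
```

Small parameter standard deviations relative to the parameter values indicate tightly defined parameters; large ones indicate loosely defined parameters, exactly the distinction the excerpt raises.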

How reliable are the data? Are the data reproducible? Are the measurement techniques reliable? Are the test samples reproducible, reliable, and suitable for rheological measurements?... [Pg.54]

Keller and Sadler's model fits the shape of the scattering curve but fails by a factor of two in absolute intensity; but how reliable are the absolute intensities in neutron-scattering measurements? A factor of two is quite a lot to laugh off. Yoon and Flory produce a computer-generated stochastic model which buys agreement in shape and intensity at the expense of unacceptably large variation in real-space density, as... [Pg.202]

Measure the concentration of analyte in several identical aliquots (portions). The purpose of replicate measurements (repeated measurements) is to assess the variability (uncertainty) in the analysis and to guard against a gross error in the analysis of a single aliquot. The uncertainty of a measurement is as important as the measurement itself because it tells us how reliable the measurement is. If necessary, use different analytical methods on similar samples to make sure that all methods give the same result and that the choice of analytical method is not biasing the result. You may also wish to construct and analyze several different bulk samples to see what variations arise from your sampling procedure. Steps taken to demonstrate the reliability of the analysis are called quality assurance. [Pg.9]
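The replicate-measurement idea above can be sketched numerically: the mean of the replicates is the reported result, the spread gives its uncertainty, and an aliquot far outside the spread flags a possible gross error. The concentration values and the 3·s outlier threshold below are illustrative assumptions, not from the excerpt.

```python
from statistics import mean, stdev

# hypothetical analyte concentrations from five replicate aliquots (mg/L)
replicates = [10.2, 10.5, 10.1, 10.4, 10.3]

m = mean(replicates)
s = stdev(replicates)              # sample standard deviation
sem = s / len(replicates) ** 0.5   # standard error of the mean

print(f"concentration = {m:.2f} +/- {sem:.2f} mg/L (n={len(replicates)})")

# A single aliquot far outside mean +/- 3*s suggests a gross error:
suspect = 12.9  # hypothetical suspicious single-aliquot result
print("possible gross error:", abs(suspect - m) > 3 * s)
```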

How reliable are K-XRF measurements for bone Pb with respect to intrasubject uncertainty and variability, intersite-within-subject variability, etc.? This has been addressed in the aggregate by various investigators (Chettle et al., 2003; Todd, 2000; Todd and Chettle, 2003; Todd et al., 2003). Uppermost were parameters such as subject placement during site irradiation, the influence of overlying skin/tissue, irradiation time, stability of measurement in the near term, etc. [Pg.299]

The flowsheet shown in the introduction and that used in connection with a simulation (Section 1.4) provide insights into the pervasiveness of errors: at the source, random errors are experienced as an inherent feature of every measurement process. The standard deviation is commonly substituted for a more detailed description of the error distribution (see also Section 1.2), as this suffices in most cases. Systematic errors due to interference or faulty interpretation cannot be detected by statistical methods alone; control experiments are necessary. One or more such primary results must usually be inserted into a more or less complex system of equations to obtain the final result (for examples, see Refs. 23, 91-94, 104, 105, 142). The question that imposes itself at this point is: how reliable is the final result? Two different mechanisms of action must be discussed ... [Pg.169]

The use of our equations will be illustrated by applying them to a few systems from the literature showing simple kinetic behaviour. However, before starting the calculations we must ascertain how reliable the literature data are. There seems little reason to doubt the soundness of the rate data, but almost all the DP data, essential for the calculation of [Pn ], are suspect because in most studies the DP is obtained by gel permeation chromatography (GPC) with a polystyrene (PSt) calibration. The extent of the uncertainty is revealed by two sets of measurements: Cho and McGrath [19] found that for poly(nBVE) (PnBVE) the DP determined by vapour pressure osmometry (VPO), which one would favour as closest to the real DPn, is related to that found by GPC, DP(PSt), as... [Pg.716]

Table 2. Scale factors for ab initio model vibrational frequencies, adapted from Scott and Radom (1996). Note that these scale factors are determined by comparing model and measured frequencies on a set of gas-phase molecules dominated by molecules containing low atomic-number elements (H-Cl). These scale factors may not be appropriate for dissolved species and molecules containing heavier elements, and it is always a good idea to directly compare calculated and measured frequencies for each molecule studied. The root-mean-squared (rms) deviation of scaled model frequencies relative to measured frequencies is also shown, giving an indication of how reliable each scale factor is.
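Applying a scale factor and computing the rms deviation against measured frequencies can be sketched as follows. The scale factor 0.9614 is the commonly quoted Scott-Radom value for B3LYP/6-31G(d); the frequency lists below are illustrative placeholders, not real spectra, so the printed rms value carries no physical meaning.

```python
SCALE = 0.9614  # commonly quoted Scott-Radom factor for B3LYP/6-31G(d)

calculated = [3100.0, 1650.0, 1050.0]  # hypothetical harmonic freqs, cm^-1
measured   = [2980.0, 1590.0, 1010.0]  # hypothetical observed freqs, cm^-1

# scale each model frequency, then measure agreement with experiment
scaled = [SCALE * f for f in calculated]
rms = (sum((s - m) ** 2 for s, m in zip(scaled, measured))
       / len(scaled)) ** 0.5

print(f"scaled: {[round(f, 1) for f in scaled]}, "
      f"rms deviation = {rms:.1f} cm^-1")
```

As the table caption warns, such a factor should be validated by this kind of direct comparison for each molecule studied rather than trusted blindly.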
Accuracy refers to how well a measurement device is calibrated and how many significant figures one can reliably expect. It is the user's responsibility to know how to read his equipment and not interpolate data to be any more accurate (i.e., to more significant figures) than they really are. [Pg.66]

Tam et al. [141] attempted to determine how reliable and accurate CC and DFT/TDDFT calculations are for this conformationally flexible molecule. In addition, they explored the sensitivity of the chiroptical response to two different factors. One was the accuracy of the mole fractions, and another was how different the ORs of individual rotamers were when calculated at different levels of theory. It was found that with DFT, at the B3LYP/aug-cc-pVDZ level, the optical rotations were overestimated, while CC yielded better agreement with experiment [141, 142]. The predicted gas-phase optical rotations, averaged by CC or DFT mole fractions, were not in good agreement with either gas- or solution-phase experimental measurements. The DFT-calculated optical rotations differed between 15 and 65% from experiment. [Pg.30]

This Handbook aims to explain terminology widely used, and sometimes misused, in analytical chemistry. It provides much more information than the definition of each term but it does not explain how to make measurements. Additionally, it does not attempt to provide comprehensive coverage of all terms concerned with chemistry, instrumentation or analytical science. The authors have addressed primarily those terms associated with the quality assurance, validation and reliability of analytical measurements. The Handbook attempts to place each term in context and put over concepts in a way which is useful to analysts in the laboratory, to students and their teachers, and to authors of scientific papers or books. This approach is particularly important because official definitions produced by many international committees and organisations responsible for developing standards are frequently confusing. In a few cases the wording of these definitions completely obscures their meaning from anyone not already familiar with the terms. [Pg.9]

How relevant and how reliable are the data on which companies are assessed? Sources of data can range from predominantly subjective questionnaires to precise scientific measurement. Environmental performance indicators need to be identified and agreed upon, at least on an industry basis. How they are to be measured should be clearly specified. It will always be difficult to compare CERs from large multinationals with those from small and medium enterprises, particularly as the former could be reporting at any one of a number of different levels - site, national, regional, business or corporate. [Pg.76]

The underlying assumption in statistical analysis is that the experimental error is not merely repeated in each measurement; otherwise there would be no gain in multiple observations. For example, when the pure chemical we use as a standard is contaminated (say, with water of crystallization), so that its purity is less than 100%, no amount of chemical calibration with that standard will show the existence of such a bias, even though all conclusions drawn from the measurements will contain consequent, determinate or systematic errors. Systematic errors act uni-directionally, so that their effects do not average out no matter how many repeat measurements are made. Statistics does not deal with systematic errors, but only with their counterparts, indeterminate or random errors. This important limitation of what statistics does, and what it does not, is often overlooked, but should be kept in mind. Unfortunately, the sum total of all systematic errors is often larger than that of the random ones, in which case statistical error estimates can be very misleading if misinterpreted in terms of the presumed reliability of the answer. The insurance companies know this well, and use exclusion clauses for, say, preexisting illnesses, for war, or for unspecified 'acts of God', all of which act uni-directionally to increase the covered risk. [Pg.39]
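The point above can be illustrated with a small simulation: averaging more and more measurements shrinks the random error toward zero but leaves a systematic bias untouched. The true value, bias magnitude, and noise level are all assumed for illustration.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0
BIAS = 2.5  # assumed systematic error, e.g. from an impure standard

def measure():
    # every measurement carries the same bias plus fresh random noise
    return TRUE_VALUE + BIAS + random.gauss(0.0, 1.0)

for n in (10, 1000, 100000):
    avg = sum(measure() for _ in range(n)) / n
    print(f"n={n:6d}: mean = {avg:.3f} (true value = {TRUE_VALUE})")

# The mean converges to TRUE_VALUE + BIAS, not TRUE_VALUE: repeating
# the measurement can never reveal or remove the systematic error.
```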

A complete treatment of these complex topics is beyond the present scope. However, it can be said that standards are such (Potvin et al., 1985) that most published works regarding measurement instruments do address quality of measurements to some extent. Validity (i.e., how well the measurement reflects the intended quantity) and reliability are most often addressed. However, one could easily be left with the impression that these are binary conditions (i.e., a measurement is or is not reliable or valid), when in fact a continuum is required to represent these constructs. Of all attributes that relate to measurement quality, reliability is most commonly expressed in quantitative terms. This is perhaps because statistical methods have been defined and promulgated for the computation of so-called reliability coefficients (Winer, 1971). Reliability coefficients range from 0.0 to 1.0, and the implication is that 1.0 indicates a perfectly reliable or repeatable measurement process. Current methods are adequate, at best, for making... [Pg.747]

Assess the risks. What use is to be made of the calculated property? Do we need an exact value (accuracy), or are we only trying to avoid major blunders (reliability)? What is the impact if the computed property is in error by 1%, 10%, or 100%? How accurate and reliable are the data that will be used as inputs to the calculation? Remember, no property is ever measured or computed exactly. This means we must understand the problem well enough to be able to determine the desired accuracy. [Pg.468]

