Big Chemical Encyclopedia


The Measurement Process

In general, whenever any quantitative measurement is made, the value obtained is only an estimate of the true value of the property being measured. Many factors will cause the value obtained to differ from the true value. These can be summarized as follows: [Pg.156]

The result of a quantitative chemical measurement is not an end in itself. It has a cost and therefore it always has a purpose. It may be used, for example, in checking products against specifications or legal limits, to determine the yield of a reaction, or to estimate monetary value. [Pg.156]

Whatever the reason for obtaining it, the result of a chemical measurement has a certain importance, since decisions based upon it will very often need to be made. These decisions may well have implications for the health or livelihood of millions of people. In addition, with the increasing liberalization of world trade, there is pressure to eliminate the replication of effort in testing products. [Pg.156]

It is clear then that some indicator of quality is required if chemical measurements are to be used with confidence. Such an indicator must  [Pg.157]

An indicator that meets these requirements is measurement uncertainty. [Pg.157]


The first equation (1) is the equation of state and the second equation (2) is derived from the measurement process. Finally, G5(r,r′) is a row-vector that maps the three components of the anomalous current density vector Je(r) to the normal component of the induced magnetic field. This system is nonlinear (bilinear) because the product of the two unknowns f(r) and E(r) is present. [Pg.328]

When designing and evaluating an analytical method, we usually make three separate considerations of experimental error. First, before beginning an analysis, errors associated with each measurement are evaluated to ensure that their cumulative effect will not limit the utility of the analysis. Errors known or believed to affect the result can then be minimized. Second, during the analysis the measurement process is monitored, ensuring that it remains under control. Finally, at the end of the analysis the quality of the measurements and the result are evaluated and compared with the original design criteria. This chapter is an introduction to the sources and evaluation of errors in analytical measurements, the effect of measurement error on the result of an analysis, and the statistical analysis of data. [Pg.53]

Determine whether there is any evidence that the measurement process is not under statistical control at a = 0.05. [Pg.87]

The variance for the sample of ten tablets is 4.3. A two-tailed significance test is used, since the measurement process is considered out of statistical control if the sample's variance is either too good or too poor. The null and alternative hypotheses are... [Pg.87]
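The two-tailed test described above can be sketched as follows. Note that the established "in control" process variance σ₀² is not given in this excerpt, so the value below is invented purely for illustration; the critical values are the standard χ² table entries for 9 degrees of freedom at α = 0.05 (two-tailed).

```python
# Two-tailed chi-square test for a single variance (sketch).
n = 10          # number of tablets
s_sq = 4.3      # sample variance from the excerpt
sigma0_sq = 2.5 # hypothetical "in control" variance (assumed, not in the text)

chi_sq = (n - 1) * s_sq / sigma0_sq  # test statistic, df = n - 1 = 9

# Critical values for df = 9, alpha = 0.05, two-tailed (standard tables):
# chi^2_{0.025,9} = 2.700 and chi^2_{0.975,9} = 19.023
lower, upper = 2.700, 19.023

out_of_control = chi_sq < lower or chi_sq > upper
print(f"chi^2 = {chi_sq:.2f}, out of control: {out_of_control}")
```

With these assumed numbers the statistic falls between the two critical values, so the null hypothesis (process under statistical control) is retained.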

Selectivity in FIA is often better than that for conventional methods of analysis. In many cases this is due to the kinetic nature of the measurement process, in which potential interferents may react more slowly than the analyte. Contamination from external sources also is less of a problem since reagents are stored in closed reservoirs and are pumped through a system of transport tubing that, except for waste lines, is closed to the environment. [Pg.658]

Control charts were originally developed in the 1920s as a quality assurance tool for the control of manufactured products. Two types of control charts are commonly used in quality assurance: a property control chart, in which results for single measurements, or the means for several replicate measurements, are plotted sequentially; and a precision control chart, in which ranges or standard deviations are plotted sequentially. In either case, the control chart consists of a line representing the mean value for the measured property or the precision, and two or more boundary lines whose positions are determined by the precision of the measurement process. The position of the data points about the boundary lines determines whether the system is in statistical control. [Pg.714]
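A minimal property control chart check might look like the sketch below: the boundary lines are set from the established precision of the measurement process, and points outside them indicate loss of statistical control. The baseline statistics and the measured results are invented for illustration.

```python
# Property control chart check (sketch); all numbers are illustrative.
baseline_mean, baseline_s = 10.0, 0.2  # historical mean and standard deviation
ucl = baseline_mean + 3 * baseline_s   # upper control (action) limit
lcl = baseline_mean - 3 * baseline_s   # lower control (action) limit

results = [10.1, 9.8, 10.0, 10.3, 9.9, 10.7, 10.2]

# Flag any point falling outside the boundary lines
flagged = [i for i, x in enumerate(results) if not (lcl <= x <= ucl)]
print(f"limits = ({lcl:.1f}, {ucl:.1f}), flagged points: {flagged}")
```

Here the sixth result (index 5) exceeds the upper limit and would trigger an investigation of the measurement process.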

Other problems occur in the measurement of pH in unbuffered, low ionic strength media such as wet deposition (acid rain) and natural freshwaters (see Air pollution; Groundwater monitoring) (13). In these cases, studies have demonstrated that the principal sources of the measurement errors are associated with the performance of the reference electrode liquid junction, changes in the sample pH during storage, and the nature of the standards used in calibration. Considerable care must be exercised in all aspects of the measurement process to assure the quality of the pH values on these types of samples. [Pg.466]

Two properties, in particular, make Feynman's approach superior to Benioff's: (1) it is time independent, and (2) interactions between all logical variables are strictly local. It is also interesting to note that in Feynman's approach, quantum uncertainty (in the computation) resides not in the correctness of the final answer but, effectively, in the time it takes for the computation to be completed. Peres [peres85] points out that quantum computers may be susceptible to a new kind of error: since, in order to actually obtain the result of a computation, there must at some point be a macroscopic measurement of the quantum mechanical system to convert the data stored in the wave function into useful information, any imperfection in the measurement process would lead to an imperfect data readout. Peres overcomes this difficulty by constructing an error-correcting variant of Feynman's model. He also estimates the minimum amount of entropy that must be dissipated at a given noise level and tolerated error rate. [Pg.676]

Because of our inability to analyze the interaction of microscopic QM systems and macroscopic measuring devices to a sufficient degree, we make use of a set of empirical rules known as measurement theory. Some day, measurement theory will become a proven set of theorems in QM, as the proponents of the decoherence theory, among others, claim. Until such time, it is beneficial to introduce the measurement process, and the principles associated with it, separately from the dynamics described by the Schrödinger equation. [Pg.27]

This separation will allow the students to properly assess the measurement process, which plays a special and complex role in QM that is different from its role in any classical theory. Just as Kepler's laws only cover the free-falling part of the trajectories, and the course corrections, essential as they may be, require tabulated data, so too in QM it should be made clear that the Schrödinger equation governs the dynamics of QM systems only, and measurements, for now, must be treated by separate rules. Thus the problem of inaccurate boundaries of applicability can be addressed by clearly separating the two incompatible principles governing the change of the wave function: the Schrödinger equation for smooth evolution as one, and the measurement process with the collapse of the wave function as the other. [Pg.27]

Imprecise boundaries. The basic concept of the state of a system is governed by two mutually incompatible laws, namely the Schrödinger equation for normal dynamics and the measurement process for interactions with macroscopic devices. It is not made clear where the applicability of one ends and the other begins. [Pg.30]

If the sample and standard have essentially the same matrices (e.g., air particulates or river sediments), one can go through the total measurement process with both the sample and the standard in order to (a) check the accuracy of the measurement process used (compare the concentration values obtained for the standard with the certified values) and (b) obtain some confidence about the accuracy of the concentration measurements on the unknown sample, since both have gone through the same chemical measurement process (except sample collection). It is not recommended, however, that pure standards be used to standardize the total chemical measurement process for natural matrix-type samples; chemical concentrations in the natural matrices could be seriously misread, especially since the pure PAH probably would be totally extracted in a given solvent, whereas the PAH in the matrix material probably would not be. All the parameters and matrix effects, including extraction efficiencies, are carefully checked in the certification process leading to SRMs. [Pg.119]

The inevitability of systematic and random errors in the measurement process, somewhat loosely circumscribed by drift and noise, means that xmean and sx can only be approximations to the true values. Thus the results found in the preceding section can be viewed under three different perspectives... [Pg.27]

For analytical applications it is important to realize that three distributions are involved, namely one that describes the measurement process, one that brings in the sampling error, and another that characterizes the sam-... [Pg.27]

The measurement has noise superimposed on it, so the analyst decides to repeat the measurement process several times, and to evaluate the mean and its confidence limits after every determination. (Note: this modus operandi is forbidden under GMP; the necessary number of measurements and the evaluation scheme must be laid down before the experiments are done.) The simulation is carried out according to the scheme depicted in Fig. 1.19. The computer program that corresponds to the scheme principally contains all of the simulation elements; however, some simplifications can be introduced... [Pg.41]
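The repeated-measurement simulation described above might be sketched as follows: a noisy measurement is drawn repeatedly, and after every determination the running mean and its 95% confidence limits are evaluated. The true value, noise level, and seed are illustrative assumptions, not values from the text; the t-factors are the standard two-tailed 95% Student t values.

```python
# Monte Carlo sketch of repeated measurement with running confidence limits.
import random

random.seed(1)
true_value, sigma = 100.0, 2.0  # assumed true value and noise (illustrative)

# Two-tailed Student t values for 95% CL, keyed by degrees of freedom
t95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571}

data = []
for n in range(1, 7):
    data.append(random.gauss(true_value, sigma))  # one noisy determination
    mean = sum(data) / n
    if n > 1:
        s = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
        half_width = t95[n - 1] * s / n ** 0.5    # CL(mean) = mean +/- t*s/sqrt(n)
        print(f"n={n}: mean={mean:.2f} +/- {half_width:.2f}")
```

As the text notes, deciding when to stop on the basis of these evolving confidence limits is exactly what GMP forbids; the sketch only illustrates how the interval narrows as n grows.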

Instrumental color measurements eliminate subjectivity, are more precise, take less time, and are simpler to perform. However, to evaluate instrumental results properly, the physics of the measurement processes must be considered. Three types of color measurement instruments are used for food: the monochromatic colorimeter, the tristimulus colorimeter, and the colorimetric spectrophotometer. [Pg.522]

The quantities AUMC and AUSC can be regarded as the first and second statistical moments of the plasma concentration curve. These two moments have an equivalent in descriptive statistics, where they define the mean and variance, respectively, in the case of a stochastic distribution of frequencies (Section 3.2). From the above considerations it appears that the statistical moment method strongly depends on numerical integration of the plasma concentration curve Cp(t) and its products with t and (t − MRT). Multiplication by t and (t − MRT) tends to amplify the errors in the plasma concentration Cp(t) at larger values of t. As a consequence, the estimation of the statistical moments critically depends on the precision of the measurement process that is used in the determination of the plasma concentration values. This contrasts with compartmental analysis, where the parameters of the model are estimated by means of least squares regression. [Pg.498]
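The numerical integration this paragraph refers to can be sketched with the trapezoidal rule: the zeroth moment (AUC) and first moment (AUMC) of Cp(t) are estimated from sampled data, and their ratio gives the mean residence time. The time/concentration values below are invented for illustration.

```python
# Trapezoidal-rule estimation of AUC, AUMC and MRT (sketch);
# the sampled plasma concentration data are illustrative.
times = [0.0, 1.0, 2.0, 4.0, 8.0]  # h
conc  = [0.0, 8.0, 6.0, 3.0, 1.0]  # mg/L

def trapz(y, x):
    """Trapezoidal-rule integral of y over the sample points x."""
    return sum((y[i] + y[i + 1]) / 2 * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

auc  = trapz(conc, times)                                  # zeroth moment
aumc = trapz([t * c for t, c in zip(times, conc)], times)  # first moment
mrt  = aumc / auc                                          # mean residence time
print(f"AUC={auc:.1f}, AUMC={aumc:.1f}, MRT={mrt:.2f}")
```

The multiplication by t in the AUMC integrand is exactly where late, low-concentration points get amplified, which is why the text stresses the precision of the measurement process at large t.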

The alternative to compartmental analysis is statistical moment analysis. We have already indicated that the results of this approach strongly depend on the accuracy of the measurement process, especially for the estimation of the higher order moments. In view of the limitations of both methods, compartmental and statistical, it is recommended that both approaches be applied in parallel, whenever possible. Each method may contribute information that is not provided by the other. The result of compartmental analysis may fit closely to the data using a model that is inadequate [12]. Statistical moment theory may provide a model which is closer to reality, although being less accurate. The latter point has been made in paradigmatic form by Thom [13] and is represented in Fig. 39.16. [Pg.501]

The three historical approaches to certification mentioned above were recently expanded to identify seven modes that are used at NIST for value assignment for chemical composition (May et al. 2000). These seven modes and the resulting values are summarized in Table 3.13. The basic principles of value assignment remain unchanged; however, these modes now provide a well-defined link between the process used for value assignment and the definition of the assigned value (i.e. certified, reference, or information value). The terms described above provide a clear indication of the level of confidence that NIST has in the accuracy of the assigned value. The definition of a certified value implies that NIST must be involved in the measurement process for the value to be classified as a NIST certified value (see modes 1-3 in Table 3.13). Thus, modes 4 and 7, which do not involve NIST measure-... [Pg.89]

Since the early 1970s there has been a growing belief that chemical measurements must not only be done correctly, but that data, the product of the measurement process, must be seen to be accurate, precise, and reliable. Analytical data have become another manufactured product, and like all manufactured products, the customers demand that Quality Assurance (QA) must be built in. [Pg.236]

To the users of CRMs, the concept of "traceability" is very closely related to the statistical considerations in the measurement process and the quality of the measurements in the user's laboratory. Traceability is defined in the international vocabulary on metrology (VIM) as... [Pg.249]

Our approach to determine the properties of heterogeneous media utilizes mathematical models of the measurement process and, as appropriate, the flow process itself. To determine the desired properties, we solve an associated system and parameter identification problem (also termed an inverse problem) to estimate the properties from the measured data. [Pg.359]
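The system-and-parameter-identification (inverse) problem described above can be illustrated in miniature: given a mathematical model of the measurement process and measured data, the unknown property is estimated by minimizing the sum of squared residuals. The exponential-decay model, parameter values, and crude grid search below are all illustrative assumptions, not the authors' actual method.

```python
# Minimal inverse-problem sketch: estimate a decay constant k by least
# squares from (noise-free, for brevity) synthetic measurements.
import math

y0, k_true = 10.0, 0.5  # assumed model parameters (illustrative)
times = [0.0, 1.0, 2.0, 3.0, 4.0]
data = [y0 * math.exp(-k_true * t) for t in times]  # "measured" data

def sse(k):
    """Sum of squared residuals between model prediction and data."""
    return sum((y0 * math.exp(-k * t) - y) ** 2 for t, y in zip(times, data))

# Crude grid search over candidate k values; a real solver would use
# Gauss-Newton, Levenberg-Marquardt, or similar.
k_hat = min((round(0.01 * i, 2) for i in range(1, 101)), key=sse)
print(f"estimated k = {k_hat}")
```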

Not only components from the set A,...,N can interfere with the analytes, but also additional species which may be partly unknown or may be formed during the measuring process. Such situations occur especially in ICP-MS, where the signal of an isotope Ai may be interfered with by isotopes of other elements, Bj, Ck, etc., and additionally by molecule ions formed in the plasma (e.g., argon compound ions and ions formed from solvent constituents). [Pg.217]

The use of external chemical standards is suitable for many applications. Ideally, chemical standards should be matrix-matched with samples to ensure that they respond to the measurement process in the same way as the samples. In some cases, a sample preparation and measurement process has inherent faults... [Pg.111]

Internal standardization involves adding a chemical standard to the sample solution so that standard and sample are effectively measured at the same time. Internal chemical standards can be either the actual analyte, an isotopically labelled analyte or a related substance. The last one is usually chosen as something expected to be absent from the sample yet expected to behave towards the measurement process in a way similar to the analyte. There are a number of different ways of using internal standards and they sometimes serve a different purpose. [Pg.112]

As mentioned in the last section, when a related substance is added to both the chemical standards and to the samples, problems with variations in injection volumes are removed. There is another use for internal standards of this kind, i.e. where the standard acts as an internal calibrant. The internal standard has to behave in the same way as the sample in relation to the measurement process, except that the signals can be distinguished from each other. When the related substance is added early on in the measurement process, any losses of analyte as a result of the measurement process are equally likely to affect the chemical standard and the analyte. Thus, no adjustment to the result to compensate, e.g. for poor recovery, is necessary. The concentration of the sample is obtained from the ratio of the two signals (one from the standard and one from the sample). [Pg.112]
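The ratio calculation described above can be sketched as follows: a relative response factor is first determined from a calibration standard of known concentration measured alongside the internal standard, and the sample concentration is then obtained from its analyte/internal-standard signal ratio. All signal values and concentrations are invented for illustration.

```python
# Internal-standard quantitation sketch; all numbers are illustrative.

# Calibration: analyte standard of known concentration measured together
# with the internal standard.
c_std, sig_std, sig_is_cal = 5.0, 1200.0, 1000.0
rrf = (sig_std / sig_is_cal) / c_std  # relative response factor

# Sample: the same amount of internal standard is added early on, so any
# losses during work-up affect both signals equally and cancel in the ratio.
sig_sample, sig_is_sample = 900.0, 950.0
c_sample = (sig_sample / sig_is_sample) / rrf
print(f"sample concentration = {c_sample:.2f}")
```

Because only the ratio enters the calculation, a recovery of, say, 80% in the sample work-up leaves the result unchanged, which is the point made in the text.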

Precision is the closeness of agreement between independent test results obtained under stipulated conditions. The precision tells us by how much we can expect the results of repeated measurements to vary. The precision of a set of measurement results will depend on the magnitude of the random errors affecting the measurement process. Precision is normally expressed as a standard deviation or relative standard deviation (see Section 6.1.3). [Pg.159]
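The definition above translates directly into a short calculation: the sample standard deviation of a set of replicate results, and the relative standard deviation as a percentage of the mean. The replicate values are invented for illustration.

```python
# Precision of replicate measurements as s and RSD (sketch);
# the replicate values are illustrative.
import statistics

replicates = [10.2, 10.4, 10.1, 10.3, 10.5]

mean = statistics.mean(replicates)
s = statistics.stdev(replicates)  # sample standard deviation (n - 1 divisor)
rsd_percent = 100 * s / mean      # relative standard deviation

print(f"mean={mean:.2f}, s={s:.3f}, RSD={rsd_percent:.2f}%")
```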

A second use for uncertainty values lies in their potential for helping us to improve our experimental procedures. In calculating the uncertainty for a measurement, we will have assembled a list of standard uncertainties for the variables of the measurement model. If we wish to improve the quality of our measurement, we must look first at the component of the measurement system contributing the largest uncertainty. If this is the dominant contribution to the combined uncertainty, then any attempt to improve other aspects of the measurement process will be a waste of time. By attempting to reduce the size of the dominant uncertainty first, we will produce the greatest return for our effort. [Pg.176]





© 2024 chempedia.info