
Experimental analytical precision

The experimental analytical precision (EAP) is represented by the standard deviation, expressed in concentration units, of 10 replicate measurements of the net intensity of the analyte in the same analytical context, at the 95.4% confidence level. ... [Pg.74]
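As a concrete illustration, the sketch below computes such an EAP from 10 replicate net-intensity readings converted to concentration; the intensities and the calibration sensitivity are hypothetical, and interpreting the 95.4% confidence level as ±2s is an assumption made here, not a statement from the cited source.

```python
import numpy as np

# Hypothetical replicate net intensities (counts) for the same analyte,
# measured 10 times under identical analytical conditions.
net_intensity = np.array([1520, 1495, 1510, 1532, 1488,
                          1505, 1517, 1499, 1524, 1511])

sensitivity = 305.0  # counts per (mg/L); assumed calibration slope

concentration = net_intensity / sensitivity   # convert to concentration units
s = concentration.std(ddof=1)                 # standard deviation of the 10 replicates
eap_95_4 = 2 * s                              # 95.4% level taken as +/- 2s

print(f"s = {s:.4f} mg/L, EAP (95.4%) = ±{eap_95_4:.4f} mg/L")
```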

Precision. Precision is generally limited by the uncertainty in measuring the limiting or peak current. Under most experimental conditions, precisions of ±1-3% can reasonably be expected. One exception is the analysis of ultratrace analytes in complex matrices by stripping voltammetry, for which precisions as poor as ±25% are possible. [Pg.531]

The second source of partitioning data is experimental equilibration of crystals and liquids followed by microbeam analysis of quenched run products. Starting materials can be natural rocks or synthetic analogues. In either case it is customary to dope the starting material with the U-series element(s) of interest in order to enhance analytical precision. Of course, doping levels should not be so high as to trigger trace-phase saturation (e.g. ... [Pg.62]

When the sample contains interferents, their influence on the analyte in the sample is progressively diluted and usually changes. As a consequence, the apparent concentrations also change, approaching a certain concentration value nonlinearly; this value is taken as the final analytical result (Fig. 3.8b). It is calculated by extrapolating the nonlinear function fitted to the experimental points. Such a procedure does not favor good analytical precision and accuracy, which is the most serious drawback of the method. On the other hand, if the interference effect is diminished in the course of sample dilution (which is often the case), the analytical result has a chance to be free of this effect and to be accurate without any additional effort. [Pg.35]

Treatment of Raw Data. The method of Lagrange multipliers was used to normalize the experimental weight, sulfur, nickel, and vanadium data. A best fit of the experimental data subject to the constraints of 100% recoveries was obtained, using weighting factors determined by the analytical precision of each measurement. [Pg.142]
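A minimal sketch of how such a constrained normalization can be set up is given below; it handles a single 100% closure constraint with precision-based weights and a closed-form Lagrange multiplier, whereas the cited work adjusted weight, sulfur, nickel, and vanadium data together. The function name and all numbers are hypothetical.

```python
import numpy as np

def normalize_to_100(x, sigma):
    """Adjust measured recoveries x (wt%) so they sum to exactly 100,
    weighting each value by its analytical precision (w_i = 1/sigma_i^2).
    Minimizes sum w_i*(x_adj - x)^2 subject to sum(x_adj) = 100,
    solved with a single Lagrange multiplier (closed form)."""
    x = np.asarray(x, float)
    w = 1.0 / np.asarray(sigma, float) ** 2
    lam = 2.0 * (x.sum() - 100.0) / np.sum(1.0 / w)
    return x - lam / (2.0 * w)

# Hypothetical fraction recoveries (wt%) and their analytical standard deviations
measured = [38.2, 30.9, 19.6, 12.1]   # sums to 100.8
sigmas   = [0.4, 0.3, 0.2, 0.5]

adjusted = normalize_to_100(measured, sigmas)
print(adjusted, adjusted.sum())        # adjusted values now sum to 100.0
```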

Mass Balance Considerations. The values of ER for the Fischer assay spent shale are contained in Table V. If it is assumed that the relative standard deviation in the analyses is 10%, then the relative probable error in ER would be 14% if the analytical errors were indeterminate and 20% if the errors were determinate (38). The mass ratio of OS-1/FS is 1.24 as derived from the assay data in Table I. It is not possible to conclude that any trace elements are mobilized from the solid material during the assay retorting. The ER results obtained for arsenic, selenium, and molybdenum indicate the importance of analytical precision in detecting any trace element mobilization during oil shale retorting. The values of RI contained in Table V show a similar dependence on analytical precision. The probable errors in these values are also between 14 and 20% if the relative standard deviation in the analytical results is assumed to be 10%. These results indicate that, within experimental error, none of the trace elements have been lost during Fischer assay. More definitive conclusions on whether elements are mobilized or lost can only be reached with more precise analytical... [Pg.207]
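The quoted 14% and 20% figures follow from standard error propagation if ER is taken as a ratio (or product) of two results, each with a 10% relative standard deviation; that structure is an assumption made for this illustration. Random (indeterminate) errors combine in quadrature, while systematic (determinate) errors are added linearly in the worst case.

```python
import math

rsd = 0.10  # assumed relative standard deviation of each analysis

# ER treated here as a ratio of two analytical results, each with 10% RSD.
indeterminate = math.hypot(rsd, rsd)   # sqrt(0.10**2 + 0.10**2) ≈ 0.14
determinate   = rsd + rsd              # worst-case linear addition = 0.20

print(f"random errors: {indeterminate:.0%}, systematic errors: {determinate:.0%}")
```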

Choi et al. [1] examined the apparent slip effects of water in hydrophobic and hydrophilic microchannels experimentally using precision measurements of flow rate versus pressure drop. They correlated their experimental results with those from the analytical solution for flow through a channel with slip velocity at the wall. There was a clear difference between the flows of water over a hydrophilic and a hydrophobic surface, indicating the effect of slip flow (Fig. 2). Neto et al. [3] have reported clear evidence of boundary slip for a sphere-flat geometry from force measurements using atomic force microscopy. The degree of slip is observed to be a function of both liquid viscosity and shear rate (Fig. 4). [Pg.202]
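For orientation, the textbook plane-Poiseuille result with a Navier slip length b on both walls of a channel of gap h gives a flow-rate enhancement of 1 + 6b/h; this assumed parallel-plate geometry is only a sketch and not necessarily the exact cross-section analysed by Choi et al.

```python
def slip_flow_enhancement(slip_length, gap):
    """Flow-rate enhancement for pressure-driven flow between parallel
    plates of gap h with Navier slip length b on both walls:
        Q_slip / Q_no_slip = 1 + 6*b/h
    (textbook plane-Poiseuille result; assumed geometry)."""
    return 1.0 + 6.0 * slip_length / gap

# Example: a 30 nm slip length in a 2 µm deep hydrophobic channel
print(slip_flow_enhancement(30e-9, 2e-6))   # ≈ 1.09, i.e. ~9% more flow
```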

Ab initio and DFT computations of both relative energies and NMR parameters (δ, J) are increasingly used to differentiate imine-amine tautomers. Because NMR parameters can nowadays be computed with analytical precision, they can simply be employed (i) to assign preferred tautomers (by comparison with experimental δ and J values) and (ii) to estimate fast tautomeric equilibria from computed tautomeric structures whose NMR parameters are not experimentally available because the exchange rates are faster than the NMR timescale. It could also be concluded, from practically identical experimental and DFT-computed N chemical shifts of the enol form of a number of 2-heteroaryl-substituted quinoxalines 30 (Scheme 5.23), that the latter, 30(enol), is the predominant tautomer [61]. [Pg.119]

If improvement in precision is claimed for a set of measurements, the variance for the set against which comparison is being made should be placed in the numerator, regardless of magnitude. An experimental F smaller than unity indicates that the claim for improved precision cannot be supported. The technique just given for examining whether the precision differs between the two analytical procedures also serves to compare the precision obtained with different materials, or with different operators, laboratories, or sets of equipment. [Pg.204]
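A short sketch of this one-sided F-test is given below, with hypothetical replicate data: the variance of the comparison set goes in the numerator, F < 1 rejects the claim outright, and otherwise F is compared with the tabulated critical value.

```python
import numpy as np
from scipy import stats

old_method = np.array([10.2, 10.8, 9.7, 10.5, 10.1, 10.6])   # comparison set (hypothetical)
new_method = np.array([10.3, 10.4, 10.2, 10.5, 10.3, 10.4])  # set claimed to be more precise

s2_old = old_method.var(ddof=1)   # variance of the comparison set -> numerator
s2_new = new_method.var(ddof=1)   # variance of the set claimed to be improved

F = s2_old / s2_new
F_crit = stats.f.ppf(0.95, dfn=len(old_method) - 1, dfd=len(new_method) - 1)

if F < 1:
    print("Claim of improved precision cannot be supported (F < 1).")
elif F > F_crit:
    print(f"F = {F:.2f} > F_crit = {F_crit:.2f}: improvement is significant at 95%.")
else:
    print(f"F = {F:.2f} <= F_crit = {F_crit:.2f}: improvement not established.")
```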

The raw data collected during the experiment are then analyzed. Frequently the data must be reduced or transformed to a more readily analyzable form. A statistical treatment of the data is used to evaluate the accuracy and precision of the analysis and to validate the procedure. These results are compared with the criteria established during the design of the experiment, and then the design is reconsidered, additional experimental trials are run, or a solution to the problem is proposed. When a solution is proposed, the results are subject to an external evaluation that may result in a new problem and the beginning of a new analytical cycle. [Pg.6]

The standard deviation s is the square root of the variance; graphically, it is the horizontal distance from the mean to the point of inflection of the distribution curve. The standard deviation is thus an experimental measure of precision: the larger s is, the flatter the distribution curve, the greater the range of replicate analytical results, and the less precise the method. In Figure 10-1, Method 1 is less precise but more nearly accurate than Method 2. In general, one hopes that μ and x̄ will coincide and that s will be small, but this happy state of affairs need not exist. [Pg.269]
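The sketch below (hypothetical replicates) computes s and confirms numerically that the inflection points of the corresponding normal curve lie one standard deviation either side of the mean.

```python
import numpy as np
from scipy.stats import norm

replicates = np.array([20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7])  # hypothetical
mean, s = replicates.mean(), replicates.std(ddof=1)

# The normal curve N(mean, s) has its inflection points at mean ± s:
# the second derivative of the pdf changes sign there.
x = np.linspace(mean - 3 * s, mean + 3 * s, 2001)
pdf = norm.pdf(x, mean, s)
second_deriv = np.gradient(np.gradient(pdf, x), x)
inflections = x[:-1][np.diff(np.sign(second_deriv)) != 0]

print(f"mean = {mean:.3f}, s = {s:.3f}")
print("numerical inflection points:", inflections)   # ≈ mean - s and mean + s
```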

No informative experimental data have been obtained on the precise shape of segment profiles of tethered chains. The only independent tests have come from computer simulations [26], which agree very well with the predictions of SCF theory. Analytical SCF theory has proven difficult to apply to non-flat geometries [141], and full SCF theory in non-Cartesian geometry has been applied only to relatively short chains [142], so that more detailed profile information on these important, nonplanar situations awaits further developments. [Pg.62]

Analytical solutions for the equations of motion are not possible because of the difficulty of specifying the flow pattern and of defining the precise nature of the interaction between the phases. Rapid fluctuations in flow frequently occur and these cannot readily be taken into account. For these reasons, it is necessary for design purposes to use correlations which have been obtained using experimental data. Great care should be taken, however, if these are used outside the limits used in the experimental work. [Pg.188]

Bob is particularly concerned that, although analytical chemistry forms a major part of the UK chemical industry's efforts, it is still not considered by many to be a subject worthy of special consideration. Consequently, experimental design is often not employed when it should be, and safeguards to ensure the accuracy and precision of analytical measurements are often lacking. He would argue that although the terms accuracy and precision can be defined by rote, their meanings, when applied to analytical measurements, are not appreciated by many members of the scientific community. [Pg.18]

In many analyses, the compound(s) of interest are found as part of a complex mixture and the role of the chromatographic technique is to provide separation of the components of that mixture to allow their identification or quantitative determination. From a qualitative perspective, the main limitation of chromatography in isolation is its inability to provide an unequivocal identification of the components of a mixture even if they can be completely separated from each other. Identification is based on the comparison of the retention characteristics, simplistically the retention time, of an unknown with those of reference materials determined under identical experimental conditions. There are, however, so many compounds in existence that even if the retention characteristics of an unknown and a reference material are, within the limits of experimental error, identical, the analyst cannot say with absolute certainty that the two compounds are the same. Despite a range of chromatographic conditions being available to the analyst, it is not always possible to effect complete separation of all of the components of a mixture and this may prevent the precise and accurate quantitative determination of the analyte(s) of interest. [Pg.20]

It is impossible to comment in this case upon the accuracy which relates to the closeness of the experimentally determined value to the true value. What has been determined is the amount of analyte present in the sample introduced into the chromatograph and the results from replicate determinations will give an indication of the precision of the methodology. At each stage of the procedure outlined above, there is the possibility of loss of sample and no attempt has been made to assess the magnitude of any of these losses. [Pg.46]

When an analytical method is being developed, the ultimate requirement is to be able to determine the analyte(s) of interest with adequate accuracy and precision at the appropriate levels. There are many examples in the literature of such methodology being developed without the need for complex experimental design, simply by varying individual factors thought to affect the experimental outcome until the best performance is obtained. This simple approach assumes that the optimum value of any factor remains the same however the other factors are varied, i.e. that there is no interaction between factors; the analyst must be aware that this fundamental assumption is not always valid. [Pg.189]
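A two-level factorial design makes such an interaction explicit. The sketch below uses a hypothetical 2² design (factors and responses invented for illustration); the large AB term shows why a one-factor-at-a-time search could settle on a false optimum.

```python
import numpy as np

# Hypothetical responses of a 2^2 factorial design in coded units:
# factor A (e.g. pH) and factor B (e.g. flow rate) at low (-1) and high (+1) levels.
#                 A   B   response
runs = np.array([[-1, -1, 72.0],
                 [+1, -1, 85.0],
                 [-1, +1, 78.0],
                 [+1, +1, 75.0]])

A, B, y = runs[:, 0], runs[:, 1], runs[:, 2]

main_A      = np.mean(y[A == +1]) - np.mean(y[A == -1])
main_B      = np.mean(y[B == +1]) - np.mean(y[B == -1])
interaction = np.mean(y[A * B == +1]) - np.mean(y[A * B == -1])

print(f"A effect: {main_A:+.1f}, B effect: {main_B:+.1f}, AB interaction: {interaction:+.1f}")
# A large AB term means the optimum of A shifts when B changes, so a
# one-factor-at-a-time search can miss the true optimum.
```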

The fact that APCI and electrospray are soft ionization techniques is often advantageous because the molecular ion alone, in conjunction with HPLC separation, often provides adequate selectivity and sensitivity to allow an analytical method to be developed. Again, method development is important, particularly when more than one analyte is to be determined, since the effect of experimental parameters, such as pH, flow rate, etc., is not likely to be the same for each. Electrospray, in particular, is susceptible to matrix effects and the method of standard additions is often required to provide adequate accuracy and precision. [Pg.290]
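For reference, a minimal standard-additions calculation is sketched below with invented spike levels and signals; the analyte concentration is obtained from the x-intercept of the regression line, and dilution by the spikes is neglected for simplicity.

```python
import numpy as np

# Hypothetical standard-additions data: equal sample aliquots spiked with
# increasing amounts of analyte (mg/L added) and measured by LC-ESI-MS.
added  = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
signal = np.array([1250.0, 2230.0, 3190.0, 4180.0, 5150.0])

slope, intercept = np.polyfit(added, signal, 1)

# Extrapolating the line to zero signal gives -C_sample on the x axis,
# so the analyte concentration in the spiked sample solution is:
c_sample = intercept / slope
print(f"sample concentration ≈ {c_sample:.2f} mg/L (before any dilution correction)")
```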

The term definitive method is applied to an analytical or measurement method that has a valid and well-described theoretical foundation, is based on sound theoretical principles ("first principles"), and has been experimentally demonstrated to have negligible systematic errors and a high level of precision. While a technique may be conceptually definitive, a complete method based on such a technique must be properly applied and must be demonstrated to deserve such a status for each individual application. A definitive method is one in which all major significant parameters have been related by a direct chain of evidence to the base or derived SI units. The property in question is either directly measured in terms of base units of... [Pg.52]

As micro-analytical techniques (performing direct analysis on a <10 mg sample mass) have a particularly distinct demand for very homogeneous CRMs, it becomes necessary to provide element-specific homogeneity information in the CRM certificates. The distribution of elements in a material can be evaluated experimentally by repetitive analysis. The scattering of results from a method with known intrinsic precision is related to the mass of sample consumed for individual analysis. The... [Pg.137]
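One way to turn such repetitive analyses into element-specific homogeneity information is sketched below: the known intrinsic precision of the method is subtracted (as a variance) from the observed scatter, and the remaining inhomogeneity RSD is related to sample mass through an Ingamells-type sampling constant. The 1/√m scaling and all numbers are assumptions for illustration, not values taken from the cited certification work.

```python
import math

def inhomogeneity_rsd(s_total, s_method, mean):
    """Estimate the RSD contribution of material inhomogeneity by removing
    the known intrinsic precision of the method (variances subtract)."""
    var = s_total**2 - s_method**2
    return math.sqrt(max(var, 0.0)) / mean * 100.0

# Hypothetical repetitive micro-analyses on 5 mg test portions
mean_result = 12.4   # mg/kg
s_total     = 0.55   # observed scatter of replicate results
s_method    = 0.30   # intrinsic precision of the method (known)

R  = inhomogeneity_rsd(s_total, s_method, mean_result)  # % RSD from inhomogeneity
Ks = R**2 * 5.0      # Ingamells-type sampling constant, (% RSD)^2 * mg,
                     # assuming the inhomogeneity RSD scales as 1/sqrt(sample mass)

print(f"inhomogeneity RSD ≈ {R:.1f}% at 5 mg; Ks ≈ {Ks:.0f} mg·%²")
print(f"sample mass for a 1% sampling RSD ≈ {Ks / 1.0**2:.0f} mg")
```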

The two principal experimental apparatuses used to determine the density of a liquid are the pycnometer and the vibrating tube densimeter. The pycnometer method involves measuring the mass of a liquid in a vessel of known volume. The volume of the pycnometer, either at the temperature of measurement or at some reference temperature, is determined using a density standard, usually water or mercury. Using considerable care and a precision analytical balance accurate to 10⁻⁵ g, it is possible to achieve densities accurate to a few parts in 10⁵ with a pycnometer having a volume of 25 cm³ to 50 cm³. [Pg.8]
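A minimal pycnometer calculation is sketched below with invented masses: the vessel volume is obtained from the mass of water it holds and a reference water density, and the comment indicates how a 10⁻⁵ g balance uncertainty propagates into the density.

```python
# Minimal pycnometer density calculation (hypothetical masses).
# The vessel volume is calibrated with water of known density at 25 °C.

m_empty       = 31.20452    # g, dry pycnometer
m_with_water  = 56.17310    # g, filled with water at 25 °C
m_with_sample = 53.28977    # g, filled with the sample liquid at 25 °C
rho_water_25C = 0.9970470   # g/cm^3, reference density of water at 25 °C

volume = (m_with_water - m_empty) / rho_water_25C   # cm^3
rho_sample = (m_with_sample - m_empty) / volume     # g/cm^3

print(f"V = {volume:.5f} cm^3, rho = {rho_sample:.6f} g/cm^3")
# A 1e-5 g balance uncertainty on a ~25 cm^3 fill contributes only about
# 4e-7 g/cm^3 to the density, so the weighing itself is rarely the factor
# that limits the attainable few parts in 1e5.
```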

Because calibration is the prerequisite of reliable evaluations and, therefore, of analytical results which are both accurate and precise, calibration itself has to be carried out in a very reliable way. For this reason, the following experimental and fundamental conditions have to be realized ... [Pg.151]

Precision. The precision of the calibration is characterized by the confidence interval cnf(ŷi) of the estimated y values at position xi, according to Eq. (6.30). In contrast, the precision of analysis is expressed by the prediction intervals prd(yi) and prd(xi), respectively, according to Eqs. (6.32) and (6.33). The precision of analytical results on the basis of experimental calibration is closely related to the adequacy of the calibration model. [Pg.168]
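Since Eqs. (6.30)-(6.33) are not reproduced in this excerpt, the sketch below uses the standard textbook expression for the prediction interval of a concentration read back from a straight-line calibration (hypothetical data, m replicate readings of the unknown); it is intended only to illustrate the distinction between calibration confidence and analytical prediction intervals, not the source's exact equations.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data (concentration x, instrument response y)
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.05, 1.02, 2.07, 3.01, 4.10, 5.04])

n = len(x)
b, a = np.polyfit(x, y, 1)                            # slope, intercept
s_yx = np.sqrt(np.sum((y - (a + b * x))**2) / (n - 2))  # residual standard deviation

# Prediction interval for an unknown analysed with m replicate readings
y0, m = 2.55, 3
x0 = (y0 - a) / b
s_x0 = (s_yx / b) * np.sqrt(1/m + 1/n +
                            (y0 - y.mean())**2 / (b**2 * np.sum((x - x.mean())**2)))

t = stats.t.ppf(0.975, n - 2)
print(f"x0 = {x0:.3f} ± {t * s_x0:.3f} (95% prediction interval)")
```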

Although the condensation of phenol with formaldehyde has been known for more than 100 years, it is only recently that the reaction could be studied in detail. Recent developments in analytical instrumentation like GC, GPC, HPLC, IR spectroscopy and NMR spectroscopy have made it possible for the intermediates involved in such reactions to be characterized and determined (1-6). In addition, high speed computers can now be used to simulate the complicated multi-component, multi-path kinetic schemes involved in phenol-formaldehyde reactions (6-27) and optimization routines can be used in conjunction with computer-based models for phenol-formaldehyde reactions to estimate, from experimental data, reaction rates for the various processes involved. The combined use of precise analytical data and of computer-based techniques to analyze such data has been very fruitful. [Pg.288]

A major advantage of the simple model described in this paper lies in its potential applicability to the direct evaluation of experimental data. Unfortunately, it is clear from the form of the typical isotherms, especially those for high polymers (large n) that, even with a simple model, this presents considerable difficulty. The problems can be seen clearly by consideration of some typical polymer adsorption data. Experimental isotherms for the adsorption of commercial polymer flocculants on a kaolin clay are shown in Figure 4. These data were obtained, in the usual way, by determination of residual polymer concentrations after equilibration with the solid. In general, such methods are limited at both extremes of the concentration scale. Serious errors arise at low concentration due to loss in precision of the analytical technique and at high concentration because the amount adsorbed is determined by the difference between two large numbers. [Pg.32]
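The loss of precision at high concentration can be made explicit with a short depletion calculation (hypothetical numbers): the adsorbed amount is the difference of two concentrations scaled by V/m, so the same 2% analytical RSD that is harmless at low concentration overwhelms the result when the depletion is a small fraction of a large initial concentration.

```python
import numpy as np

def adsorbed_amount(c0, ce, volume_L, mass_g, s_c):
    """Adsorbed amount by depletion, Gamma = (C0 - Ce) * V / m (mg/g),
    and its standard uncertainty assuming the same analytical standard
    deviation s_c (mg/L) for both concentration measurements."""
    gamma = (c0 - ce) * volume_L / mass_g
    s_gamma = np.sqrt(2) * s_c * volume_L / mass_g
    return gamma, s_gamma

# Same 2% analytical RSD, low vs high initial concentration (hypothetical)
for c0, ce in [(20.0, 8.0), (500.0, 488.0)]:
    s_c = 0.02 * c0
    g, sg = adsorbed_amount(c0, ce, volume_L=0.05, mass_g=1.0, s_c=s_c)
    print(f"C0 = {c0:6.1f} mg/L: Gamma = {g:.2f} ± {sg:.2f} mg/g "
          f"({100 * sg / g:.0f}% relative)")
```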

