
Quantitative error

X-Ray Photoelectron Spectrometry. X-ray photoelectron spectrometry (XPS) has been applied to analysis of the surface composition of polymer-stabilized metal nanoparticles, as mentioned in the previous section, and the same holds for bimetallic nanoparticles. In addition, XPS data can support the structural models proposed by EXAFS, which often carry considerably wide error margins. Quantitative XPS analysis is carried out using an intensity factor for each element. Because XPS measures photoelectrons emitted upon x-ray irradiation, elements located near the surface are detected preferentially. Quantitative analysis data for PVP-stabilized bimetallic nanoparticles at a 1/1 (mol/mol) ratio are collected in Table 9.1.1. For example, the composition of Pd and Pt near the surface of PVP-stabilized Pd/Pt bimetallic nanoparticles is calculated by XPS to be Pd/Pt = 2.06/1 (mol/mol), as shown in Table 9.1.1, while the metal composition charged for the preparation is 1/1. Pd is thus detected preferentially, indicating a Pd shell and supporting the Pt-core/Pd-shell structure. A similar consideration leads to Au-core/Pd-shell and Au-core/Pt-shell structures for PVP-stabilized Au/Pd and Au/Pt bimetallic nanoparticles, respectively (53). [Pg.447]
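
As a minimal sketch of such an intensity-factor calculation: atomic ratios follow from peak areas divided by relative sensitivity factors, n_A/n_B = (I_A/S_A)/(I_B/S_B). The peak areas and sensitivity factors below are invented placeholders chosen to land near the tabulated 2.06/1 ratio, not values from the study.

```python
# Quantitative XPS analysis: atomic ratios from peak areas divided by
# relative sensitivity factors (RSF). All values are illustrative only.

def atomic_ratio(area_a, rsf_a, area_b, rsf_b):
    """Return the atomic ratio n_A/n_B = (I_A/S_A) / (I_B/S_B)."""
    return (area_a / rsf_a) / (area_b / rsf_b)

# Hypothetical peak areas (arbitrary units) and sensitivity factors
# for the Pd 3d and Pt 4f regions.
area_pd, rsf_pd = 9270.0, 4.6   # Pd 3d
area_pt, rsf_pt = 5400.0, 5.5   # Pt 4f

ratio = atomic_ratio(area_pd, rsf_pd, area_pt, rsf_pt)
print(f"Surface Pd/Pt = {ratio:.2f}/1 (mol/mol)")
# A ratio well above the 1/1 bulk charge indicates Pd enrichment at the
# surface, consistent with a Pt-core/Pd-shell structure.
```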

Comparing the results obtained by the WKB method with the exact solutions for the planar and spherical surfaces, we find quantitative agreement, within 2% error, in the planar case. For a sphere, we find the same asymptotic dependence of the critical adsorption behavior over a wide range of geometries. The main advantage of the WKB method is a unified approach to the various geometries based on the same level of approximation; it can be applied at the same level of complexity to virtually any shape of the polyelectrolyte-surface adsorption potential. Recent advances in polyelectrolyte adsorption under confinement [49,167] and adsorption onto low-dielectric interfaces [50] have also been presented. [Pg.27]

The first of these channels determines the LMA quantitatively, and the second detects LFs qualitatively. The sensitivity limit of the LF channel depends strongly on the rope type and its condition, because LFs are detected as signal pulses rising above a noise level. That level is lower for new ropes (especially locked-coil ropes) than for used multi-strand ropes (especially corroded ones). Even when a skilled and experienced operator interprets a record, possible errors cannot be excluded completely, because the evaluation is subjective; moreover, interpretation takes considerable time. Some flaw-detector producers recognize this problem and intend to develop new instruments based on computer data processing [6]. [Pg.335]
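
Since LFs register as pulses exceeding the noise level, the core detection step can be sketched as a threshold test. The synthetic signal, the robust noise estimate, and the threshold multiplier below are all assumptions for illustration, not the algorithm of any particular instrument.

```python
import numpy as np

# Minimal sketch of local-fault (LF) detection as threshold crossing:
# flag samples whose magnitude exceeds k times the estimated noise level.
# The signal here is synthetic; a real trace would come from the magnetic
# flaw-detector channel.

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 2000)          # background noise
signal[700] += 8.0                           # injected "broken wire" pulse
signal[1500] += 6.5                          # second injected pulse

noise_level = np.median(np.abs(signal)) / 0.6745   # robust sigma estimate
k = 5.0                                            # threshold multiplier
lf_indices = np.flatnonzero(np.abs(signal) > k * noise_level)
print("LF candidates at samples:", lf_indices)
```

A lower noise level, as for new locked-coil ropes, permits a lower absolute threshold and hence detection of smaller faults, which is the trade-off noted above.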

Several modes of measurement are used in the tomographic system. The most rapid one serves as an estimation mode, its estimation error being a factor of 1.5 to 2 higher than that of the conventional mode. With the estimation mode, defective sections can be detected rapidly and then investigated quantitatively in detail using the other modes... [Pg.600]

Computational issues pertinent to MD simulations are the time complexity of the force calculations and the accuracy of the particle trajectories, along with other necessary quantitative measures. These two issues challenge computational scientists in several ways. MD simulations run over long time periods, and numerical integration techniques involve discretization errors and stability restrictions which, if not kept in check, may corrupt the numerical solutions to the point where they are meaningless and no useful inferences can be drawn from them. Strategies such as globally stable numerical integrators and multiple-time-step implementations have been used in this respect (see [27, 31]). [Pg.484]
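
A minimal sketch of the stability issue, using the velocity Verlet integrator (a standard MD scheme) on a unit-mass harmonic oscillator; this is a generic illustration, not the multiple-time-step methods of [27, 31]. For this problem the scheme is stable only for omega*dt < 2, so the largest step below destroys the trajectory while the smaller ones conserve energy.

```python
def velocity_verlet(x, v, dt, nsteps, omega=1.0):
    """Integrate dx/dt = v, dv/dt = -omega**2 * x with velocity Verlet."""
    a = -omega**2 * x
    for _ in range(nsteps):
        x += v * dt + 0.5 * a * dt**2        # position update
        a_new = -omega**2 * x                # force at the new position
        v += 0.5 * (a + a_new) * dt          # velocity update
        a = a_new
    return x, v

def energy(x, v, omega=1.0):
    """Total energy of the unit-mass harmonic oscillator."""
    return 0.5 * v**2 + 0.5 * omega**2 * x**2

# Stability limit for this scheme on a harmonic oscillator: omega*dt < 2.
for dt in (0.01, 0.5, 2.1):
    x, v = velocity_verlet(x=1.0, v=0.0, dt=dt, nsteps=200)
    print(f"dt={dt:5}: energy after 200 steps = {energy(x, v):.4g} (exact 0.5)")
```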

The greatest quantitative errors in semi-micro work arise in connection with the measurement of liquids. For this reason the use of microburettes and graduated dropping-tubes is essential (cf. pp. 59-60). [Pg.70]

The comparison of flow conductivity coefficients obtained from Equation (5.76) with their counterparts, found assuming flat boundary surfaces in a thin-layer flow, provides a quantitative estimate for the error involved in ignoring the curvature of the layer. For highly viscous flows, the derived pressure potential equation should be solved in conjunction with an energy equation, obtained using an asymptotic expansion similar to the outlined procedure. This derivation is routine and to avoid repetition is not given here. [Pg.182]

The simplest approximation to the complete problem is one based only on the electron density, called a local density approximation (LDA). For high-spin systems, this is called the local spin density approximation (LSDA). LDA calculations have been widely used for band structure calculations. Their performance is less impressive for molecular calculations, where both qualitative and quantitative errors are encountered. For example, bonds tend to be too short and too strong. In recent years, LDA, LSDA, and VWN (the Vosko, Wilk, and Nusair functional) have become synonymous in the literature. [Pg.43]

As shown in Figure 4.12c, the limit of identification is selected such that there is an equal probability of type 1 and type 2 errors. The American Chemical Society's Committee on Environmental Analytical Chemistry recommends the limit of quantitation, (S_A)_LOQ, which is defined as ... [Pg.96]
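
As a sketch of how such a limit is computed in practice, a commonly quoted form of the ACS recommendation sets the quantitation signal at the mean reagent-blank signal plus ten times its standard deviation; the blank readings below are invented for illustration.

```python
import statistics

# Sketch of a limit-of-quantitation calculation:
# (S_A)_LOQ = S_reag + 10 * s_reag, where S_reag and s_reag are the mean
# and standard deviation of replicate reagent-blank signals.
# The blank readings below are made up for the example.

blank_signals = [0.23, 0.25, 0.21, 0.24, 0.22, 0.26, 0.23]

s_reag = statistics.mean(blank_signals)
sd_reag = statistics.stdev(blank_signals)
loq_signal = s_reag + 10 * sd_reag
print(f"mean blank = {s_reag:.3f}, s = {sd_reag:.3f}, "
      f"(S_A)_LOQ = {loq_signal:.3f}")
```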

When possible, quantitative analyses are best conducted using external standards. Emission intensity, however, is affected significantly by many parameters, including the temperature of the excitation source and the efficiency of atomization. An increase in temperature of 10 K, for example, results in a 4% change in the fraction of Na atoms present in the 3p excited state. The method of internal standards can be used when variations in source parameters are difficult to control. In this case an internal standard is selected that has an emission line close to that of the analyte to compensate for changes in the temperature of the excitation source. In addition, the internal standard should be subject to the same chemical interferences to compensate for changes in atomization efficiency. To accurately compensate for these errors, the analyte and internal standard emission lines must be monitored simultaneously. The method of standard additions also can be used. [Pg.438]
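
The quoted temperature sensitivity can be checked against the Boltzmann distribution. The sketch below assumes the Na 3s-to-3p excitation energy of the 589 nm D line, a statistical-weight ratio g*/g0 = 3, and a nominal source temperature of 2500 K; it reproduces the roughly 4% change for a 10 K rise.

```python
import math

H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
K = 1.381e-23      # Boltzmann constant, J/K

def excited_fraction(T, wavelength=589.0e-9, g_ratio=3.0):
    """Boltzmann ratio N*/N0 = (g*/g0) * exp(-dE/kT) for the Na 3p level."""
    dE = H * C / wavelength    # excitation energy from the emission line
    return g_ratio * math.exp(-dE / (K * T))

f1 = excited_fraction(2500.0)
f2 = excited_fraction(2510.0)
print(f"N*/N0 at 2500 K: {f1:.3e}")
print(f"relative change for +10 K: {100 * (f2 - f1) / f1:.1f}%")   # ~4%
```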

In the previous section we described several internal methods of quality assessment that provide quantitative estimates of the systematic and random errors present in an analytical system. Now we turn our attention to how this numerical information is incorporated into the written directives of a complete quality assurance program. Two approaches to developing quality assurance programs have been described: a prescriptive approach, in which an exact method of quality assessment is prescribed, and a performance-based approach, in which any form of quality assessment is acceptable, provided that an acceptable level of statistical control can be demonstrated. [Pg.712]

Chemical analysis of the metal can serve various purposes. For the determination of the metal-alloy composition, a variety of techniques has been used. In the past, wet-chemical analysis was often employed, but the significant size of the sample needed was a primary drawback. Nondestructive, energy-dispersive x-ray fluorescence spectrometry is often used when no high precision is needed. However, this technique only allows a surface analysis, and significant surface phenomena such as preferential enrichments and depletions, which often occur in objects having a burial history, can cause serious errors. For more precise quantitative analyses, samples have to be removed from below the surface to be analyzed by means of atomic absorption (82), spectrographic techniques (78,83), etc. [Pg.421]

Thermal decomposition of perchlorate salts to chloride, followed by the gravimetric determination of the resulting chloride, is a standard method of determining quantitatively the concentration of perchlorates. Any chlorates that are present in the original sample also break down to chloride. Thus results are adjusted to eliminate errors introduced by the presence of any chlorides and chlorates in the original sample. [Pg.68]
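
A minimal sketch of the back-calculation, assuming the liberated chloride is weighed as AgCl and that the original chloride and chlorate contents are known from separate assays; all masses and amounts below are illustrative.

```python
# Sketch of the gravimetric back-calculation for perchlorate.
# Thermal decomposition converts ClO4- (and any ClO3-) to Cl-, which is
# weighed as AgCl; chloride and chlorate originally present must be
# determined separately and subtracted. All numbers are illustrative.

M_AGCL = 143.32    # g/mol
M_CLO4 = 99.45     # g/mol

mass_agcl = 0.2865          # g of AgCl from the decomposed sample
mol_cl_total = mass_agcl / M_AGCL

mol_cl_original = 1.0e-4    # mol Cl- in the untreated sample (separate assay)
mol_clo3_original = 5.0e-5  # mol ClO3- in the untreated sample (separate assay)

mol_clo4 = mol_cl_total - mol_cl_original - mol_clo3_original
print(f"perchlorate found: {mol_clo4:.4e} mol "
      f"({mol_clo4 * M_CLO4 * 1000:.1f} mg as ClO4-)")
```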

Process Hazards Analysis. Analysis of processes for unrecognized or inadequately controlled hazards (see Hazard analysis and risk assessment) is required by OSHA (36). The principal methods of analysis, in an approximate ascending order of intensity, are: what-if; checklist; failure modes and effects; hazard and operability (HAZOP); and fault-tree analysis. Other complementary methods include human error prediction and cost/benefit analysis. The HAZOP method is the most popular as of 1995 because it can be used to identify hazards, pinpoint their causes and consequences, and disclose the need for protective systems. Fault-tree analysis is the method to be used if a quantitative evaluation of operational safety is needed to justify the implementation of process improvements. [Pg.102]
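
As a sketch of the quantitative evaluation that fault-tree analysis provides, top-event probabilities can be propagated through AND gates (product of independent input probabilities) and OR gates (complement of the product of complements). The tree structure and the failure probabilities below are hypothetical.

```python
import math

# Minimal fault-tree evaluation assuming independent basic events.

def and_gate(probs):
    """All inputs must fail: P = product of the input probabilities."""
    return math.prod(probs)

def or_gate(probs):
    """Any single input failing suffices: P = 1 - prod(1 - p_i)."""
    return 1.0 - math.prod(1.0 - p for p in probs)

# Hypothetical top event: a release requires a process upset AND failure
# of the protective system; the upset arises from overpressure OR
# overtemperature, and the protection has two redundant trips.
p_upset = or_gate([0.03, 0.02])              # per-year excursion probabilities
p_protection = and_gate([0.02, 0.03])        # both trips fail on demand
p_release = and_gate([p_upset, p_protection])

print(f"P(upset)            = {p_upset:.3e}")
print(f"P(protection fails) = {p_protection:.3e}")
print(f"P(release)          = {p_release:.3e}")
```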

Although the most sensitive line for cadmium in the arc or spark spectrum is at 228.8 nm, the line at 326.1 nm is more convenient to use for spectroscopic detection. The limit of detection at this wavelength amounts to 0.001% cadmium with ordinary techniques and 0.00001% using specialized methods. Determination at concentrations up to 10% is accomplished by solubilization of the sample followed by atomic absorption measurement. The range can be extended to still higher cadmium levels provided that a relative error of 0.5% is acceptable. Another quantitative analysis method is titration at pH 10 with a standard solution of ethylenediaminetetraacetic acid (EDTA) and Eriochrome Black T indicator. Zinc interferes and therefore must first be removed. [Pg.388]
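
Because EDTA complexes metal ions in a 1:1 ratio, the titre converts directly to cadmium content; a minimal sketch follows, with a made-up titre and sample mass.

```python
# Cadmium by EDTA titration at pH 10 (Eriochrome Black T endpoint).
# EDTA binds Cd2+ 1:1, so mol Cd = M_EDTA * V_EDTA at the endpoint.
# The titre and sample mass below are illustrative; zinc must have been
# removed beforehand, since it titrates identically.

M_CD = 112.41                 # g/mol

conc_edta = 0.0100            # mol/L standard EDTA
vol_edta = 12.35e-3           # L delivered at the endpoint
sample_mass = 0.5000          # g of dissolved sample

mol_cd = conc_edta * vol_edta
pct_cd = 100.0 * mol_cd * M_CD / sample_mass
print(f"Cd found: {mol_cd:.4e} mol = {pct_cd:.2f}% of sample")
```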

For a sequence of reaction steps, two more concepts are used in kinetics besides the previous rules for single reactions. One is the steady-state approximation and the second is the rate-limiting step concept. The two are, strictly speaking, incompatible, yet assuming both causes little error. Both were explained in Figure 6.1.1; Boudart (1968) credits Kenzi Tamaru with the graphical representation of reaction sequences. Here this will be used quantitatively on a logarithmic scale. [Pg.123]
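
The smallness of that error can be checked numerically on the simplest sequence, A -> B -> C with first-order steps: setting d[B]/dt = 0 gives [B]ss = (k1/k2)[A], which tracks the exact solution closely once k2 >> k1. The rate constants below are arbitrary.

```python
import math

# Steady-state approximation test for A -> B -> C (first-order steps).
# Exact:  [B](t)   = A0 * k1/(k2 - k1) * (exp(-k1 t) - exp(-k2 t))
# SSA:    [B]ss(t) = (k1/k2) * [A](t) = (k1/k2) * A0 * exp(-k1 t)

A0, k1, k2 = 1.0, 0.1, 10.0     # k2 >> k1, so the SSA should hold

def b_exact(t):
    return A0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))

def b_ssa(t):
    return (k1 / k2) * A0 * math.exp(-k1 * t)

for t in (0.5, 2.0, 10.0):
    exact, approx = b_exact(t), b_ssa(t)
    print(f"t={t:5}: exact={exact:.5f}  ssa={approx:.5f}  "
          f"rel. error={abs(approx - exact) / exact:.2%}")
```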

