
Problems with measurement techniques

Measurement of specific activity. The half-life of a nuclide can be readily calculated if both the number of atoms and their rate of decay can be measured, i.e., if the activity A and the number of atoms P can be measured, then λ is known from A = λP. As instrumentation for both atom counting and decay counting has improved in recent decades, this approach has become the dominant method of assessing half-lives. Potential problems with this technique include the accurate and precise calibration of decay-counter efficiency and ensuring sufficient purity of the nuclide of interest. This technique provides the presently used half-lives for many nuclides, including those for the parents of the three decay chains, ²³⁸U, ²³⁵U (Jaffey et al. 1971), and ²³²Th. [Pg.15]
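The relation is simple enough to show numerically. Below is a minimal Python sketch of the A = λP approach; the 1 mg ²³⁸U example uses approximate literature values for illustration only.

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def half_life_from_activity(num_atoms: float, activity_bq: float) -> float:
    """Half-life (s) from atom count P and activity A, using A = lambda * P."""
    decay_constant = activity_bq / num_atoms   # lambda = A / P  (s^-1)
    return math.log(2) / decay_constant        # t_1/2 = ln 2 / lambda

# Illustration: ~2.53e18 atoms in 1 mg of 238U, with an activity of ~12.4 Bq
num_atoms = 1e-3 / 238.05 * AVOGADRO
activity = 12.44                               # Bq, approximate
t_half = half_life_from_activity(num_atoms, activity)
print(f"t1/2 ≈ {t_half / SECONDS_PER_YEAR:.2e} yr")   # ~4.5e9 yr
```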

The problem with this technique, and the reason why it has been so slow to develop, is that it is extremely difficult to measure W isotopic compositions by TIMS because of the very high first ionization potential (7.98 eV). Negative ion techniques have met with some success (Volkening et al., 1991; Harper et al., 1991a; Harper and Jacobsen, 1996; Jacobsen and Harper, 1996). However, W isotopic compositions can be measured relatively easily by using MC-ICP-MS (Halliday et al., 1995; Lee and Halliday, 1995a,b, 1996, 1997; Lee et al., 1997). [Pg.310]

An alternative technique for the measurement of the space-charge potential drop is to study the photopotential. The theory of this is discussed in some detail in Sect. 7, but the essential features of the measurement are shown in Fig. 16. After equilibration in the dark, when the potential of the electrode at open circuit becomes equal to the redox potential Vredox, the light is turned on and the electrode potential changes at open circuit in such a way that the bands become flat. There are many problems with this technique, and it is considerably less reliable than a properly conducted a.c. experiment, but it may give a reasonably accurate picture if surface recombination is small (vide infra). Some results for p-GaAs in aqueous solution are shown in Fig. 17; the S values derived are of the order of 0.7, though the dispersion apparent in Fig. 17 makes a quantitative interpretation difficult. [Pg.89]
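As a concrete illustration of the measurement logic (not of the original apparatus), here is a minimal sketch: the dark open-circuit potential gives Vredox, the illuminated open-circuit potential approaches the flat-band potential if surface recombination is small, and their difference is the photopotential, equal to the initial band bending. All numbers are hypothetical.

```python
def analyze_photopotential(v_oc_dark: float, v_oc_light: float) -> None:
    """
    Open-circuit photopotential analysis. In the dark the electrode equilibrates
    at V_redox; under saturating illumination the bands flatten, so the
    open-circuit potential approaches V_fb. Valid only when surface
    recombination is small.
    """
    v_redox = v_oc_dark
    v_fb = v_oc_light
    band_bending = v_redox - v_fb        # initial band bending in the dark
    print(f"V_redox = {v_redox:+.2f} V, V_fb ≈ {v_fb:+.2f} V, "
          f"photopotential (band bending) = {band_bending:+.2f} V")

# Hypothetical p-type electrode values (V vs. reference), for illustration only:
analyze_photopotential(v_oc_dark=0.20, v_oc_light=-0.45)
```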

Now we move on to consider the analysis of copolymers. There are usually two things we would like to know: first, the composition of the copolymer and, second, some measure of sequence distributions. Again, in the early years, before the advent of commercial NMR instruments, infrared spectroscopy was the most widely used tool. The problem with the technique is that it requires that the spectrum contain bands that can be unambiguously assigned to specific functional groups, as in the (transmission) spectrum of an acrylonitrile/methyl methacrylate copolymer shown in Figure 7-43 (you can tell this is a really old spectrum, not only because it is plotted in transmission, but also because the wavelength scale is in microns). [Pg.197]
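When such bands are available, composition follows from Beer-Lambert absorbance ratios. The sketch below is a generic illustration rather than the book's procedure: the calibration constant k and the absorbance values are hypothetical, and the band positions (nitrile stretch near 2240 cm⁻¹, ester carbonyl near 1730 cm⁻¹) are the usual assignments.

```python
def acrylonitrile_mole_fraction(a_nitrile: float, a_carbonyl: float,
                                k: float) -> float:
    """
    Copolymer composition from an IR band ratio. Beer-Lambert gives
    A_i = a_i * c_i * b for each band (same path length b), so
    c_AN / c_MMA = k * (A_nitrile / A_carbonyl), with k = a_carbonyl / a_nitrile
    determined from copolymers of known composition.
    """
    mole_ratio = k * a_nitrile / a_carbonyl
    return mole_ratio / (1.0 + mole_ratio)

# Hypothetical calibration constant and measured absorbances:
x_an = acrylonitrile_mole_fraction(a_nitrile=0.31, a_carbonyl=0.82, k=2.5)
print(f"mole fraction AN ≈ {x_an:.2f}")
```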

In the acetylene inhibition technique, acetylene is added to a water sample, which inhibits the reduction of N2O to N2 (Sorensen, 1978). The accumulation of N2O is then measured using gas chromatography and an electron capture detector, and the denitrification rate is taken to be equal to the total N2O flux. One potential problem is incomplete inhibition of N2O reduction to N2, particularly in the presence of hydrogen sulfide, a compound commonly found under anaerobic conditions. Another potential problem with the technique is that acetylene also inhibits nitrification, a process that often supplies the NO₃⁻ and NO₂⁻ substrates for denitrification. To inhibit nitrification is to inhibit denitrification if the latter is at all substrate limited (Hynes and Knowles, 1978). [Pg.1254]
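In practice the rate is usually taken from the slope of N2O accumulation against incubation time. A minimal sketch with hypothetical data:

```python
import numpy as np

def denitrification_rate(t_hours: np.ndarray, n2o_umol: np.ndarray) -> float:
    """
    Acetylene-block estimate: with C2H2 blocking the N2O -> N2 step, the
    denitrification rate is the slope of a linear fit of accumulated N2O
    versus incubation time.
    """
    slope, _intercept = np.polyfit(t_hours, n2o_umol, 1)
    return slope   # umol N2O per hour per sample

# Hypothetical incubation time series (ECD-GC measurements of headspace N2O):
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
n2o = np.array([0.02, 0.41, 0.83, 1.20, 1.64])
print(f"rate ≈ {denitrification_rate(t, n2o):.2f} umol N2O/h")
```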

Problems with the technique essentially arise from the mode of measurement, for the following reasons ... [Pg.528]

The goal of mixing measurement techniques is the acquisition and statistical analysis of data collected from samples in order to evaluate the quality of the process and the final product. These techniques are usually time consuming and laborious. One common problem in measurement techniques that examine a planar section of the product is that two-dimensional information must be transformed into three-dimensional information. This is done with the help of the science of stereology (Underwood, 1977). [Pg.163]
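The simplest stereological bridge from 2-D to 3-D is the Delesse principle: for a random section through an isotropic, homogeneous material, the areal fraction of a phase equals its volume fraction (V_V = A_A). A minimal sketch, with a random array standing in for a segmented micrograph:

```python
import numpy as np

def volume_fraction(binary_section: np.ndarray) -> float:
    """Delesse principle: areal fraction on a random section estimates V_V."""
    return float(binary_section.mean())

# Stand-in for a segmented planar image (True where the phase of interest is):
rng = np.random.default_rng(0)
section = rng.random((512, 512)) < 0.18
print(f"estimated volume fraction ≈ {volume_fraction(section):.3f}")
```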

The single most severe drawback to reflectivity techniques in general is that the concentration profile in a specimen is not measured directly. Reflectivity is the optical transform of the concentration profile in the specimen. Since the measured reflectivity is an intensity of reflected neutrons, phase information is lost and one encounters the age-old inverse problem. However, the use of reflectivity with other techniques that place constraints on the concentration profiles circumvents this problem. [Pg.661]
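The phase loss is easy to demonstrate in the kinematic (Born) approximation, where R(Q) = (16π²/Q⁴) |∫ (dρ/dz) e^{iQz} dz|²: because only the modulus of the transform survives, a profile and its depth-reversed mirror image yield identical reflectivity curves. A sketch with an arbitrary model profile:

```python
import numpy as np

z = np.linspace(0.0, 600.0, 2048)          # depth (angstrom)
q = np.linspace(0.01, 0.3, 300)            # momentum transfer (1/angstrom)

# Arbitrary two-layer scattering-length-density profile and its mirror image
profile = (2.0e-6 / (1 + np.exp(-(z - 150) / 10))
           + 2.5e-6 / (1 + np.exp(-(z - 400) / 25)))
mirror = profile[::-1]

def born_reflectivity(rho: np.ndarray) -> np.ndarray:
    """Kinematic approximation: R(Q) = (16 pi^2 / Q^4) |FT{d(rho)/dz}|^2."""
    drho_dz = np.gradient(rho, z)
    ft = np.array([np.trapz(drho_dz * np.exp(1j * qi * z), z) for qi in q])
    return 16 * np.pi**2 / q**4 * np.abs(ft) ** 2

# Identical curves: the phase that would distinguish the two profiles is lost
print(np.allclose(born_reflectivity(profile), born_reflectivity(mirror)))  # True
```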

Surface analysis by non-resonant (NR-) laser-SNMS [3.102-3.106] has been used to improve ionization efficiency while retaining the advantages of probing the neutral component. In NR-laser-SNMS, an intense laser beam is used to ionize, non-selectively, all atoms and molecules within the volume intersected by the laser beam (Fig. 3.40b). With sufficient laser power density it is possible to saturate the ionization process. For NR-laser-SNMS, adequate power densities are typically achieved in a small volume only at the focus of the laser beam. This limits sensitivity and leads to problems with quantification, because of the differences between the effective ionization volumes of different elements. The non-resonant post-ionization technique provides rapid, multi-element, and molecular survey measurements with significantly improved ionization efficiency over SIMS, although it still suffers from isobaric interferences. [Pg.132]
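Quantification in such instruments is usually handled with element-specific relative sensitivity factors (RSFs), which absorb element-dependent effects such as the differing effective ionization volumes. The sketch below is a generic RSF normalization with hypothetical count rates, not a description of any particular instrument's software.

```python
def rsf_quantify(intensities: dict[str, float],
                 rsf: dict[str, float],
                 ref: str = "Fe") -> dict[str, float]:
    """
    Relative-sensitivity-factor quantification:
    c_X / c_ref = (I_X / I_ref) / RSF_X, with RSFs determined from standards
    of known composition. Results are normalized to atomic fractions.
    """
    i_ref = intensities[ref]
    raw = {el: (i / i_ref) / rsf[el] for el, i in intensities.items()}
    total = sum(raw.values())
    return {el: v / total for el, v in raw.items()}

# Hypothetical count rates and RSFs for a steel-like sample:
counts = {"Fe": 1.0e6, "Cr": 2.2e5, "Ni": 9.0e4}
rsfs = {"Fe": 1.0, "Cr": 1.3, "Ni": 0.8}
print(rsf_quantify(counts, rsfs))
```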

The choice of method from available resources depends largely upon the properties of the material to be analyzed, the basic significance or physical meaning of the measurement, and the purpose for which the information is required. For example, failure to disperse the particles as discrete entities is the biggest single problem in all size-analysis methods that depend on individual particulate behavior. With microscopic techniques the particles must be dispersed on the slide to permit observation of individual particles, and in sedimentation techniques the material must be suspended in the fluid so that the particles behave as individuals and not as flocs. [Pg.498]

The main problem in Eσ=0 vs. Φ correlations is that the two experimental quantities are as a rule measured in different laboratories with different techniques. In view of the sensitivity of both parameters to the surface state of the metal, their uncertainties can in principle be of the same order of magnitude as ΔX between two metals. On the other hand, it is rare that the same laboratory is equipped to measure both quantities. Moreover, the preparation of a single-crystal face is not always followed by a check of its perfection by means of appropriate spectroscopic techniques; in these cases we actually have only nominal single-crystal faces. This is probably the reason for the observation of some discrepancies between differently prepared samples with the same nominal surface structure. Fortunately, there have been a few cases in which both Eσ=0 and Φ have been measured in the same laboratory; these will be examined later. Such measurements have enabled the resolution of controversies that have long persisted because of the basic criticism of Eσ=0 vs. Φ plots. [Pg.157]

LDV is the traditional method that uses tracer particles to measure velocity and one-point statistics of turbulent properties [2]. It is still a very useful technique and has the advantage that it can measure closer to walls than PIV. An inherent problem with LDV is that it does not measure at a specific point but rather at places... [Pg.332]

Although the decomposition of a data table yields the elution profiles of the individual compounds, a calibration step is still required to transform peak areas into concentrations. Essentially, we can follow two approaches. The first is to start with a decomposition of the peak cluster by one of the techniques described before, followed by integration of the analyte peak; by comparing the peak area with those obtained for a number of standards, we obtain the amount. One should realize that the decomposition step is necessary because the interfering compound is unknown. The second approach is to calibrate the method directly by RAFA, RBL, or GRAFA, or to decompose the three-way table by PARAFAC. A serious problem with these methods is that the data sets measured for the sample and for the standard solution must be perfectly synchronized. [Pg.303]
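For the first approach, the calibration step itself is a routine external-standard fit. A minimal sketch with hypothetical standards, applied to the peak area recovered from the decomposition:

```python
import numpy as np

def external_standard_quantify(std_conc: np.ndarray,
                               std_area: np.ndarray,
                               sample_area: float) -> float:
    """Fit area = b*conc + a on the standards, then invert for the sample."""
    b, a = np.polyfit(std_conc, std_area, 1)
    return (sample_area - a) / b

# Hypothetical standards (mg/L vs. resolved peak area) and one sample peak:
conc = np.array([1.0, 2.0, 5.0, 10.0])
area = np.array([0.95, 2.1, 5.2, 10.3])
print(f"sample ≈ {external_standard_quantify(conc, area, sample_area=3.4):.2f} mg/L")
```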

