Big Chemical Encyclopedia


Fundamental parameters approach

Quantitative XRF analysis has developed from specific to universal methods. When computational facilities were poor, methods were limited to the determination of a few elements in well-defined concentration ranges, either by statistical treatment of experimental data from reference materials (linear or second-order calibration curves) or by compensation methods (dilution, internal standards, etc.). Later, semi-empirical influence coefficient methods were introduced. Universality came about through the development of fundamental parameter approaches for the correction of total matrix effects... [Pg.631]

Alternatively, fundamental parameter methods (FPM) may be used to simulate analytical calibrations for homogeneous materials. From a theoretical point of view, there is a wide choice of equivalent fundamental algorithms for converting intensities to concentrations in quantitative XRF analysis. The fundamental parameters approach was originally proposed by Criss and Birks [239]. A number of assumptions underlie the application of theoretical methods, namely that the specimens be thick, flat and homogeneous, and that, for calibration purposes, the concentrations of all the elements in the reference material be known (having been determined by alternative methods). The classical formalism proposed by Criss and Birks [239] is equivalent to the fundamental influence coefficient formalisms (see ref. [232]). In contrast to empirical influence coefficient methods, in which the experimental intensities from reference materials are used to compute the values of the coefficients, the fundamental influence coefficient approach calculates... [Pg.632]
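Because the fundamental influence coefficient calculation is in practice iterative, a minimal sketch may help fix the idea. The loop below is a toy fixed-point iteration in the spirit of such corrections; the element names, intensity ratios, and constant "matrix factors" are hypothetical stand-ins for a real Sherman-equation evaluation with tabulated fundamental parameters.

```python
# Hedged sketch: a fixed-point iteration of a fundamental-parameters-style
# correction loop. All numbers below are hypothetical illustrations, not
# values from Criss and Birks.

def iterate_concentrations(relative_intensities, matrix_factor, n_iter=50):
    """Convert measured relative intensities to concentration estimates.

    relative_intensities: dict element -> intensity ratio (sample / pure element).
    matrix_factor: callable(element, concentrations) -> correction factor for
                   absorption/enhancement in the current composition estimate.
    """
    # Initial guess: normalized raw intensity ratios.
    total = sum(relative_intensities.values())
    conc = {el: r / total for el, r in relative_intensities.items()}
    for _ in range(n_iter):
        corrected = {el: relative_intensities[el] * matrix_factor(el, conc)
                     for el in conc}
        s = sum(corrected.values())
        conc = {el: v / s for el, v in corrected.items()}  # renormalize
    return conc

# Toy matrix factors (hypothetical). In a real FPM code this callable would
# evaluate the Sherman equation with tabulated mass-absorption coefficients.
absorption = {"Fe": 1.20, "Ni": 0.95, "Cr": 1.10}
def toy_factor(el, conc):
    return absorption[el]

measured = {"Fe": 0.50, "Ni": 0.30, "Cr": 0.20}  # hypothetical ratios
result = iterate_concentrations(measured, toy_factor)
print({el: round(c, 3) for el, c in result.items()})
```

The renormalization step enforces the usual constraint that the estimated mass fractions sum to unity.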

The fundamental parameters approach has recently been implemented in some current programs. It gives, to a certain extent, physical meaning to the parameters involved in pattern decomposition. The method attempts to model the contributions of the various instrumental components (such as monochromators and slits) and of the geometry to the observed peak profile shapes. Because it considers the relevant physics of the generation, diffraction, and detection of the powder X-ray diffraction (PXRD) signal, it carries more physical meaning than the two methods described earlier. It can be used not only to perform full pattern decomposition, but also to carry out a standardless refinement of sample effects such as crystallite size and microstrain. [Pg.6433]

FIGURE 10.6 Result of the fitting using the fundamental parameter approach in ZnO/SiO2 powder (2:1). The black and gray lines correspond to the measured and calculated XRD patterns, respectively. (From Tani, T., Madler, L., and Pratsinis, S.E., Synthesis of zinc oxide/silica composite nanoparticles by flame spray pyrolysis, J. Mater. Sci., 37, 4627, 2002.)... [Pg.31]

Cheary, R.W., and Coelho, A., A fundamental parameters approach to X-ray line-profile fitting, J. Appl. Crystallogr., 25, 109, 1992. [Pg.50]

In general, three different approaches to the description of peak shapes can be used. The first employs empirical peak shape functions, which fit the profile without attempting to associate their parameters with physical quantities. The second is a semi-empirical approach that describes instrumental and wavelength dispersion functions using empirical functions, while specimen properties are modeled using realistic physical parameters. In the third, the so-called fundamental parameters approach, all three components of the peak shape function (Eq. 2.45) are modeled using rational physical quantities. [Pg.172]
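The first (empirical) approach can be illustrated with the pseudo-Voigt function, a weighted sum of a Gaussian and a Lorentzian sharing one full width at half maximum (FWHM). A minimal sketch, with illustrative parameter values:

```python
import math

# Hedged sketch of an empirical pseudo-Voigt peak shape. The positions and
# widths below are illustrative, not values from the text.

def pseudo_voigt(x, x0, fwhm, eta, amplitude=1.0):
    """Mixing parameter eta = 1 gives a pure Lorentzian, eta = 0 a pure Gaussian."""
    hwhm = fwhm / 2.0
    gauss = math.exp(-math.log(2.0) * ((x - x0) / hwhm) ** 2)
    lorentz = 1.0 / (1.0 + ((x - x0) / hwhm) ** 2)
    return amplitude * (eta * lorentz + (1.0 - eta) * gauss)

# Both components equal 1 at the maximum and 0.5 at x0 +/- FWHM/2,
# so the mixed profile keeps the same FWHM for any eta.
peak_max = pseudo_voigt(30.0, x0=30.0, fwhm=0.1, eta=0.5)
half_height = pseudo_voigt(30.05, x0=30.0, fwhm=0.1, eta=0.5)
print(peak_max, half_height)
```

Fitting then adjusts x0, FWHM, eta, and amplitude per peak without assigning them instrumental or specimen meaning, which is exactly what distinguishes this approach from the fundamental parameters one.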

From this point of view, some applications of the modified pseudo-Voigt function (e.g. third and fourth peak shape functions employed in GSAS) are in a way similar to the fundamental parameters approach as they use instrumental parameters to describe certain aspects of peak shape. [Pg.181]

R.W. Cheary and A. Coelho, A fundamental parameters approach to X-ray line-profile fitting, J. Appl. Cryst. 25, 109 (1992); R.W. Cheary and A.A. Coelho, Axial divergence in a conventional X-ray powder diffractometer. II. Realization and evaluation in a fundamental-parameter profile fitting procedure, J. Appl. Cryst. 31, 862 (1998). [Pg.181]

The above description is actually a simplified version of reality, since a high-resolution analysis of the spectral lines of Cu Kα shows that both the α1 and α2 peaks are distinctly asymmetric. An understanding of the origin of this asymmetry is important in implementing the so-called fundamental parameters approach to the profile fitting of powder diffraction data peaks, described in Chapters 5, 6, 9 and 13, in which the detailed spectrum of the incident X-rays must be known. A combination of five Lorentzian functions is commonly used to model the peak shape of Cu radiation, though detailed investigations to characterize the X-ray spectrum continue. [Pg.24]
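As a rough illustration of such a multi-Lorentzian emission model, the sketch below sums five Lorentzian components into a doublet-like profile. The positions, widths, and weights are hypothetical placeholders, not the published characterization of the Cu Kα spectrum:

```python
# Hedged sketch: an emission doublet modeled as a sum of five Lorentzians,
# in the spirit of the common Cu K-alpha parameterization. All component
# parameters below are hypothetical placeholders.

def lorentzian(e, e0, gamma, weight):
    """Lorentzian of FWHM gamma, unit height at e0, scaled by weight."""
    return weight * (gamma / 2) ** 2 / ((e - e0) ** 2 + (gamma / 2) ** 2)

# (energy_keV, FWHM_keV, relative_weight) -- illustrative values only
components = [
    (8.0478, 0.0023, 0.58),  # main K-alpha-1 term
    (8.0452, 0.0033, 0.08),  # K-alpha-1 asymmetry term
    (8.0278, 0.0024, 0.24),  # main K-alpha-2 term
    (8.0255, 0.0040, 0.07),  # K-alpha-2 asymmetry term
    (8.0100, 0.0100, 0.03),  # broad low-energy satellite term
]

def spectrum(e):
    return sum(lorentzian(e, e0, g, w) for e0, g, w in components)

# The summed profile peaks near K-alpha-1, with a weaker K-alpha-2 shoulder;
# the extra low-side terms give each line its asymmetric tail.
print(spectrum(8.0478) > spectrum(8.0278) > spectrum(8.10))
```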

A viable alternative is using the so-called Fundamental Parameters Approach to synthesize the IP (Chapters 5 and 6). In fact, the IP is itself given by a convolution of profiles, chiefly those produced by wavelength dispersion, optical components and absorption. If the geometry of the PD instrument is known and sufficiently accurate, the FPA can provide a calculated IP without the need for using powder standards. [Pg.387]
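The idea of synthesizing the IP as a convolution of component profiles can be sketched numerically. The component shapes below (a Gaussian for wavelength dispersion, a box for a slit, a one-sided exponential for an asymmetric aberration such as absorption) are illustrative choices, not the actual FPA aberration functions:

```python
import numpy as np

# Hedged sketch: building an instrumental profile (IP) by numerically
# convolving component aberration profiles, in the spirit of the FPA.
# All widths and shapes below are illustrative.

x = np.linspace(-0.5, 0.5, 1001)   # degrees 2-theta, centered grid
dx = x[1] - x[0]

gaussian = np.exp(-0.5 * (x / 0.02) ** 2)            # emission/dispersion
box = np.where(np.abs(x) < 0.05, 1.0, 0.0)           # flat slit aberration
exp_tail = np.where(x >= 0, np.exp(-x / 0.03), 0.0)  # asymmetric absorption

def convolve_norm(a, b):
    c = np.convolve(a, b, mode="same") * dx
    return c / (c.sum() * dx)       # keep unit area

ip = convolve_norm(convolve_norm(gaussian, box), exp_tail)

# The one-sided exponential skews the IP: its centroid sits above the
# nominal peak position, as asymmetric aberrations do in practice.
mean = np.sum(x * ip) * dx
print(round(float(mean), 4))
```

A profile-fitting code would then convolve this synthesized IP with a specimen broadening function, removing the need for a powder standard to measure the IP.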


TOPAS: A fundamental parameters approach to X-ray line-profile fitting,... [Pg.532]

But for methods without internal standard, care must be taken to matrix-match the standards and the unknowns, and to work with constant weight in the sample cups. The alternative procedure is to account for the wedge effect and to use a full fundamental parameter approach to the analysis. [Pg.106]

Samples must be solid and may be in almost any form. Thin films, bulk solids, particles, powders, machined pieces, and small objects (including biological specimens) can be analyzed. All elements from beryllium (Z = 4) to U can be determined at concentrations of about 100 ppm or greater. For qualitative analysis, the surface finish of the sample is not important. For quantitative analysis the surface of the specimen must be flat. A common method for achieving a flat surface for an SEM sample is to embed the sample in epoxy and then carefully polish the hardened epoxy to expose a flat surface of the sample. Calibration standards should have flat surfaces as well, and the composition of the standards should be similar to that of the samples. Alternatively, a fundamental parameters approach using pure element standards can be used. [Pg.594]

By 1953 a number of automated x-ray spectrometers were in use, of which the Philips Autrometer was typical. This 25-channel sequential machine was programmed by a combination of switches, servo-motors, and mechanical stops that required many hours of careful mechanical adjustment to set up. By the early 1960s multichannel spectrometers were also beginning to appear. With the need for greater accuracy, in x-ray fluorescence (XRF) especially, came the need for matrix correction. Early work at, for example, the British Non-Ferrous Metals Research Association employed a table of correction factors that could be applied with a slide rule. From this grew the Lucas-Tooth/Price intensity correction models [3], linear equations requiring only simple computers. Soon after came the concentration correction models of Lachance and Traill [4], and Rasberry and Heinrich [5]. These concentration correction models needed matrix inversion, and thus more computation. Next came Criss's fundamental parameter approach [6], which derived from earlier work by Sherman [7]. [Pg.243]
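Of the concentration correction models mentioned, the Lachance-Traill form C_i = R_i (1 + sum_j alpha_ij C_j) is simple enough to sketch. Here it is solved by fixed-point iteration rather than matrix inversion, with hypothetical alpha coefficients and intensity ratios:

```python
# Hedged sketch of a Lachance-Traill-style concentration correction,
#   C_i = R_i * (1 + sum_j alpha_ij * C_j),
# solved iteratively. The alpha coefficients and measured intensity
# ratios below are hypothetical illustrations.

alphas = {  # alpha_ij: effect of element j on analyte i (illustrative)
    ("Fe", "Cr"): 0.15, ("Fe", "Ni"): -0.05,
    ("Cr", "Fe"): 0.10, ("Cr", "Ni"): 0.02,
    ("Ni", "Fe"): -0.08, ("Ni", "Cr"): 0.05,
}
ratios = {"Fe": 0.60, "Cr": 0.18, "Ni": 0.20}  # measured R_i (hypothetical)

conc = dict(ratios)  # start from the raw intensity ratios
for _ in range(100):
    # Jacobi-style update: each C_i uses the previous iteration's C_j.
    conc = {i: ratios[i] * (1.0 + sum(alphas.get((i, j), 0.0) * conc[j]
                                      for j in conc if j != i))
            for i in conc}

print({el: round(c, 4) for el, c in conc.items()})
```

With small alpha values the iteration converges quickly, which is why such models were tractable on the simple computers of the period.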

The problem of adequately describing the spectral distribution of the primary radiation inhibited the development of absolute intensity/concentration algorithms until 1968, when Birks's group at the U.S. Naval Research Laboratory published details of their fundamental parameters approach [30]. This method differed from previously published absolute methods principally in its use of measured primary spectra [5] rather than calculated data. The value of such an absolute method... [Pg.367]

In the analysis of infinitely thick specimens, one can also utilize experimental thick-target sensitivity factors instead of relying on the fundamental parameter approach or on experimental thin-target sensitivities. The thick-target sensitivity factors incorporate the integral of eqn [5] and are usually expressed in X-ray counts per µC and per µg/g. They are commonly derived from PIXE measurements on standard samples. In a strict sense, the thick-target factors are only valid for the analysis of unknown samples with identical (matrix) composition as the standards, but in practice, some variability in composition can be tolerated or corrected for. [Pg.5218]
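A minimal worked example of the thick-target conversion, with hypothetical numbers chosen only to show the units working out:

```python
# Hedged sketch: converting a measured PIXE peak area to a concentration
# using an experimental thick-target sensitivity factor. All numbers are
# hypothetical illustrations.

sensitivity = 12.0    # X-ray counts per uC per (ug/g), from standard samples
peak_counts = 5.4e4   # background-subtracted counts in the analyte line
charge_uC = 10.0      # accumulated beam charge in microcoulombs

# counts / (counts per uC per (ug/g) * uC)  ->  ug/g
concentration_ug_per_g = peak_counts / (sensitivity * charge_uC)
print(concentration_ug_per_g)  # 450.0 ug/g
```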

To convert the peak intensities into elemental mass concentrations, a fundamental parameter approach is used. According to this approach, the intensity N_ij of the fluorescent X-ray line i of the jth element is related to the mass m_j of the element present in the sample... [Pg.52]





