Samples factor analysis

Precision: When the analyte's concentration is well above the detection limit, the relative standard deviation for fluorescence is usually 0.5-2%. The limiting instrumental factor affecting precision is the stability of the excitation source. The precision for phosphorescence is often limited by reproducibility in preparing samples for analysis, with relative standard deviations of 5-10% being common. [Pg.432]
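
A minimal sketch of how the quoted figures are computed: the percent relative standard deviation (RSD) of replicate fluorescence readings. The replicate intensities below are hypothetical.

```python
import numpy as np

def relative_std_dev(replicates):
    """Percent relative standard deviation (RSD) of replicate signals."""
    replicates = np.asarray(replicates, dtype=float)
    return 100.0 * replicates.std(ddof=1) / replicates.mean()

# Hypothetical replicate fluorescence intensities for one sample
fluorescence = [1052.0, 1047.0, 1060.0, 1049.0, 1055.0]
print(f"RSD = {relative_std_dev(fluorescence):.2f}%")   # ~0.5%, within the 0.5-2% range quoted above
```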

Comparison between flame-sampled PIE curves for (a) m/z = 90 (C7H6) and (b) m/z = 92 (C7H8) with the PIE spectra simulated based on a Franck-Condon factor analysis and the cold-flow PIE spectrum of toluene. Calculated ionization energies of some isomers are indicated. (From Hansen, N. et al., J. Phys. Chem. A, 2007. With permission.)... [Pg.9]

In 1978, Ho et al. [33] published an algorithm for rank annihilation factor analysis. The procedure requires two bilinear data sets: a calibration standard set and a sample set. The calibration set is obtained by measuring a standard mixture which contains known amounts of the analytes of interest. The sample set contains the measurements of the sample in which the analytes have to be quantified. Let us assume that we are only interested in one analyte. By a PCA we obtain the rank R of the sample data matrix, which is theoretically equal to 1 + n, where n is the number of interfering compounds. Because the calibration set contains only one compound, its rank is equal to one. [Pg.298]
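
A minimal sketch, in Python, of the rank-determination step described above: the pseudo-rank of a simulated bilinear sample matrix is estimated from its singular values (equivalent to a PCA). The matrix sizes, noise level and tolerance are assumptions for illustration, not part of the Ho et al. algorithm.

```python
import numpy as np

def pseudo_rank(X, rel_tol=1e-3):
    """Estimate the chemical rank of a bilinear data matrix from its singular values.
    Singular values below rel_tol * (largest value) are treated as noise."""
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

rng = np.random.default_rng(0)
# Simulated bilinear sample set: one analyte + two interferents (rank 3), plus noise
spectra = rng.random((3, 50))      # pure spectra (rows)
profiles = rng.random((40, 3))     # e.g. elution/time profiles (columns)
X_sample = profiles @ spectra + 1e-4 * rng.standard_normal((40, 50))

print(pseudo_rank(X_sample))       # expected: 3, i.e. 1 analyte + n = 2 interferents
```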

P.K. Hopke, D.J. Alpert and B.A. Roscoe, FANTASIA — A program for target transformation factor analysis to apportion sources in environmental samples. Comput. Chem., 7 (1983) 149-155. [Pg.304]

Calculation to determine µg/kg found in soil and sediment test samples by average response factor analysis ... [Pg.1189]
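
The calculation itself is truncated in the excerpt; the sketch below shows a typical average-response-factor computation of a µg/kg result under assumed units (extract concentration in µg/mL, sample mass in g). All numbers, function names and the exact scaling convention are hypothetical.

```python
import numpy as np

def avg_response_factor(areas, concs):
    """Average response factor from calibration standards (peak area per unit concentration)."""
    return np.mean(np.asarray(areas, float) / np.asarray(concs, float))

def ug_per_kg(sample_area, avg_rf, extract_volume_mL, dilution, sample_mass_g):
    """Analyte concentration in the soil/sediment sample, in ug/kg.
    Extract concentration (ug/mL) = area / average RF, scaled by extract volume,
    dilution factor and sample mass."""
    conc_extract = sample_area / avg_rf                  # ug/mL in the final extract
    total_ug = conc_extract * extract_volume_mL * dilution
    return total_ug / (sample_mass_g / 1000.0)           # ug per kg of sample

# Hypothetical calibration standards and one soil extract
rf = avg_response_factor(areas=[1500, 3050, 6100], concs=[0.5, 1.0, 2.0])   # area per (ug/mL)
print(ug_per_kg(sample_area=2400, avg_rf=rf, extract_volume_mL=10.0,
                dilution=1.0, sample_mass_g=20.0))       # ~400 ug/kg
```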

For the detection of slow-acting biological agents (which may not produce symptoms for several days), the system response time would depend on the frequency of sampling and analysis. The frequency of sampling and analysis would be determined by factors such as the cost of the assay, the frequency with which critical reagents need to be replaced, the robustness of the detector, and so on. The minimum response time would be determined by the time required to collect a sample, prepare it for analysis, conduct the assay, and report the results. In the event of an alarm from a detector with a significant false-alarm rate, additional time would be required to determine its validity and to decide on an appropriate response. [Pg.16]

An important application field of factor and principal component analysis is environmental analysis. Einax and Danzer [1989] used FA to characterize the emission sources of airborne particulates which had been sampled in urban screening networks in two cities and at one single site. The result of the factor analysis, based on the contents of 16 elements (Al, B, Ba, Cr, Cu, Fe, Mg, Mn, Mo, Ni, Pb, Si, Sn, Ti, V, Zn) determined by optical atomic emission spectrography, can be seen in Fig. 8.17. In Table 8.3 the common factors, their essential loadings, and the sources derived from them are given. [Pg.266]

In each of the aforementioned studies, qualitative IR spectroscopy was used. It is important to realize that IR is also quantitative in nature, and several quantitative IR assays for polymorphism have appeared in the literature. Sulfamethoxazole [35] exists in at least two polymorphic forms, which have been fully characterized. Distinctly different diffuse reflectance mid-IR spectra exist, permitting quantitation of one form within the other. When working with the diffuse reflectance IR technique, two critical factors must be kept in mind when developing a quantitative assay: (1) the production of homogeneous calibration and validation samples, and (2) consistent particle size for all components, including subsequent samples for analysis. During the assay development for... [Pg.73]

A sample may be characterized by the determination of a number of different analytes. For example, a hydrocarbon mixture can be analysed by use of a series of UV absorption peaks. Alternatively, in a sediment sample a range of trace metals may be determined. Collectively, these data represent patterns characteristic of the samples, and similar samples will have similar patterns. Results may be compared by vectorial presentation of the variables, when the variables for similar samples will form clusters. Hence the term cluster analysis. Where only two variables are studied, clusters are readily recognized in a two-dimensional graphical presentation. For more complex systems with more variables, i.e. n, the clusters will be in n-dimensional space. Principal component analysis (PCA) explores the interdependence of pairs of variables in order to reduce the number to certain principal components. A practical example could be drawn from the sediment analysis mentioned above. Trace metals are often attached to sediment particles by sorption on to the hydrous oxides of Al, Fe and Mn that are present. The Al content could be a principal component to which the other metal contents are related. Factor analysis is a more sophisticated form of principal component analysis. [Pg.22]
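
A short sketch of this idea using synthetic sediment data: trace-metal contents that largely follow the Al content collapse onto one dominant principal component. PCA is computed here by SVD of the autoscaled data matrix; the element choices and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic sediment data: Pb, Zn, Cu largely follow the Al (hydrous-oxide) content
n = 30
al = rng.normal(5.0, 1.0, n)                 # % Al
pb = 12 * al + rng.normal(0, 2, n)           # ppm, sorbed on the oxides
zn = 30 * al + rng.normal(0, 5, n)
cu = 8 * al + rng.normal(0, 1.5, n)
X = np.column_stack([al, pb, zn, cu])

# PCA by SVD of the autoscaled (mean-centred, unit-variance) data
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(np.round(explained, 3))                # the first component carries most of the variance
scores = U * s                               # sample scores on the principal components
```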

All reagents and solvents that are used to prepare the sample for analysis should be ultrapure to prevent contamination of the sample with impurities. Plastic ware should be avoided since these materials may contain ultratrace elements that can be leached into the analyte solutions. Chemically cleaned glassware is recommended for all sample preparation procedures. Liquid samples can be analyzed directly or after dilution when the concentrations are too high. Remember, all analytical errors are multiplied by dilution factors; therefore, using atomic spectroscopy to determine high concentrations of elements may be less accurate than classical gravimetric methods. [Pg.247]
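
A small numerical illustration of the point about dilution factors: the relative error of the measurement is unchanged, but the absolute error in the reported concentration is multiplied by the dilution factor. The values are hypothetical.

```python
# Illustration: the absolute error in the reported concentration scales with the dilution factor
measured = 2.05          # mg/L found in the diluted solution (hypothetical)
abs_error = 0.02         # mg/L absolute uncertainty of that measurement
dilution_factor = 100    # 1:100 dilution before analysis

reported = measured * dilution_factor
reported_error = abs_error * dilution_factor     # relative error unchanged, absolute error x100
print(f"{reported:.0f} +/- {reported_error:.0f} mg/L "
      f"({100 * abs_error / measured:.1f}% relative)")
```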

There are various other ways of examining the variate in question in this case. Let us first examine a simple one-way ANOVA of the variate by sex, as in Table 16.16. In neither of the two cases was there any indication of significant treatment differences at any reasonable level. Because the two sexes did not show any pretreatment differences based on the two-factor analysis of the covariate, let us combine the two sexes and analyze the data by one-way ANOVA, as in Table 16.17. In this case, because of the increased sample sizes from combining the two sexes, there was an indication of some treatment differences (p = 0.0454). Unfortunately, this analysis assumes that because there were no pretreatment differences between the two sexes, that pattern will hold during the posttreatment period. That may often not be the case, for biological reasons. [Pg.626]
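
A minimal sketch of a one-way ANOVA on pooled (sexes-combined) data using scipy.stats.f_oneway; the group means, spreads and sample sizes below are invented and not taken from Table 16.17.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical responses, sexes pooled, three treatment groups
control = rng.normal(10.0, 2.0, 20)
low     = rng.normal(10.5, 2.0, 20)
high    = rng.normal(8.5, 2.0, 20)

f_stat, p_value = stats.f_oneway(control, low, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 would indicate treatment differences
```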

On the other hand, factor analysis involves other manipulations of the eigenvectors and aims to gain insight into the structure of a multidimensional data set. The use of this technique was first proposed for biological structure-activity relationships (SAR) and illustrated with an analysis of the activities of 21 diphenylaminopropanol derivatives in 11 biological tests [116-119, 289]. This method has been more commonly used to determine the intrinsic dimensionality of certain experimentally determined chemical properties, i.e., the number of fundamental factors required to account for the variance. One of the best FA techniques is the Q-mode, which groups a multivariate data set according to the data structure defined by the similarity between samples [1, 313-316]. It is devoted exclusively to the interpretation of the inter-object relationships in a data set, rather than to the inter-variable (or covariance) relationships explored with R-mode factor analysis. The measure of similarity used is the cosine theta matrix, i.e., the matrix whose elements are the cosines of the angles between all sample pairs [1, 313-316]. [Pg.269]
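
A minimal sketch of the cosine-theta similarity matrix used in Q-mode factor analysis: the cosine of the angle between every pair of sample vectors. The data matrix here is synthetic.

```python
import numpy as np

def cosine_theta_matrix(X):
    """Matrix of cos(theta) between all pairs of sample (row) vectors, as used in Q-mode FA."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / norms                      # normalize each sample vector to unit length
    return Xn @ Xn.T

rng = np.random.default_rng(3)
X = rng.random((5, 12))                 # 5 samples x 12 measured variables (synthetic)
C = cosine_theta_matrix(X)
print(np.round(C, 2))                   # symmetric, with ones on the diagonal
```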

The thickness of pharmaceutical tablet coatings was predicted using target factor analysis (TFA) applied to Raman spectra collected with a 532-nm laser, where the samples were photobleached in a controlled manner before spectra were acquired. The authors acknowledge numerous issues that limit the direct applicability of this approach to process control. These include potential damage or alteration of the samples from photobleaching, laser wavelength selection, and data acquisition time. However, most of the issues raised relate to the hardware selected for a particular implementation and do not diminish the demonstration [286]. [Pg.230]

Multiway methods: For analyzer data where a single sample generates a second-order array (e.g., GC/MS, LC/UV, excitation/emission fluorescence), multiway chemometric modeling methods such as PARAFAC (parallel factor analysis) [121,122] can be used to exploit the second-order advantage to perform effective calibration transfer and instrument standardization. [Pg.430]
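
A hedged sketch of PARAFAC on a synthetic excitation/emission/sample array, assuming the tensorly library and its parafac decomposition are available; the array sizes, noise level and rank are illustrative, and real calibration-transfer work would involve additional steps.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(4)
# Synthetic trilinear data: 8 samples x 30 excitation x 40 emission channels, 2 fluorophores
conc = rng.random((8, 2))                          # concentrations (sample mode)
excit = np.abs(rng.standard_normal((30, 2)))       # excitation profiles
emiss = np.abs(rng.standard_normal((40, 2)))       # emission profiles
X = np.einsum('ir,jr,kr->ijk', conc, excit, emiss)
X += 0.01 * rng.standard_normal(X.shape)           # measurement noise

weights, factors = parafac(tl.tensor(X), rank=2)
sample_mode, excit_mode, emiss_mode = factors      # recovered profiles (up to scaling/ordering)
print(sample_mode.shape, excit_mode.shape, emiss_mode.shape)   # (8, 2) (30, 2) (40, 2)
```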

After determining the underlying factors which affect local precipitation composition at an individual site, an analysis of the similarity of factors between different sites can provide valuable information about the regional character of precipitation and its sources of variability over that spatial scale. SIMCA ( ) is a classification method that performs principal component factor analysis for individual classes (sites) and then classifies samples by calculating the distance from each sample to the PCA model that describes the precipitation character at each site. The percentage of samples correctly classified by the PCA models provides an indication of the separability of the data by sites and, therefore, the uniqueness of the precipitation at a site as modeled by PCA. [Pg.37]
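
A simplified, SIMCA-style sketch: one PCA model is fitted per site, and a sample is assigned to the site whose model reconstructs it with the smallest residual distance. This omits SIMCA's formal class-distance statistics and critical limits; the sites, ion variables and data are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_site_models(site_data, n_components=2):
    """Fit one PCA model per site (class) on autoscaled data."""
    models = {}
    for site, X in site_data.items():
        mean, std = X.mean(axis=0), X.std(axis=0, ddof=1)
        pca = PCA(n_components=n_components).fit((X - mean) / std)
        models[site] = (mean, std, pca)
    return models

def classify(sample, models):
    """Assign the sample to the site whose PCA model gives the smallest residual distance."""
    distances = {}
    for site, (mean, std, pca) in models.items():
        z = (sample - mean) / std
        recon = pca.inverse_transform(pca.transform(z.reshape(1, -1)))
        distances[site] = np.linalg.norm(z - recon)
    return min(distances, key=distances.get), distances

rng = np.random.default_rng(5)
site_data = {                       # synthetic ion concentrations for two precipitation sites
    "coastal": rng.normal([5, 2, 1, 0.5], 0.3, size=(25, 4)),
    "inland":  rng.normal([1, 3, 2, 1.5], 0.3, size=(25, 4)),
}
models = fit_site_models(site_data)
label, dists = classify(np.array([4.8, 2.1, 1.1, 0.6]), models)
print(label)                        # expected: "coastal"
```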

WA water quality labs by atomic absorption and autoanalyzer techniques. Charge balance calculations indicated that all dissolved species of significance were analyzed. Comparison of filtered and unfiltered aliquots suggested that un-ionized species were not present in appreciable quantities. Sampling and analysis uncertainties were determined by the operation of two co-located samplers for 16 weeks. The calcium and sulfate data were corrected for the influence of sea salt to aid in the separation of the factors. This correction was calculated from bulk sea water composition and the chloride concentration in rainwater (11). Non-seasalt sulfate and calcium are termed "excess" and flagged by a ... [Pg.38]

The basic model of the factor analysis method as applied here assumes that the x-ray emission intensity of any specified element is a linear sum of the quantity of that element found in the minerals present at that sample location ... [Pg.57]
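
A sketch of that linear-sum model: the observed elemental x-ray intensities are expressed as a mineral-profile matrix times a vector of mineral quantities, and the quantities are recovered by non-negative least squares. The mineral profiles and amounts are hypothetical, and NNLS is one possible estimator rather than the method used in the original work.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: per-unit elemental x-ray intensities of three reference minerals (hypothetical)
# Rows: elements (e.g. Si, Al, Ca, Fe)
mineral_profiles = np.array([
    [10.0, 4.0, 1.0],    # Si
    [ 2.0, 6.0, 0.5],    # Al
    [ 0.5, 1.0, 8.0],    # Ca
    [ 1.0, 3.0, 2.0],    # Fe
])

true_amounts = np.array([2.0, 1.0, 0.5])                   # mineral quantities at this location
observed = mineral_profiles @ true_amounts                 # the linear-sum model for intensities
observed += 0.05 * np.random.default_rng(6).standard_normal(4)   # measurement noise

amounts, residual = nnls(mineral_profiles, observed)       # non-negative least-squares estimate
print(np.round(amounts, 2))                                # close to [2.0, 1.0, 0.5]
```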

A commonly used method of sampling and analysis for volatile organic compounds in ambient air is concentration of the compounds on a solid sorbent such as Tenax, followed by thermal desorption and GC/MS analysis of the collected compounds. The analysis phase, although not trivial, can be done well if proper care is taken. However, the sampling phase of this process apparently introduces artifacts and unusual results due to, as yet, unknown factors. A method to detect some sampling problems has been proposed and tested (7). This distributed air volume method requires a set of samples of different air volumes to be collected at different flow rates over the same time period at the sampling location. Each pollutant concentration for the samples should be equal within experimental error, since the same parcel of air is sampled in each case. Differences in results for the same pollutant in the various samples indicate sampling problems. [Pg.113]
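
A minimal sketch of the distributed-air-volume consistency test: concentrations of the same pollutant from co-timed samples of different volumes should agree within experimental error, and a larger spread flags a possible sampling artifact. The acceptance rule (two sigma on a pooled uncertainty) and the numbers are assumptions.

```python
import numpy as np

def dav_check(concentrations, uncertainties, n_sigma=2.0):
    """Distributed-air-volume check: flag a pollutant whose concentrations from
    samples of different collected volumes disagree by more than the experimental error."""
    c = np.asarray(concentrations, float)
    u = np.asarray(uncertainties, float)
    spread = c.max() - c.min()
    allowed = n_sigma * np.sqrt(np.sum(u**2) / len(u))   # rough pooled uncertainty
    return spread <= allowed, spread, allowed

# Hypothetical benzene results (ng/m3) from three co-timed samples of 2, 5 and 10 L
ok, spread, allowed = dav_check([41.0, 44.0, 68.0], [3.0, 3.0, 3.0])
print("consistent" if ok else "possible sampling artifact",
      round(spread, 1), round(allowed, 1))
```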

Cluster analysis is used to determine the particle types that occur in an aerosol. These types are used to classify the particles in samples collected from various locations and sampling periods. The results of the sample classifications, together with meteorological data and bulk analytical data from methods such as instrumental neutron activation analysis (INAA), are used to study emission patterns and to screen samples for further study. The classification results are used in factor analysis to characterize spatial and temporal structure and to aid in source attribution. The classification results are also used in mass balance comparisons between ASEM and bulk chemical analyses. Such comparisons allow the combined use of the detailed characterizations of the individual-particle analyses and the trace-element capability of bulk analytical methods. [Pg.119]
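
A compact sketch of the particle-typing step using k-means clustering (one possible choice of cluster-analysis method) on synthetic single-particle compositions, followed by a count of particle types per sample.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Synthetic single-particle compositions (fractions of S, Si, Fe) from two samples
soil_like    = rng.normal([0.05, 0.70, 0.15], 0.03, size=(60, 3))
sulfate_like = rng.normal([0.80, 0.05, 0.02], 0.03, size=(40, 3))
particles = np.vstack([soil_like, sulfate_like])
sample_id = np.array([0] * 50 + [1] * 10 + [0] * 15 + [1] * 25)   # sample each particle came from

types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(particles)

# Abundance of each particle type in each sample (rows: samples, columns: types)
abundance = np.zeros((2, 2), int)
for s, t in zip(sample_id, types):
    abundance[s, t] += 1
print(abundance)
```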

