Big Chemical Encyclopedia


Data-reduction techniques

Although a spectrum contains hundreds of data points, the dimensionality of the data is not generally as great as this. By dimensionality, we mean the number of inherent variables which comprise a data set. Take a mixture containing five components, say. We can vary the concentration of each component between certain limits, and record spectra of any combination we choose. Each spectrum contains, perhaps, a thousand data points. However, there can be no more than five genuine variables, as this is the number of components (ignoring any interactions between them in the mixture). The object of data reduction is to extract these five variables, which we can visualise at this stage as the pure-component spectra. [Pg.290]
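The point about dimensionality can be illustrated numerically. The sketch below uses synthetic, hypothetical data (random vectors standing in for real pure-component spectra): despite 1000 points per spectrum, the rank of the data matrix recovers the five inherent variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five hypothetical pure-component "spectra", each with 1000 data points
n_components, n_points, n_mixtures = 5, 1000, 50
pure_spectra = rng.random((n_components, n_points))
concentrations = rng.random((n_mixtures, n_components))

# Beer-Lambert-style linear mixing: each row is one recorded mixture spectrum
X = concentrations @ pure_spectra

# Despite the 1000 points per spectrum, the data occupy only five dimensions
print(np.linalg.matrix_rank(X))  # 5
```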

Principal components analysis, factor analysis, singular value decomposition and related techniques are all used in data reduction. The aim of the overall process is to reduce the data set X to the product of two matrices, T and P, plus a residual error matrix E: X = TPᵀ + E. [Pg.290]

The so-called factor loadings P are calculated from the data matrix X by singular value decomposition; T is called the scores matrix. The inner workings of this algorithm are not of interest here. Suffice it to say that, if X has n variables, P will contain fewer than n linear combinations of variables which yet retain the information present in the original data. [Pg.290]
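A minimal sketch of this decomposition on synthetic data, with NumPy's SVD standing in for whatever algorithm a given package uses:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((20, 8))            # 20 samples, n = 8 variables
Xc = X - X.mean(axis=0)            # mean-centring, usual before PCA

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                              # keep fewer than n linear combinations
T = U[:, :k] * s[:k]               # scores matrix
P = Vt[:k].T                       # loadings matrix
E = Xc - T @ P.T                   # residual error matrix

# The centred data are recovered exactly as T P' + E; E carries only the
# variance discarded with the minor components
print(np.allclose(Xc, T @ P.T + E))  # True
```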

In calibration, where a matrix of dependent data, Y (e.g. concentration), is related to a matrix of independent data, X (e.g. absorbances), the requirement is to relate the scores of Y to the scores of X  [Pg.290]
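The inner relation can be sketched for a single latent variable. The following is a bare-bones one-component PLS step on synthetic data, not any particular package's implementation: the scores t of X are chosen for maximal covariance with y, and y is then regressed on t rather than on the raw absorbances.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 30, 200
S = rng.random((2, n_wavelengths))        # two hypothetical pure spectra
C = rng.random((n_samples, 2))            # concentrations
X = C @ S + 1e-3 * rng.standard_normal((n_samples, n_wavelengths))
y = C[:, 0]                               # calibrate for component 1

Xc, yc = X - X.mean(axis=0), y - y.mean()

w = Xc.T @ yc
w /= np.linalg.norm(w)                    # weight vector, maximal covariance
t = Xc @ w                                # scores of X
q = (t @ yc) / (t @ t)                    # inner relation: regress y on t
y_pred = t * q + y.mean()                 # prediction via the scores
```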

Decomposition of data matrices into principal components has other advantages. Relationships within the set of variables, or within the set of samples, become clear. Scores may be depicted as points on a graph of orthogonal principal components, where clustering of samples reveals the groupings present in the data. [Pg.290]


In the past these tests were rather qualitative, and one of their chief disadvantages was the lack of knowledge of the pressure transmitted to the acceptor. With the advent of calibration, however, the significance of gap tests was greatly increased. After briefly discussing the work on calibration done by various scientists between 1949 and 1965, Liddiard and Price stated that the purpose of their work was to use the improved experimental and data reduction techniques developed in the few years prior to 1965 in order to obtain a calibration with the best data available. The report describes two test assemblies, the "NOL Standardized Gap Test" [Fig 1(A)] and the "Modified Gap Test" [Fig 1(B)]. The "Standardized Test", also known as the "LSGT" (Large Scale Gap Test), is described in Refs 48 and 54. For a description of the "NOL Modified Test", see Refs 59 and 68. Another modification developed at NOL is described in Ref 52. [Pg.326]

Helioseismic waves are detected by measuring the Doppler shift of lines in the solar spectrum due to vertical motion of the Sun's surface along the line of sight. With appropriate data-reduction techniques, the frequencies of global oscillations can be determined to an accuracy of 5 ppm. This extreme accuracy requires long-term, continuous observations that are best done by spacecraft such as the joint ESA/NASA SOHO spacecraft, which observes the Sun from the Lagrangian point where the Earth's gravity balances that of the Sun. [Pg.94]

Multivariate calibration models are often built on an underdetermined data set, that is, more wavelengths than samples. The use of powerful data reduction techniques, such as PCR and PLS, makes assessing the model validity an extremely important aspect of the analysis procedure. Here, we present four important criteria on which to judge the validity of results from multivariate calibration. [Pg.340]
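One such criterion, predictive ability under cross-validation, can be sketched as follows. The data are synthetic and noise-free (more wavelengths than samples, as above), so the near-zero cross-validation error only confirms that the sketch is internally consistent; real data behave far less kindly.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 15, 100     # underdetermined: more wavelengths
C = rng.random((n_samples, 3))         # than samples
S = rng.random((3, n_wavelengths))
X = C @ S
y = C @ np.array([1.0, 0.5, -0.3])     # property linear in concentrations

def pcr_predict(X_tr, y_tr, x_new, n_pc):
    """Principal-component regression prediction for one new sample."""
    mu = X_tr.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_tr - mu, full_matrices=False)
    T = U[:, :n_pc] * s[:n_pc]         # training scores
    b, *_ = np.linalg.lstsq(T, y_tr - y_tr.mean(), rcond=None)
    t_new = (x_new - mu) @ Vt[:n_pc].T  # scores of the left-out sample
    return float(t_new @ b + y_tr.mean())

# Leave-one-out cross-validation
errors = [y[i] - pcr_predict(X[np.arange(n_samples) != i],
                             y[np.arange(n_samples) != i], X[i], n_pc=3)
          for i in range(n_samples)]
rmsecv = float(np.sqrt(np.mean(np.square(errors))))
print(rmsecv < 1e-6)  # True for this noise-free rank-3 example
```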

Empirical approaches are useful when macroscale HRR measurements are available but little or no information is available regarding the thermophysical properties, kinetic parameters, and heats of reaction that would be necessary to apply a more comprehensive pyrolysis model. Although these modeling approaches are crude in comparison with some of the more refined solid-phase treatments, one advantage is that all required input parameters can be obtained from widely used bench-scale fire tests using well-established data reduction techniques. As greater levels of complexity are added, establishing the required input parameters (or "material properties") for different materials becomes an onerous task. [Pg.565]

All experimental data from the literature were entered into a spreadsheet program in the units of the original source. Different sources of data using the same experimental technique were entered into copies of the appropriate spreadsheet to ensure that the same data reduction technique was applied. [Pg.92]

The inherent difficulty in the measurement of the complex dynamic moduli of viscoelastic materials is emphasized by the results of this paper. The agreement among the shifted modulus data as measured by different systems is limited by several difficulties: (1) measurement inaccuracies of the instruments; (2) differences in the data reduction techniques used to apply the time-temperature superposition principle, and propagation of shift-curve errors; and (3) nonuniformity of the test samples. [Pg.60]
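As an illustration of item (2), shift factors are often computed from the WLF equation. The sketch below uses the so-called universal WLF constants, an assumption made here for illustration only, not values taken from this paper.

```python
# WLF equation: log10(aT) = -C1 (T - Tref) / (C2 + T - Tref)
C1, C2, T_REF = 17.44, 51.6, 373.0    # "universal" constants, Tref in K

def log_aT(T):
    """Decadic log of the time-temperature superposition shift factor."""
    return -C1 * (T - T_REF) / (C2 + (T - T_REF))

print(round(log_aT(393.0), 2))        # shift for a measurement 20 K above Tref
```

A small error in the reference temperature or the constants shifts every point on the master curve, which is one way the "propagation of shift-curve errors" mentioned above arises.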

In addition to providing the capability of using any one of four data reduction techniques, the computer has the advantage of storing the data on magnetic tape, where it is available to be... [Pg.349]

A large number of substituent descriptors have been reported in the literature. In order to use this information for substituent selection, appropriate statistical methods may be used. Pattern recognition or data reduction techniques, such as PCA or CA, are good choices. As explained in more detail in Section III.B.3, PCA consists of condensing the information in a data table into a few new descriptors made of linear combinations of the original ones. These new descriptors are called PCs or latent variables. This technique has been applied to define new descriptors for amino acids, as well as for aromatic or aliphatic substituents, which are called principal properties (PPs). These PPs can be used in FD methods or as variables in QSAR analysis. [Pg.505]

An iterative method for the calculation of E and n using computer data reduction techniques was described by Reich and Stivala (127); various other algorithms (128-133, 137-139) and graphical methods for determining the reaction mechanism (134-136) have also been reported. [Pg.69]

Most of the commercial thermobalances now available use computer data reduction techniques to process the raw TG data (see Chapter 12). A dedicated microcomputer system plots the resultant data using a plotter or dot-matrix printer. Scaling and offset of the curve can be carried out as well as other mathematical operations such as differentiation, curve peak integration, and so on. [Pg.109]
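The differentiation step mentioned above can be sketched on a synthetic TG curve (a hypothetical single-step mass loss; np.gradient stands in for whatever smoothing differentiator a given instrument applies). The minimum of the derivative marks the temperature of maximum mass-loss rate.

```python
import numpy as np

# Synthetic TG curve: a single sigmoidal mass-loss step centred at 600 K
T = np.linspace(300.0, 900.0, 601)                         # temperature / K
mass = 100.0 - 40.0 / (1.0 + np.exp(-(T - 600.0) / 20.0))  # mass / %

dtg = np.gradient(mass, T)          # DTG curve: d(mass)/dT
T_peak = float(T[np.argmin(dtg)])   # temperature of maximum mass-loss rate
print(T_peak)  # 600.0
```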

Holmes, E., J. K. Nicholson, A. W. Nicholls, J. C. Lindon, S. C. Connor, S. Polley, and J. Connelly. 1998. The identification of novel biomarkers of renal toxicity using automatic data reduction techniques and PCA of proton NMR spectra of urine. Chemometrics and Intelligent Laboratory Systems 44:245-255. [Pg.98]

Principal components analysis is a well-established multivariate statistical technique that can be used to identify correlations within large data sets and to reduce the number of dimensions required to display the variation within the data. A new set of axes, the principal components (PCs), is constructed, each of which accounts for the maximum variation not accounted for by previous principal components. Thus, a plot of the first two PCs displays the best two-dimensional representation of the total variance within the data. With pyrolysis mass spectra, principal components analysis is used essentially as a data reduction technique prior to performing canonical variates analysis, although information obtained from principal components plots can be used to identify atypical samples or outliers within the data and as a test for reproducibility. [Pg.56]
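The outlier-screening use of the scores can be sketched on synthetic data, with sample 0 deliberately made atypical: it lands far from the rest of the samples in the PC1/PC2 plane.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(20, 50))       # 20 samples, 50 "spectral" intensities
X[0] += 25.0                        # sample 0 made grossly atypical

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]           # coordinates on the first two PCs

# Distance from the origin of the scores plot (scores are mean-centred)
dist = np.linalg.norm(scores, axis=1)
print(int(np.argmax(dist)))  # 0 — the atypical sample stands out
```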

After publication of a paper by Nicander et al. (1996) in which it was demonstrated that skin reactions elicited by three irritants of different polarity created three different histologic patterns and that each pattern could be correlated to corresponding patterns in the impedance indices, the Ollmar group has taken steps away from the data reduction technique based on the four indices to extract more information from the original impedance spectra. However, the indices are still useful for quantification of various aspects of responses to treatment or test substances, an example of which is given by Emtestam et al. (2007). [Pg.428]

Jeong, M. K., Lu, J.-C., Huo, X., Vidakovic, B. & Chen, D. 2006. Wavelet-based data reduction techniques for process fault detection. Technometrics 48(1), 26-40. [Pg.823]

Equation 1 assumes that the shear stress at the interface is constant as a result of complete interfacial debonding. With good adhesion, only partial debonding or other micro-mechanical events such as transverse matrix cracking are observed, which invalidate the assumption of a constant interfacial shear stress. As a result, alternative data reduction techniques have been developed. For example, Tripathi and Jones developed the cumulative stress-transfer function, which deals with the limitations given above. This has been further refined by Lopattananon et al. into the stress-transfer efficiency, from which an ineffective length of that fibre in that resin can be determined. In this model, the matrix properties and frictional adhesion at debonds can be included in the analysis. It is also possible to use the three-phase stress-transfer model of Wu et al. to include the properties of an interphase. [Pg.174]

A procedure based on PC loadings and their absolute values in data reduction techniques has been proposed. This procedure has been successfully used for the study of the effect of carboxymethyl-β-cyclodextrin on the hydrophobicity parameters of steroidal drugs measured by TLC, and for the assessment of the binding characteristics of environmental pollutants to the wheat protein gliadin, investigated by HPLC. [Pg.411]

Methods such as PCA (see Section 4.2) and factor analysis (FA) (see Section 5.3) are data-reduction techniques which result in the creation of new variables from linear combinations of the original variables. These new variables have an important quality, orthogonality, which makes them particularly suitable for use in the construction of regression models. They are also sorted in order of importance, in so far as the amount of variance explained decreases with each successive component. [Pg.149]
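Both properties, orthogonality and ordering by variance, can be verified directly on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((25, 6))
Xc = X - X.mean(axis=0)

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = U * s                           # the new variables (component scores)

cross = T.T @ T                     # cross-products between new variables
print(np.allclose(cross, np.diag(np.diag(cross))))  # True: orthogonal
print(bool(np.all(np.diff(s) <= 0)))                # True: sorted by variance
```

The diagonal cross-product matrix is why these variables cause no collinearity problems when used as regressors.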

DATA REDUCTION. The data-reduction technique is more or less standard. A strip-chart record, for instance, would be reduced by first determining the span that resulted from the secondary calibration. At any point of interest in the ...

The basic instrument is a neutron-sensitive pulse-counting system with automatic readout. Differences from similar systems used elsewhere appear to lie in the data-reduction technique and in the optimization of experimental conditions. [Pg.33]

