Big Chemical Encyclopedia


Factorial principal component analysis

Figure 3. First factorial plane of the principal component analysis, showing the effluent coordinates in this plane. The inner graph shows the percentages of the eigenvalues of this analysis, corresponding to the share of the global variance carried by each factor.
Factorial methods: factor analysis (FA), principal components analysis (PCA), partial least squares modeling (PLS), canonical correlation analysis; finding factors (causal complexes)... [Pg.7]

The vector of means x̄ = (x̄1, x̄2, ..., x̄p), the vector of standard deviations s = (s1, s2, ..., sp), the covariance matrix S = (sij), and the correlation matrix R = (rij) can be calculated. For this data matrix, the most used non-supervised methods are Principal Components Analysis (PCA) and/or Factorial Analysis (FA), in an attempt to reduce the dimensions of the data and study the interrelations between variables and observations, and Cluster Analysis (CA), to search for clusters of observations or variables (Krzanowski 1988; Cela 1994; Afifi and Clark 1996). Before applying these techniques, the variables are usually first standardised to a mean of 0 and unit variance. [Pg.694]
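As a minimal sketch of the quantities named above (the data matrix and values here are hypothetical, not from the source), the mean vector, standard-deviation vector, covariance and correlation matrices, and the standardised matrix can be computed with NumPy:

```python
import numpy as np

# Hypothetical data matrix X: n = 5 observations (rows) x p = 3 variables (columns).
X = np.array([
    [2.0, 10.0, 0.5],
    [4.0, 12.0, 0.7],
    [6.0,  9.0, 0.4],
    [8.0, 14.0, 0.9],
    [10.0, 15.0, 1.0],
])

x_bar = X.mean(axis=0)            # vector of means (x̄1, ..., x̄p)
s = X.std(axis=0, ddof=1)         # vector of standard deviations (s1, ..., sp)
S = np.cov(X, rowvar=False)       # covariance matrix S = (s_ij)
R = np.corrcoef(X, rowvar=False)  # correlation matrix R = (r_ij)

# Standardise each column to mean 0 and unit variance before PCA/FA/CA.
X_std = (X - x_bar) / s
```

After standardisation, PCA on `X_std` is equivalent to PCA on the correlation matrix `R`.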

Figure 1. Principal Components Analysis (PCA) and Factorial Discriminating Analysis (FDA) of Ravensara aromatica essential oils (26).
A large number of substituent descriptors have been reported in the literature. To use this information for substituent selection, appropriate statistical methods may be applied. Pattern recognition or data reduction techniques, such as principal component analysis (PCA) or cluster analysis (CA), are good choices. As explained in Section V in more detail, PCA consists of condensing the information in a data table into a few new descriptors made of linear combinations of the original ones. These new descriptors are called principal components or latent variables. This technique has been applied to define new descriptors for amino acids, as well as for aromatic or aliphatic substituents, which are called principal properties (PPs). The principal properties can be used in factorial design methods or as variables in QSAR analysis. [Pg.357]
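The condensation step described above can be sketched as follows; this is a hedged illustration (the descriptor table is simulated, not taken from the source), showing how a few components built as linear combinations of the original descriptors capture most of the variance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical substituent descriptor table: 20 substituents x 6 correlated
# descriptors, driven by 2 underlying "principal properties" plus small noise.
base = rng.normal(size=(20, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 4))]) + 0.05 * rng.normal(size=(20, 6))

# Autoscale, then extract principal components from the SVD of the scaled matrix.
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, sv, Vt = np.linalg.svd(Xs, full_matrices=False)

scores = U * sv                    # component scores (candidate principal properties)
loadings = Vt.T                    # linear combinations of the original descriptors
explained = sv**2 / np.sum(sv**2)  # fraction of total variance per component
```

Because the six descriptors are generated from two latent variables, the first two components account for nearly all of the variance, which is exactly the data reduction PCA is used for here.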

Nevertheless, the data can be seen another way. If we consider the poles as meta-descriptors - for the taste of water, Volvic could be a "metallic and bitter" descriptor, Evian "tasteless and cool", and Vittel "salty and astringent" - the data can be encoded this way, from 0 for "totally different taste" to 10 for "same taste". We now consider the intensities of each sample on several descriptors, and classical factorial analyses such as Principal Component Analysis (PCA), Multiple Factorial Analysis (MFA), Statis, or Generalized Procrustes Analysis (GPA) can be applied (Fig. 10.3). [Pg.218]

Many other methods have been put forth in the literature, including K-nearest neighbor (Ref. 42), cluster analysis (Ref. 11), principal component analysis (PCA) combined with factorial discriminant analysis (Refs. 43, 44), SIMCA (Ref. 36), and BEAST (Refs. 37-40). [Pg.170]

Downey et al. (14) used a statistical approach to classify commercial skim milk powders according to heat treatment. They used 66 samples of commercially produced skim milk powder, including high-heat, medium-heat, and low-heat powders. Principal component analysis (PCA) was applied to the normalized spectral data, with the wavelengths used as principal variables and the class values as supplementary variables. Factorial discriminant analysis (FDA) was then performed on the PC scores. Ten components were needed to correctly classify all samples in the calibration development set; 91% of those in the evaluation set were correctly identified. Three samples of the medium-heat class were incorrectly classified, but the authors pointed out difficulties in the exact definition of the heat treatment classes, particularly the medium-heat class. [Pg.332]

First formulated by Pearson in 1901, PCA was outlined by Fisher and MacKenzie in 1923 and by H. Wold in 1966, who discovered the NIPALS algorithm (see Wold and references therein). PCA is also called factorial analysis (FA), singular value decomposition (SVD, which is the full PCA), or Karhunen-Loève expansion (KLE). Data reduction by PCA is of key importance in CoMFA because it allows large amounts of data to be approximated by a small mathematical structure. In PCA, the X matrix of a given training set (e.g., a CoMFA field) is assumed to include a model and noise (the part of the data that cannot be explained by the model). The X matrix is thus a combination of the principal component model matrix M and the noise matrix E: X = M + E. [Pg.151]
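A compact sketch of the NIPALS algorithm mentioned above is given below. It is a hedged illustration (the data are random; convergence settings are assumptions): each component's score vector t and loading vector p are refined by alternating regressions, and the matrix is deflated before extracting the next component, so that M = T Pᵀ and E is what remains:

```python
import numpy as np

def nipals(X, n_comp, tol=1e-12, max_iter=2000):
    """Extract principal components one at a time by NIPALS deflation."""
    X = X - X.mean(axis=0)              # centre the training-set matrix
    T = np.zeros((X.shape[0], n_comp))  # scores
    P = np.zeros((X.shape[1], n_comp))  # loadings
    for k in range(n_comp):
        # Start from the column with the highest remaining variance.
        t = X[:, np.argmax(X.var(axis=0))].copy()
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)       # regress columns of X on t
            p /= np.linalg.norm(p)      # normalise the loading vector
            t_new = X @ p               # regress rows of X on p
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        T[:, k], P[:, k] = t, p
        X = X - np.outer(t, p)          # deflate: remove the modelled part
    return T, P                          # model M = T @ P.T, residual E = X - M

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 8))
T, P = nipals(X, 3)
```

Because components are extracted one at a time, NIPALS is convenient when only the first few components of a large matrix are needed, which is the situation in CoMFA-style data reduction.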

