
PCA Decomposition

Mathematically, PCA relies upon eigenvector decomposition of the covariance or correlation matrix of the original process variables. For a given data matrix X with m rows (data points) and n columns (variables), the covariance or correlation matrix R of X is defined as

$$R = \frac{X^\top X}{m-1} \qquad (22.2)$$

with the columns of X mean-centred. [Pg.306]

If the columns of X have been autoscaled, i.e., the mean subtracted from each column and each column divided by its standard deviation, then Eqn (22.2) is the correlation matrix of X. [Pg.306]

PCA now decomposes the data matrix X into a sum of outer products of so-called score vectors $t_i$ and so-called loading vectors $p_i$, with a residual error E:

$$X = \sum_{i=1}^{k} t_i p_i^\top + E = TP^\top + E$$

[Pg.306]

Using singular value decomposition, the covariance matrix can be decomposed into

$$R = P \Lambda P^\top$$

where the columns of P are the eigenvectors $p_i$ and $\Lambda$ is the diagonal matrix of eigenvalues $\lambda_i$. [Pg.306]

Because the $p_i$ vectors are the orthonormal eigenvectors of the covariance matrix R, it can also be written that

$$R p_i = \lambda_i p_i$$

[Pg.306]
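The following is a minimal NumPy sketch of this eigendecomposition route to PCA; the simulated data, all variable names, and the choice of two retained components are illustrative assumptions, not from the source.

```python
# A minimal sketch of PCA via eigendecomposition of the correlation matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))                 # m = 50 data points, n = 4 variables

# Autoscale: subtract each column's mean and divide by its standard deviation.
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Correlation matrix of X (Eqn 22.2 applied to autoscaled data).
R = Xs.T @ Xs / (X.shape[0] - 1)

# Eigendecomposition: the columns of P are the orthonormal loadings p_i.
eigvals, P = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
eigvals, P = eigvals[order], P[:, order]

# Scores and residual for k retained components: Xs ≈ T_k P_k^T + E.
k = 2
T_k, P_k = Xs @ P[:, :k], P[:, :k]
E = Xs - T_k @ P_k.T                         # residual error matrix
```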


Fig. 2 PCA decomposition of the data matrix D following a bilinear model for a number of components N = 3.
According to the magnitude of the retained variance and the contribution that the original variables make to each component, the environmental meaning of the identified components can be deduced, and the approximate level of error contained in the experimental data can also be determined. In this context, the display of the scores (matrix X) and loadings (matrix Y^T) obtained from the PCA decomposition of the original data matrix D is extremely useful. [Pg.341]
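As a hedged illustration of such displays, the sketch below computes a PCA of simulated data and plots the first two score and loading vectors with matplotlib; the data and variable labels are assumptions for illustration only.

```python
# A sketch of score and loading displays for a PCA decomposition.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # autoscale

# PCA via SVD: U*s are the scores, columns of Vt.T are the loadings.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores, loadings = U * s, Vt.T

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.scatter(scores[:, 0], scores[:, 1])             # samples in PC space
ax1.set_xlabel("PC1 score"); ax1.set_ylabel("PC2 score")

for j, (x, y) in enumerate(loadings[:, :2]):        # variable contributions
    ax2.arrow(0.0, 0.0, x, y, head_width=0.02)
    ax2.annotate(f"var{j+1}", (x, y))
ax2.set_xlabel("PC1 loading"); ax2.set_ylabel("PC2 loading")
plt.show()
```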

Martin et al. [102] reported a study in which LIBS was applied for the first time to wood-based materials in which metal-containing preservatives had to be determined. They applied PLS-1 and PLS-2 (the latter because of the interdependence of the analytes) to multiplicative scatter-corrected data (a data pretreatment option of most use when diffuse radiation is employed to obtain spectra). The authors studied the loadings of a PCA decomposition to identify the main chemical features that grouped samples. Unfortunately, they did not extend the study to the PLS factors. However, they analysed the regression coefficients to determine the most important variables for some predictive models. [Pg.235]

WFA starts with the PCA decomposition of the D matrix, giving the product of scores and loadings, TP^T. In general, the D matrix will have n components, i.e., rank n. The location of the concentration windows for each component is determined using EFA (see Figure 11.4b) or other methods. Steps 3 to 5 are the core of the WFA method and should be performed as many times as there are compounds present in matrix D, to recover the concentration profiles of the C matrix one at a time. [Pg.428]

ITTFA starts by calculating a PCA model of the original data matrix, D. There is a formal analogy between the PCA decomposition, i.e., D = TP^T, and the CR decomposition, i.e., D = CS^T, of a data matrix. The scores matrix, T, and the loadings matrix, P^T, span the same data space as the C and S^T matrices; thus, their profiles can be described as abstract concentration profiles and abstract spectra, respectively. This means that any real concentration profile of C belongs to the score space and can be described as a linear combination of the abstract concentration profiles in the T matrix. [Pg.438]
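Below is a minimal sketch of the ITTFA idea just stated: a trial ("needle") concentration profile is alternately projected onto the PCA score space and constrained to be non-negative, converging toward a real concentration profile. The simulated data, the needle start, and the iteration count are illustrative assumptions, not published algorithm settings.

```python
# A sketch of iterative target transformation factor analysis (ITTFA).
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 40, 60, 2
C = np.stack([np.exp(-0.5 * ((np.arange(m) - mu) / 4.0) ** 2)
              for mu in (12, 25)], axis=1)      # two Gaussian elution profiles
S = rng.uniform(size=(k, n))                    # two pure spectra
D = C @ S + 0.001 * rng.normal(size=(m, n))     # D = C S + noise

# PCA of D: the scores T span the same space as the true C profiles.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
T = (U * s)[:, :k]
proj = T @ np.linalg.pinv(T)                    # projector onto the score space

c = np.zeros(m)
c[12] = 1.0                                     # needle at the suspected maximum
for _ in range(200):
    c = proj @ c                                # project into the score space
    c = np.clip(c, 0.0, None)                   # non-negativity constraint
    c /= max(c.max(), 1e-12)                    # normalise to unit maximum
# c now approximates the first column of C (up to scale).
```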

PCR and PLS are useful when the data matrix does not contain a full representation of the underlying model. The first step of PCR is the decomposition of the data matrix into latent variables through PCA; the dependent variable is then regressed onto the decomposed independent variables. PLS, however, performs a simultaneous and interdependent decomposition of both blocks, which is why PLS sometimes handles dependent variables better than PCR does. [Pg.169]

PLS is closely related to PCR. Both decompose the X-data into a smaller set of variables, i.e., the scores. However, they differ in how they relate the scores to the Y-data. In PCR, the scores from the PCA decomposition of the X-data are regressed onto the Y-data. In contrast, PLS decomposes both the Y- and the X-data into individual score and loading matrices. The orthogonal sets of scores for the X- and Y-data (T and U, respectively) are generated in a way that maximizes their covariance. This is an attractive feature, particularly in situations where not all the major sources of variability in X are correlated to the variability in Y. PLS attempts to find a different set of orthogonal scores for the X-data to give better predictions of the Y-data. Thus, the orthogonal vectors may yield a poorer representation of the X-data, while the scores may yield a better prediction of Y than would be possible with PCR. [Pg.36]
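The following scikit-learn sketch contrasts the two approaches on simulated data; the dataset, the component count, and the use of R² as the comparison metric are illustrative assumptions.

```python
# A comparison sketch of PCR (PCA scores regressed onto y) versus PLS
# (joint decomposition maximizing score covariance).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=60)

# PCR: regress y onto the scores from a PCA decomposition of the X-data.
pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)

# PLS: decompose X and y jointly so the X- and Y-scores have maximal covariance.
pls = PLSRegression(n_components=3).fit(X, y)

print("PCR R^2:", pcr.score(X, y))
print("PLS R^2:", pls.score(X, y))
```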

Fortunately, there are ways of determining the number of factors by looking at the eigenvalues of the PCA factors. One fact that has been left out in all the discussions of PCA is that the data are not broken down into just two sets of values (scores and factors) but rather into three. The third set of values is the eigenvalues. Due to the way the PCA decomposition is calculated, the scores and factors generally span a data range of ±1. If the scores and factors were the only representation of the data, all the principal component spectra would have the same relative intensities in the samples. Clearly, this is not the case; some components vary more strongly than others. The eigenvalues carry this magnitude information, each one scaling the contribution of its component to the data. [Pg.182]

Fig. 9. The calculated eigenvalues of a PCA decomposition of the spectral data in Fig. 4. Notice that the first eigenvalue is substantially larger than the rest, and that the values fall off rapidly at the higher-numbered factors.
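As a hedged illustration of this scree behaviour, the sketch below simulates three-component mixture spectra and prints the leading eigenvalues; all data are simulated assumptions, not the spectra of Fig. 4.

```python
# A sketch of inspecting PCA eigenvalues to choose the number of factors.
import numpy as np

rng = np.random.default_rng(2)
# Simulate 20 mixture spectra of 3 underlying components over 100 channels.
C = rng.uniform(size=(20, 3))                 # concentrations
S = rng.uniform(size=(3, 100))                # pure component spectra
D = C @ S + 0.01 * rng.normal(size=(20, 100)) # data = signal + noise

Dc = D - D.mean(axis=0)
eigvals = np.linalg.eigvalsh(Dc.T @ Dc)[::-1] # eigenvalues, decreasing

# The first three eigenvalues dominate; the rest fall off to the noise floor.
print(eigvals[:6])
```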
One common and robust ILS calibration method is principal components regression (PCR) [14,15]. The first step of PCR is factorization of the response matrix R with the principal component analysis (PCA) decomposition,

$$R = TP^\top + E$$

where T contains the scores, P^T the loadings, and E the residual error. [Pg.215]

One can view the PCA decomposition not only as "data structure + noise" but as "pertinent information + other structured variation + noise". Thus, establishing the number of components corresponding to the pertinent information, and hence establishing a PCA model, is useful. It has to be stressed that by retaining all components, no data reduction is achieved (except for the compression from the original number of variables to the mathematical rank) and noise is not estimated; only an orthogonal rotation of the variable space is accomplished. [Pg.88]
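The small NumPy check below illustrates the last point: keeping all components merely rotates the variable space (the data are reconstructed exactly and no noise is estimated), whereas truncation leaves a residual. The data and names are illustrative assumptions.

```python
# A numeric check: full-rank PCA is an orthogonal rotation; truncation is not.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))
Xc = X - X.mean(axis=0)

eigvals, P = np.linalg.eigh(Xc.T @ Xc)
P = P[:, ::-1]                               # largest-variance directions first
T = Xc @ P                                   # scores from all 5 components

print(np.allclose(T @ P.T, Xc))              # True: exact reconstruction
E = Xc - T[:, :2] @ P[:, :2].T               # truncating to 2 PCs leaves a residual
print(np.linalg.norm(E) > 0)                 # True: residual estimates the noise
```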

Since the space spanned by the first PCs constitutes the best low-dimensional approximation of the original data matrix, it was quite obvious to think of using the PCA scores as predictors in MLR to overcome the limitations of that method when dealing with ill-conditioned experimental matrices. Therefore, PCR modelling is a two-step process involving, first, the calculation of the PCA decomposition of the predictor block and, subsequently, the build-up of the MLR model on the scores [14]. [Pg.152]
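Below is a hedged NumPy sketch of this two-step process, written out explicitly rather than with a library routine; the simulated data and the choice of k = 4 components are assumptions for illustration.

```python
# A sketch of two-step PCR: (1) PCA of the predictor block, (2) MLR on the scores.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 8))
y = X @ rng.normal(size=8) + 0.05 * rng.normal(size=40)

# Step 1: PCA decomposition of the (centred) predictor block.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
T = Xc @ Vt[:k].T                     # scores on the first k PCs

# Step 2: MLR of y on the scores; T has full column rank, so this regression
# is well-conditioned even when X itself is ill-conditioned.
b, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
y_hat = y.mean() + T @ b
print("residual norm:", np.linalg.norm(y - y_hat))
```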

