
Principal component analysis orthogonal projection

Correspondence analysis (CA) is a statistical method that enables simple and rapid visualization of the rows and columns of two-way contingency tables as points in a low-dimensional space (Greenacre, 2007). Like principal component analysis, CA projects the data from the contingency table onto orthogonal dimensions that sequentially represent as much of the variation of the experimental data as possible (Abdi and Williams, 2010). The positions of the row and column points along the dimensions of this space are consistent with their associations in the contingency table. [Pg.234]
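As an illustration, the following sketch computes a correspondence analysis of a small contingency table via the singular value decomposition of the standardized residuals; the table values are invented purely for illustration.

```python
import numpy as np

# Hypothetical contingency table: rows = samples, columns = attribute counts
N = np.array([[16,  4,  3],
              [ 5, 21,  6],
              [ 2,  8, 19]], dtype=float)

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses

# Standardized residuals: deviations from the independence model r c^T
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates place rows and columns as points in the same space
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]

print("inertia per dimension:", sv**2)
print("row points (dims 1-2):\n", row_coords[:, :2])
print("column points (dims 1-2):\n", col_coords[:, :2])
```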

How does principal component analysis work? Consider, for example, the two-dimensional distribution of points shown in Figure 7a. This distribution clearly has a strong linear component and is closer to a one-dimensional distribution than to a fully two-dimensional one. However, the one-dimensional projections of this distribution onto the two orthogonal axes X and Y would not reveal this. In fact, based only on these projections, you would probably conclude that the data points are homogeneously distributed in two dimensions. A simple rotation of the axes is all it takes to reveal that the data points lie essentially along a single direction. [Pg.86]
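A minimal numerical illustration of this point, using simulated data rather than the data of Figure 7a:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2-D cloud with a strong linear component: large spread along
# the 45-degree line, small scatter across it
t = rng.normal(0.0, 3.0, 500)          # dominant direction
e = rng.normal(0.0, 0.3, 500)          # small orthogonal scatter
X = np.column_stack([t + e, t - e])

# The marginal projections onto the original X and Y axes look similar ...
print("var on X axis:", X[:, 0].var(), " var on Y axis:", X[:, 1].var())

# ... but rotating onto the principal axes (eigenvectors of the covariance)
# reveals that almost all variation lies along a single direction
eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
scores = (X - X.mean(axis=0)) @ eigvec
print("variance after rotation:", scores.var(axis=0))
print("fraction on first principal axis:", eigval.max() / eigval.sum())
```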

The simplest and most widely used chemometric technique is Principal Component Analysis (PCA). Its objective is to perform an orthogonal projection and, in the process, identify the minimum number of sensors that yields the maximum amount of information. It removes redundancies from the data and can therefore be called a true data-reduction tool. In PCA terminology, the eigenvectors have the meaning of Principal Components (PCs), and the most influential values of a principal component are called primary components. Another term is the loading of a variable i with respect to a PC. [Pg.321]
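A short sketch of this idea, assuming hypothetical sensor responses with built-in redundancy; the eigenvectors of the covariance matrix serve as the principal components, and the entries of each eigenvector are the loadings of the individual sensors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical responses of 4 sensors; sensors 1-3 follow the same
# underlying signal, so much of the data is redundant
signal = rng.normal(size=200)
X = np.column_stack([
    signal + 0.05 * rng.normal(size=200),
    2.0 * signal + 0.05 * rng.normal(size=200),
    -signal + 0.05 * rng.normal(size=200),
    rng.normal(size=200),                      # one independent sensor
])

Xc = X - X.mean(axis=0)                        # mean-center
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigval)[::-1]               # sort by explained variance
eigval, eigvec = eigval[order], eigvec[:, order]

print("explained variance ratio:", np.round(eigval / eigval.sum(), 3))
# The loading of variable i on PC j is simply eigvec[i, j]
print("loadings of each sensor on PC1:", np.round(eigvec[:, 0], 3))
```

Two components capture essentially all of the information carried by the four sensors, which is the data reduction the paragraph describes.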

A second, orthogonally equivariant approach to robust PCA uses projection pursuit (PP) techniques. These methods maximize a robust measure of spread to obtain consecutive directions onto which the data points are projected. In Hubert et al. [46], a projection pursuit algorithm is presented, based on the ideas of Li and Chen [47] and Croux and Ruiz-Gazen [48]. The algorithm is called RAPCA, which stands for reflection algorithm for principal components analysis. [Pg.188]
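The following is a simplified sketch of the projection pursuit idea, not the actual RAPCA implementation (which adds reflections and further refinements): candidate directions run through the robustly centered data points, as in Croux and Ruiz-Gazen, and at each stage the direction maximizing a robust spread measure (here the MAD) is kept before deflating.

```python
import numpy as np

def mad(z):
    """Median absolute deviation: a robust measure of spread."""
    return np.median(np.abs(z - np.median(z)))

def pp_robust_pca(X, n_components=2):
    # Robustly center, then repeat: scan candidate directions through the
    # data points, keep the one with maximal robust spread, and deflate.
    X = X - np.median(X, axis=0)
    components = []
    for _ in range(n_components):
        norms = np.linalg.norm(X, axis=1)
        keep = norms > 1e-12                # drop already-deflated points
        cand = X[keep] / norms[keep][:, None]
        spreads = [mad(X @ d) for d in cand]
        d = cand[int(np.argmax(spreads))]
        components.append(d)
        X = X - np.outer(X @ d, d)          # project onto orthogonal complement
    return np.array(components)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3)) * np.array([5.0, 1.0, 0.2])
X[:5] += 50.0                               # a few gross outliers
print(pp_robust_pca(X))                     # directions barely swayed by outliers
```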

The matrix (I − AA⁺) projects onto the orthogonal complement of the column space of A; stated otherwise, (I − AA⁺) produces the residuals after projection onto the column space of A. Hence, components have to be found such that, after projection of X(C ⊗ B) onto these components, the residual variation is minimal in a least-squares sense. This is what principal component analysis does (see Chapter 3), and a solution is to take the first P left singular vectors of X(C ⊗ B). The components found are automatically in the column space of X(C ⊗ B) and, therefore, in the column space of X. [Pg.121]
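A small numpy illustration of the projector identity, using generic random matrices in place of the multiway product X(C ⊗ B):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(10, 2))                  # defines the column space
X = rng.normal(size=(10, 6))                  # stand-in for X(C (x) B)

# I - A A^+ projects onto the orthogonal complement of col(A)
P_perp = np.eye(10) - A @ np.linalg.pinv(A)

R = P_perp @ X                                # residuals after projection
print(np.allclose(A.T @ R, 0))                # residuals orthogonal to col(A): True

# Components minimizing the residual variation in a least-squares sense:
# the first P left singular vectors of the projected matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)
components = U[:, :2]                         # first P = 2 left singular vectors
```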

Principal component analysis (PCA) is frequently the method of choice to compress and visualize the structure of multivariate data [13]. The original experimental data are compressed by representing the total data variance with only a few new variables, called principal components (PCs). These PCs are orthogonal to each other and are ranked in descending order of the variance they model. This means that with PCA, samples are projected onto optimal directions in the multivariate data space that explain the largest possible variance. As mentioned earlier, the variance of a projection is not robust, and the presence of outliers in the data will affect the construction of the PCs. A direct way to obtain robust principal components (RPCs) is to replace the classical variance estimator with a robust counterpart. [Pg.338]
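A quick demonstration of why the variance of a projection is not robust, using simulated projection scores with a few planted outliers; the MAD is used here as one possible robust counterpart:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=200)                # scores of a projection direction

def mad(z):
    # scaled to agree with the standard deviation for normal data
    return 1.4826 * np.median(np.abs(z - np.median(z)))

x_out = x.copy()
x_out[:3] = 40.0                        # contaminate with three outliers

print("classic sd:", x.std(), "->", x_out.std())   # explodes
print("robust MAD:", mad(x), "->", mad(x_out))     # barely moves
```

Three outliers among 200 points are enough to inflate the classical estimate severely, which is why the construction of the PCs is pulled toward outliers when the classical variance is maximized.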

The next step in the analysis is to determine whether there is systematic variation which was not accounted for by the first component and which could be described by a second component. The second component is perpendicular to the first and defines the direction through the swarm of points that describes the next largest variation in the distribution of the data points. This constitutes a projection of the swarm of points onto the plane spanned by the first two principal components. As the principal component vectors are orthogonal, they portray different and independent principal properties. [Pg.36]

The next step in the analysis is to determine whether there is more systematic variation which is not described by the first principal component. A second component vector, p2, is determined so that p2 describes the direction through the swarm of points with the second largest variation. The direction of p2 is determined so that it is orthogonal to p1 and anchored so that it passes through the average point. It is then possible to project the data points on p2 and determine the corresponding scores, t2. The result of this is that the swarm of points is projected down to the plane spanned by the principal components, and the projection is made so that a maximum of the systematic variation is portrayed by the projected points; see Fig. 15.6. This means that as much as possible of the "shape" of the swarm of points is preserved in the projection. [Pg.347]
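One standard way to compute the components sequentially in this fashion is the NIPALS algorithm; the sketch below uses invented data and mirrors the text's notation (p1, p2 for loadings, t1, t2 for scores), with deflation guaranteeing that p2 is orthogonal to p1.

```python
import numpy as np

def nipals_pc(X, tol=1e-10, max_iter=500):
    """Return scores t and loadings p for the dominant principal
    component of an already-centered matrix X (NIPALS iteration)."""
    t = X[:, np.argmax(X.var(axis=0))].copy()   # start from most varying column
    for _ in range(max_iter):
        p = X.T @ t / (t @ t)                   # regress X on t
        p /= np.linalg.norm(p)                  # normalize the loading vector
        t_new = X @ p                           # project points onto p
        if np.linalg.norm(t_new - t) < tol:
            return t_new, p
        t = t_new
    return t, p

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 6))
X -= X.mean(axis=0)                             # anchor at the average point

t1, p1 = nipals_pc(X)
t2, p2 = nipals_pc(X - np.outer(t1, p1))        # deflate, then repeat

print("p1 . p2 =", p1 @ p2)                     # orthogonal loadings: ~0
# (t1, t2) are the coordinates of each point in the projection plane
```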

