Analysis principal components

Principal component analysis (PCA) is aimed at explaining the covariance structure of multivariate data through a reduction of the whole data set to a smaller number of independent variables. We assume that an m-point sample is represented by the n x m matrix X which collects i = 1, ..., m observations (measurements) x_i of a column vector x with j = 1, ..., n elements (e.g., the measurements of n = 10 oxide weight percents in m = 50 rocks). Let x̄ be the mean vector and S_x the n x n covariance matrix of this sample. [Pg.237]

Given the eigencomponent decomposition of the symmetric covariance matrix [Pg.237]

The n coordinates associated with the n eigenvectors define the vector e_i of the point vector x_i in the new system of coordinates, which is written formally as [Pg.237]

Applying equation (4.3.4) for the change of variables, we get the component covariance matrix S_e as [Pg.237]

Since Λ is diagonal, the covariance of the components along the jth eigenvector is [Pg.237]
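Taken together, the last few excerpts amount to a recipe: center the data, form the covariance matrix, diagonalize it, and project. A minimal numpy sketch of that recipe, with hypothetical data standing in for the oxide example:

```python
import numpy as np

# Hypothetical stand-in for the oxide example: m = 50 samples (rows),
# n = 10 variables (columns).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))

x_bar = X.mean(axis=0)                 # mean vector
Xc = X - x_bar                         # centered observations
S_x = np.cov(Xc, rowvar=False)         # n x n covariance matrix S_x

# Eigendecomposition of the symmetric covariance matrix: S_x = U L U^T
lam, U = np.linalg.eigh(S_x)
order = np.argsort(lam)[::-1]          # sort by descending variance
lam, U = lam[order], U[:, order]

E = Xc @ U                             # coordinates e_i in the new system
S_e = np.cov(E, rowvar=False)          # component covariance matrix S_e
off_diag = S_e - np.diag(np.diag(S_e))
print(np.allclose(off_diag, 0, atol=1e-10))   # True: S_e is diagonal (= L)
```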

The aim of Principal Component Analysis (PCA) is a consistent portrayal of a data set in a representation space. Mathematically, PCA is a linear transformation that may be written S = WX, where X is the original data set, W is the transformation matrix, and S is the data in the representation space. PCA is the simplest and most widely used method of multivariate analysis. Nonetheless, most users are seldom aware of its assumptions, and results are sometimes misinterpreted. [Pg.154]

The distinctive feature of PCA is the representation of the data set in a subspace of reduced dimensionality in which the statistical properties of the original data set are preserved. [Pg.154]

Although the PCA concept is used in many disciplines, it was developed most strongly in chemometrics, where it was originally introduced to analyze spectroscopic and chromatographic data, which are characterized by a high correlation among spectral channels [8]. [Pg.154]

The possibility of a reliable representation of a chemical sensor array data set in subspaces of smaller dimension lies in the fact that the individual sensors always exhibit a high correlation among themselves. PCA consists of finding an orthogonal basis in which the correlation among sensors disappears. [Pg.154]

As a consequence, in a sensor space of dimension N the effective dimension of the subspace occupied by the data is less than N. This dimension can be evaluated precisely using algorithms developed to describe dynamic systems. An example is the correlation distance, which allows the fractional dimensionality of a data set to be evaluated [16]. The correlation distance provides an independent way to evaluate the expected reduction of dimension. [Pg.154]
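As one concrete way to estimate such a fractional dimension, the Grassberger-Procaccia correlation sum can be used; the sketch below is illustrative only, and the data, radii, and scaling region are all hypothetical choices:

```python
import numpy as np

def correlation_dimension(X, radii):
    """Estimate a fractional dimension from the slope of the
    Grassberger-Procaccia correlation sum C(r) in log-log space."""
    diffs = X[:, None, :] - X[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    dists = d[np.triu_indices(len(X), k=1)]       # all pairwise distances
    C = np.array([(dists < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# Hypothetical sensor responses lying near a 1-D curve embedded in 3-D.
rng = np.random.default_rng(1)
t = rng.uniform(0, 1, 500)
X = np.c_[t, np.sin(3 * t), t ** 2] + 0.001 * rng.normal(size=(500, 3))
print(round(correlation_dimension(X, np.geomspace(0.05, 0.5, 10)), 2))  # ~1
```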

From a statistical perspective, principal component analysis (PCA) is a method for reducing the dimensionality of data sets by transforming correlated variables into a smaller set of uncorrelated variables and finding linear combinations of the original variables with maximum variability. [Pg.305]

As chemical processes become increasingly instrumented and data are recorded more frequently, both data compression and information extraction become important. However, the information should be compressed in such a way that the essential features of the data are safeguarded, and the new data should be more easily displayed than the original data. [Pg.305]

PCA is a tool initially used by chemometricians for data compression and information extraction from analytical instruments. Nowadays it is also frequently used to analyze data from process plants, where process data are often correlated. PCA offers the advantage that it finds a set of new uncorrelated variables that are linear combinations of the original variables. The new variables describe the variance in the original data set in descending order. [Pg.305]

Principal component analysis (PCA) can be performed on data matrices consisting of raw data. Analysis of raw data can be used to estimate the importance and characteristics of the individual variables. The data matrix for n variables measured at m data points is given by [Pg.305]

PCA entails transforming the n original variables x1, x2, ..., xn into an ordered sequence of n uncorrelated principal components. [Pg.305]
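The same transformation is available in standard libraries; a minimal sketch with scikit-learn's PCA class, on hypothetical correlated process data, shows the ordered, uncorrelated new variables directly:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical process data: m = 200 samples of n = 6 correlated variables
# driven by two underlying factors.
rng = np.random.default_rng(2)
factors = rng.normal(size=(200, 2))
X = factors @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(200, 6))

pca = PCA()                         # keep all n components
T = pca.fit_transform(X)            # the ordered, uncorrelated new variables
print(np.round(pca.explained_variance_, 2))   # variances in descending order
```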

The simplest and most widely used chemometric technique is Principal Component Analysis (PCA). Its objective is to accomplish an orthogonal projection and, in that process, identify the minimum number of sensors yielding the maximum amount of information. It removes redundancies from the data and can therefore be called a true data reduction tool. In PCA terminology, the eigenvectors have the meaning of Principal Components (PCs), and the most influential values of a principal component are called primary components. Another term is the loading of a variable i with respect to a PC_j. [Pg.321]
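As a sketch of these terms: the rows of the matrix Vᵀ from a singular value decomposition of the centered data play the role of the principal components, and their entries give the weight (loading) of each variable on each PC. The data and mixing matrix below are hypothetical:

```python
import numpy as np

# Hypothetical sensor data mixing two underlying trends into four variables.
rng = np.random.default_rng(3)
trend = rng.normal(size=(100, 2))
W = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.9, 1.0]])
X = trend @ W + 0.05 * rng.normal(size=(100, 4))

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
# Rows of Vt are the principal components; entry (i, j) is the loading of
# variable j on PC_i. The first two PCs span the two underlying trends.
print(np.round(Vt[:2], 2))
```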

A high loading on PC_j indicates that this vector is aligned close to the original data values; that is, the transformation to the feature space defined by the new principal components matches the important (perhaps the most important) trend in the raw data. Conversely, a low loading means that the PC does not match a significant trend in the data. Typically, 2-3 PCs can characterize most experimental data sets. This then allows a 2-D or 3-D graphical representation of the results, as shown in Fig. 10.6. As powerful as it is, PCA fails in cases where the individual sensors in the [Pg.321]

The eigenvectors and eigenvalues of C are then calculated. Calling V the eigenvector matrix and D the diagonal matrix of the eigenvalues, the composition of the principal components is given as a matrix, F, of linear combination coefficients that mix the [Pg.223]

Eigenvalues that are significantly different from zero identify the relevant components; the percent contribution of each principal component k to the effective variance of the sample can be determined as follows: [Pg.224]
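The standard form of this contribution is %V_k = 100 λ_k / Σ_j λ_j, which is a one-liner once the eigenvalues are in hand (the values below are hypothetical):

```python
import numpy as np

# Percent contribution of principal component k: 100 * lam_k / sum(lam).
lam = np.array([4.2, 1.1, 0.5, 0.2])          # hypothetical eigenvalues
print(np.round(100 * lam / lam.sum(), 1))     # -> [70.  18.3  8.3  3.3]
```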

In both bivariate and multivariate analysis, correlations between molecular and crystal properties are poor, uncertain, and scarcely robust. The fundamental reason is that crystal packing is a matter of finely adjusted detail, and global indicators will never reach the subtlety required for real predictive power. [Pg.224]

Both CLS and ILS have strengths and weaknesses, and it would be beneficial to combine their strengths and eliminate the weaknesses. A commonly accepted procedure to achieve this goal is to use spectral decomposition in the form of principal component analysis (PCA). [Pg.213]

The advantages of CLS and ILS can be combined if we can reduce the number of wavenumbers at which absorbances are measured for ILS, yet retain the advantage of using the entire spectrum as in CLS regression. To do this we must consider what a collection of calibration spectra represents. First, we have the pure components. The calibration spectra are composed of different concentrations of the pure components or, in terms of the spectra, different scalings. Unfortunately, there is more that we must consider. If the components interact, we have interaction spectra, and these will be present in different ratios in the individual calibration mixture spectra. Each instrument will bias the spectra, so we have instrument-induced artifacts that may vary with temperature, total absorbance, wavenumber, and so on. In all, we have numerous contributions, or factors, that together comprise each of the individual calibration spectra. The only difference between the individual calibration spectra is the proportion of the contributions. [Pg.213]

If we had an ideal spectrometer and absolutely no interaction effects, we would be able to construct the individual calibration spectra from the spectra of the pure components alone. More important, we should be able to decompose the individual calibration spectra into the pure component spectra. Because we do not have ideal spectrometers and rarely have ideal calibration sets, this is not possible. Nonetheless, if it were possible, we could simply represent each spectrum in the calibration set as the proportions of the pure component spectra. For example, if there were three components, each calibration spectrum would be composed of the three components in different proportions. Three numbers, one for each component, would represent the proportions. [Pg.213]

The problem we have with ILS is that we cannot use as many wavenumbers as we can in CLS. In CLS this was a distinct advantage, as the use of all the absorbances led to a more accurate prediction. If in the ILS model we use the proportions, or scores, in place of the absorbances, we have restricted the predictor variables to the small number required in ILS; we also have a score for each component, also as required in ILS. On the other hand, the scores represent the entire absorbance spectrum, and by default we have incorporated all the absorbance information and its associated advantage. [Pg.213]

The problem lies in how we can extract the pure components from a real data set. In reality such an operation is difficult, although several algorithms exist that can do precisely this. These algorithms are beyond the scope of this treatise, and recovery of the pure component spectra is neither necessary nor often desirable. It is more important to decompose a set of calibration spectra into significant factors. These significant factors are also called principal components, eigenvectors, or loadings. [Pg.213]
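A truncated singular value decomposition is one common way to carry out such a factor decomposition; the sketch below, on hypothetical synthetic "calibration spectra", shows how a few scores per spectrum reproduce the full data matrix:

```python
import numpy as np

# Hypothetical calibration set: 30 mixture spectra over 400 wavenumbers,
# built from 3 underlying components plus noise.
rng = np.random.default_rng(4)
pure = np.abs(rng.normal(size=(3, 400)))
conc = rng.uniform(0, 1, size=(30, 3))
A = conc @ pure + 0.01 * rng.normal(size=(30, 400))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3                                  # number of significant factors
scores = U[:, :k] * s[:k]              # 3 numbers now represent each spectrum
factors = Vt[:k]                       # abstract factors ("loadings")
print(np.allclose(A, scores @ factors, atol=0.1))   # low-rank model fits A
```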

In addition, since we have more than one variable, it is possible to calculate a product-moment (Pearson) correlation coefficient for each pair of variables. These are summarized in the correlation matrix in Table 8.2, obtained using Minitab. [Pg.215]
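An equivalent correlation matrix can be computed outside Minitab; a sketch with numpy on hypothetical four-variable data (the Table 8.2 values themselves are not reproduced here):

```python
import numpy as np

# Hypothetical data: 20 specimens, 4 variables, variables 1 and 2 correlated.
rng = np.random.default_rng(5)
X = rng.normal(size=(20, 4))
X[:, 1] += 0.8 * X[:, 0]

R = np.corrcoef(X, rowvar=False)       # pairwise Pearson correlation matrix
print(np.round(R, 2))
```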

One problem with multivariate data is that its sheer volume may make it difficult to see patterns and relationships. For example, a spectrum would normally be characterized by several hundred intensity measurements rather than just four as in Table 8.1, and in this case the correlation matrix would contain hundreds of values. Thus the aim of many methods of multivariate analysis is data reduction. Quite frequently there is some correlation between the variables, as there is for the data in Table 8.1, and so some of the information is redundant. Principal component analysis (PCA) is a technique for reducing the amount of data when there is correlation present. It is worth stressing that it is not a useful technique if the variables are uncorrelated. [Pg.215]

The idea behind PCA is to find principal components Z1, Z2, ..., Zn which are linear combinations of the original variables describing each specimen, X1, X2, ..., Xn, i.e. [Pg.216]

The principal components are obtained from the covariance matrix. The term covariance (see Section 5.3) is a measure of the joint variance of two variables. The covariance matrix for the data in Table 8.1 is [Pg.217]

Carry out a principal component analysis of the data in Table 8.1. [Pg.217]
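A worked sketch of this kind of calculation, with hypothetical stand-in numbers since the Table 8.1 values are not reproduced here:

```python
import numpy as np

# Hypothetical stand-in for Table 8.1: 5 specimens x 4 variables.
X = np.array([[8.0, 7.9, 6.1, 5.2],
              [7.7, 7.6, 6.4, 5.5],
              [7.9, 8.1, 5.9, 5.0],
              [8.2, 8.0, 6.0, 5.1],
              [7.5, 7.4, 6.6, 5.6]])

S = np.cov(X, rowvar=False)            # covariance matrix
lam, A = np.linalg.eigh(S)
order = np.argsort(lam)[::-1]
lam, A = lam[order], A[:, order]
# Column j of A holds the coefficients of Z_j as a linear combination
# of X_1 ... X_4; the scores follow by projecting the centered data.
Z = (X - X.mean(axis=0)) @ A
print(np.round(100 * lam / lam.sum(), 1))   # % variance captured by each PC
```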

The samples are grouped so that those with the most similar odour character are closest together on the map (e.g. Lilial and Bourgeonal), and those that are most different are furthest apart (e.g. cyclamen aldehyde and Lyral). The arrows indicate the direction of increasing perception of the odour characteristics shown. Only the odour characteristics that correlate significantly with the distribution of the samples across the map are shown; these are the characteristics that are responsible for the systematic differences between the samples. [Pg.159]

All the materials were perceived to be floral and muguet in character, so these characteristics are not shown on the map. Cyclamen aldehyde, Lilial and Bourgeonal are the most fruity of the samples and are grouped together on the right-hand side of the map, while Lyral and hydroxycitronellal are the sweetest materials and Mayol is the most herbal. [Pg.159]

This involves describing linear interrelationships on the basis of correlation and covariance calculations. Groups are formed such that the variables within each group are highly correlated, whereas between groups there is no linear interrelationship. [Pg.720]

The diagram depicts the correlation of the variables with the different principal components (factors), expressed as principal component loadings. As has already [Pg.721]


The essential degrees of freedom are found by a principal component analysis of the position correlation matrix C_ij of the Cartesian coordinate displacements x_i with respect to their averages ⟨x_i⟩, as gathered during a long MD run: [Pg.22]

The important underlying components of protein motion during a simulation can be extracted by a Principal Component Analysis (PCA). It amounts to a diagonalization of the variance-covariance matrix R of the mass-weighted internal displacements during a molecular dynamics simulation. [Pg.73]
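A minimal sketch of this procedure, assuming a hypothetical trajectory array and masses (shapes and values are illustrative only):

```python
import numpy as np

# Hypothetical trajectory: 1000 frames of 3 atoms (9 Cartesian coordinates).
rng = np.random.default_rng(6)
traj = rng.normal(size=(1000, 9))
masses = np.repeat([12.0, 1.0, 16.0], 3)   # per-coordinate masses (C, H, O)

disp = traj - traj.mean(axis=0)            # displacements from the average
R = np.cov(disp * np.sqrt(masses), rowvar=False)   # mass-weighted covariance
lam, modes = np.linalg.eigh(R)
# The largest-eigenvalue eigenvectors are the essential degrees of freedom.
essential = modes[:, np.argsort(lam)[::-1][:2]]
print(essential.shape)                     # (9, 2)
```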

Grubmüller described a method to induce conformational transitions in proteins and derived rate constants for these [Grubmüller 1994]. The method employs successive modifications of the original potential function based on a principal component analysis of a short MD simulation. It is discussed in more detail in the chapter of Eichinger et al. in this volume. [Pg.74]

[Hayward et al. 1994] Hayward, S., Kitao, A., Go, N.: Harmonic and anharmonic aspects in the dynamics of BPTI: a normal mode analysis and principal component analysis. Prot. Sci. 3 (1994) 936-943. [Head-Gordon and Brooks 1991] Head-Gordon, T., Brooks, C.L.: Virtual rigid body dynamics. Biopol. 31 (1991) 77-100. [Pg.76]

Step 2: This ensemble is subjected to a principal component analysis (PCA) [61] by diagonalizing the covariance matrix C, ... [Pg.91]

Steven Hayward, Akio Kitao, and Nobuhiro Go. Harmonic and anharmonic aspects in the dynamics of BPTI: a normal mode analysis and principal component analysis. Prot. Sci., 3:936-943, 1994. [Pg.97]

M. A. Balsera, W. Wriggers, Y. Oono, and K. Schulten. Principal component analysis and long time protein dynamics. J. Phys. Chem., 100:2567-2572, 1996. [Pg.262]

We have to apply projection techniques which allow us to plot the hyperspaces onto two- or three-dimensional space. Principal Component Analysis (PCA) is a method well suited to this task; it is described in Section 9.4.4. PCA operates with latent variables, which are linear combinations of the original variables. [Pg.213]

To gain insight into chemometric methods such as correlation analysis, Multiple Linear Regression Analysis, Principal Component Analysis, Principal Component Regression, and Partial Least Squares regression/Projection to Latent Structures... [Pg.439]

Kohonen network Conceptual clustering Principal Component Analysis (PCA) Decision trees Partial Least Squares (PLS) Multiple Linear Regression (MLR) Counter-propagation networks Back-propagation networks Genetic algorithms (GA)... [Pg.442]

PCR is a combination of PCA and MLR, which are described in Sections 9.4.4 and 9.4.3, respectively. First, a principal component analysis is carried out, which yields a loading matrix P and a scores matrix T as described in Section 9.4.4. For the ensuing MLR, only the PCA scores are used for modeling Y. The PCA scores are inherently uncorrelated, so they can be employed directly for MLR. A more detailed description of PCR is given in Ref. [5]. [Pg.448]
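Under those definitions, a PCR sketch needs only a singular value decomposition and one least-squares fit on the scores matrix T; everything below (the data and the number of retained components k) is hypothetical:

```python
import numpy as np

# A PCR sketch under stated assumptions: X is hypothetical collinear data
# driven by 3 latent factors, and y depends linearly on those factors.
rng = np.random.default_rng(7)
latent = rng.normal(size=(40, 3))
X = latent @ rng.normal(size=(3, 8)) + 0.01 * rng.normal(size=(40, 8))
y = latent @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=40)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                  # retained principal components
T = U[:, :k] * s[:k]                   # scores matrix T (uncorrelated columns)
P = Vt[:k].T                           # loading matrix P
b, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)   # MLR on scores only
y_hat = T @ b + y.mean()
print(round(1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2), 3))  # R^2
```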

Principal Component Analysis (PCA) transforms a number of correlated variables into a smaller number of uncorrelated variables, the so-called principal components. [Pg.481]

The previously mentioned data set with a total of 115 compounds has already been studied by other statistical methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis, and the Partial Least Squares (PLS) method [39]. Thus, the choice and selection of descriptors has already been accomplished. [Pg.508]

Spectral features and their corresponding molecular descriptors are then applied to mathematical techniques of multivariate data analysis, such as principal component analysis (PCA) for exploratory data analysis or multivariate classification for the development of spectral classifiers [84-87]. Principal component analysis results in a scatter plot that exhibits spectra-structure relationships by clustering similarities in spectral and/or structural features [88, 89]. [Pg.534]

The dimensionality of a data set is the number of variables that are used to describe each object. For example, a conformation of a cyclohexane ring might be described in terms of the six torsion angles in the ring. However, it is often found that there are significant correlations between these variables. Under such circumstances, a cluster analysis is often facilitated by reducing the dimensionality of a data set to eliminate these correlations. Principal components analysis (PCA) is a commonly used method for reducing the dimensionality of a data set. [Pg.513]
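A sketch of that workflow, using hypothetical "torsion angle" data whose six variables are strongly correlated: reduce with PCA, then cluster the scores.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical conformational data: six correlated "torsion" variables
# generated from three underlying conformer families.
rng = np.random.default_rng(10)
labels = rng.integers(0, 3, 300)
centers = rng.normal(size=(3, 2)) * 3
latent = centers[labels] + 0.3 * rng.normal(size=(300, 2))
X = latent @ rng.normal(size=(2, 6))       # lift to 6 correlated dimensions

scores = PCA(n_components=2).fit_transform(X)          # remove correlations
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(scores)
print(np.bincount(clusters))               # cluster sizes in the reduced space
```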

An alternative to principal components analysis is factor analysis. This is a technique which can identify multicollinearities in the set - these are descriptors which are correlated with a linear combination of two or more other descriptors. Factor analysis is related to (and... [Pg.697]

Multiple linear regression is strictly a parametric supervised learning technique. A parametric technique is one which assumes that the variables conform to some distribution (often the Gaussian distribution); the properties of the distribution are assumed in the underlying statistical method. A non-parametric technique does not rely upon the assumption of any particular distribution. A supervised learning method is one which uses information about the dependent variable to derive the model; an unsupervised learning method does not. Thus cluster analysis, principal components analysis and factor analysis are all examples of unsupervised learning techniques. [Pg.719]

The field points must then be fitted to predict the activity. There are generally far more field points than known compound activities to be fitted. The least-squares algorithms used in QSAR studies do not function for such an underdetermined system. A partial least squares (PLS) algorithm is used for this type of fitting. This method starts with matrices of field data and activity data. These matrices are then used to derive two new matrices containing a description of the system and the residual noise in the data. Earlier studies used a similar technique, called principal component analysis (PCA). PLS is generally considered to be superior. [Pg.248]

Other chemometric methods to improve calibration have been advanced. The method of partial least squares has been useful in multicomponent calibration (48-51). In this approach the concentrations are related to latent variables in the block of observed instrument responses. Thus PLS regression can solve the collinearity problem and provide all of the advantages discussed earlier. Principal components analysis coupled with multiple regression, often called Principal Component Regression (PCR), is another calibration approach that has been compared and contrasted to PLS (52-54). Calibration problems can also be approached using the Kalman filter, as discussed (43). [Pg.429]
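As an illustration of PLS calibration (a generic sketch, not the cited implementations), scikit-learn's PLSRegression can be applied to hypothetical mixture spectra whose channels are strongly collinear:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical calibration set: 30 mixture spectra (200 channels) generated
# from 2 components, so the predictor columns are strongly collinear.
rng = np.random.default_rng(8)
conc = rng.uniform(0, 1, size=(30, 2))         # known concentrations
pure = np.abs(rng.normal(size=(2, 200)))       # "pure component" spectra
X = conc @ pure + 0.01 * rng.normal(size=(30, 200))

pls = PLSRegression(n_components=2)            # latent variables, not channels
pls.fit(X, conc)
print(round(pls.score(X, conc), 3))            # R^2 of the calibration fit
```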

A method of resolution that makes very few a priori assumptions is based on principal components analysis. The various forms of this approach are based on the self-modeling curve resolution developed in 1971 (55). The method requires a data matrix composed of spectroscopic scans obtained from a two-component system in which the concentrations of the components vary over the sample set. Such a data matrix could be obtained, for example, from a chromatographic analysis where spectroscopic scans are obtained at several points in time as an overlapped peak elutes from the column. [Pg.429]

Principal component analysis has been used in combination with spectroscopy in other types of multicomponent analyses. For example, compatible and incompatible blends of polyphenylene oxides and polystyrene were distinguished using Fourier-transform infrared spectra (59). Raman spectra of sulfuric acid/water mixtures were used in conjunction with principal component analysis to identify different ions, compositions, and hydrates (60). The identity and number of species present in binary and ternary mixtures of polycyclic aromatic hydrocarbons were determined using fluorescence spectra (61). [Pg.429]

How does principal component analysis work? Consider, for example, the two-dimensional distribution of points shown in Figure 7a. This distribution clearly has a strong linear component and is closer to a one-dimensional distribution than to a full two-dimensional distribution. However, from the one-dimensional projections of this distribution onto the two orthogonal axes X and Y you would not know that. In fact, you would probably conclude, based only on these projections, that the data points are homogeneously distributed in two dimensions. A simple axis rotation is all it takes to reveal that the data points... [Pg.86]
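The rotation described here is exactly what a PCA performs; a sketch with hypothetical nearly one-dimensional data:

```python
import numpy as np

# Hypothetical points scattered tightly about the line y = x: each 1-D
# projection looks broad, but rotation onto the principal axes reveals
# that nearly all the variance lies along a single direction.
rng = np.random.default_rng(9)
t = rng.normal(size=500)
pts = np.c_[t, t] + 0.1 * rng.normal(size=(500, 2))

C = np.cov(pts, rowvar=False)
lam, V = np.linalg.eigh(C)
rotated = (pts - pts.mean(axis=0)) @ V
print(np.round(rotated.var(axis=0), 3))    # one tiny variance, one large
```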

