Big Chemical Encyclopedia



PCA method

Bota et al. [84] used the PCA method to select the optimum solvent system for the TLC separation of seven polycyclic aromatic hydrocarbons. Each solute is treated as a point in a space defined by its retention coordinates along the different solvent composition axes. The PCA method enables the selection of a restricted set from the nine available mobile phase systems, and it is a useful graphical tool because scatterplots of loadings on the planes described by the most important axes separate the solvent systems from one another most efficiently. [Pg.94]

PCA is a data compression method that reduces a set of data collected on M variables over N samples to a simpler representation that uses a much smaller number (A << M) of compressed variables, called principal components (or PCs). The mathematical model for the PCA method is provided below ... [Pg.362]
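The PCA model just described (X decomposed into A scores and loadings plus a residual) can be sketched with numpy's singular-value decomposition. This is a minimal illustration on synthetic data, not the data from the cited work; the function name `pca` and the example dimensions are arbitrary choices.

```python
import numpy as np

def pca(X, n_components):
    """Decompose mean-centered data X (N samples x M variables) as
    X ~ T @ P.T + E, with scores T (N x A) and loadings P (M x A)."""
    Xc = X - X.mean(axis=0)                           # mean-center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T                           # loadings: M x A
    T = Xc @ P                                        # scores:   N x A
    E = Xc - T @ P.T                                  # residuals
    return T, P, E

# Example: 50 samples of 6 correlated variables driven by 2 latent factors
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(50, 6))
T, P, E = pca(X, n_components=2)
```

Because the data are driven by two latent factors, two PCs capture nearly all of the variation and the residual matrix E is at the noise level.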

When the PCA method is applied to the NIR spectra in Figure 12.16, it is found that three PCs are sufficient for the model, and that PCs 1, 2 and 3 explain 52.6%, 20.5%, and 13.6% of the variation in the spectral data, respectively. Figure 12.17 shows a scatter plot of the first two of the three significant PC scores for all 26 foam samples. Note that all of the calibration samples visually agglomerate into four distinct groups in these first two dimensions of the space. Furthermore, it can be seen that each group corresponds to a single known class. [Pg.393]
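The percent-variance-explained figures quoted above come from the squared singular values of the centered data matrix. A minimal sketch, using random surrogate data in place of the NIR spectra (which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Surrogate data standing in for the 26 NIR spectra (26 samples x 100 channels)
X = rng.normal(size=(26, 100))
Xc = X - X.mean(axis=0)

# Percent of total variance explained by each PC, from squared singular values
s = np.linalg.svd(Xc, compute_uv=False)
explained = 100 * s**2 / np.sum(s**2)

print(explained[:3])   # contributions of the first three PCs
```

The contributions sum to 100% and decrease monotonically, which is how a cutoff such as "three significant PCs" is chosen.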

The raw data in Table 8.2 are first autoscaled, and then the PCA method is applied. When this is done, it is found that the first two PCs explain almost 96% of the variation in... [Pg.245]
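Autoscaling, as used above, mean-centers each variable and scales it to unit variance so that variables on very different scales contribute equally to the PCA. A minimal sketch with made-up data (Table 8.2 is not reproduced here):

```python
import numpy as np

def autoscale(X):
    """Mean-center each column and scale to unit variance (autoscaling)."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

rng = np.random.default_rng(2)
# Three variables on wildly different scales, as in raw physical measurements
X = np.column_stack([rng.normal(1000, 50, 30),   # e.g. a concentration
                     rng.normal(0.5, 0.1, 30),   # e.g. an absorbance
                     rng.normal(7, 2, 30)])      # e.g. a pH
Z = autoscale(X)
```

After autoscaling, every column has mean 0 and unit variance, so PCA applied to Z reflects correlation structure rather than raw magnitudes.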

For such outliers, detection and assessment can actually be accomplished using some of the modeling tools themselves [1,3]. In this work, the use of PCA and PLS for outlier detection is discussed. Since the PCA method operates only on the X-data, it can be used to detect X-sample and X-variable outliers. The three entities in the PCA model that are most commonly used to detect such outliers are the estimated PCA scores (T), the estimated PCA loadings (P), and the estimated PCA residuals (E), which are calculated from the estimated PCA scores and loadings ... [Pg.279]
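The two per-sample quantities usually derived from T and E for X-outlier screening are a score distance (a Hotelling T²-like statistic) and a residual sum of squares (the Q statistic). A sketch of that idea on synthetic data with one planted outlier; the function name and thresholds are illustrative, not from the cited work:

```python
import numpy as np

def pca_outlier_stats(X, n_components):
    """Per-sample score distance (T2-like) and residual sum of squares (Q),
    the two quantities commonly inspected for X-sample outliers."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T
    T = Xc @ P                                            # scores
    E = Xc - T @ P.T                                      # residuals
    t2 = np.sum((T / T.std(axis=0, ddof=1))**2, axis=1)  # scaled score distance
    q = np.sum(E**2, axis=1)                              # residual Q statistic
    return t2, q

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 5))
X[0] += 10.0                     # plant an obvious X-sample outlier
t2, q = pca_outlier_stats(X, n_components=2)
```

The planted sample stands out with by far the largest score distance; samples with large Q instead are those poorly described by the retained PCs.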

Of the three types of outliers listed earlier, there is one type that cannot be detected using the PCA method: the Y-sample outlier. This is simply because the PCA method does not use any Y-variable information to build the model. In this case, outlier detection can be done using the PLS regression method. Once a PLS model is built, the Y-residuals (f) can be estimated from the PLS model parameters T_PLS and q ... [Pg.281]
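The Y-residuals f are what remains of y after subtracting the part explained by the PLS scores and y-loadings. A minimal NIPALS-style PLS1 sketch (single y-variable, synthetic data; not the implementation from the cited work):

```python
import numpy as np

def pls1_y_residuals(X, y, n_components=1):
    """Fit a NIPALS-style PLS1 model on (X, y) and return the
    y-residuals f = y - y_hat after deflating n_components latent variables."""
    Xres = X - X.mean(axis=0)
    f = y - y.mean()
    for _ in range(n_components):
        w = Xres.T @ f
        w /= np.linalg.norm(w)           # X-weights
        t = Xres @ w                     # scores
        p = Xres.T @ t / (t @ t)         # X-loadings
        q = f @ t / (t @ t)              # y-loading
        Xres -= np.outer(t, p)           # deflate X
        f = f - q * t                    # deflate y -> Y-residuals
    return f

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 4))
y = 2.0 * X[:, 0] + 0.05 * rng.normal(size=30)   # y depends on X, plus noise
f = pls1_y_residuals(X, y, n_components=2)
```

For a well-modeled sample, f is at the noise level; a Y-sample outlier would show an f far outside the spread of the others.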

The goal of robust PCA methods is to obtain principal components that are not influenced much by outliers. A first group of methods is obtained by replacing the classical covariance matrix with a robust covariance estimator, such as the reweighted MCD estimator [45] (Section 6.3.2). Let us reconsider the Hawkins-Bradu-Kass data in p = 4 dimensions. Robust PCA using the reweighted MCD estimator yields the score plot in Figure 6.7b. We now see that the center is correctly estimated in the... [Pg.187]

Note that classical PCA is not affine equivariant because it is sensitive to a rescaling of the variables. But it is still orthogonally equivariant, which means that the center and the principal components transform appropriately under rotations, reflections, and translations of the data. More formally, it allows transformations XA for any orthogonal matrix A (that satisfies A^-1 = A^T). Any robust PCA method only has to be orthogonally equivariant. [Pg.188]
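Orthogonal equivariance can be checked numerically: the loadings of XA are A^T times the loadings of X, up to an arbitrary sign flip of each component. A small demonstration on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Anisotropic data so the principal axes are well separated
X = rng.normal(size=(60, 3)) @ np.diag([3.0, 1.0, 0.3])
Xc = X - X.mean(axis=0)

# A random orthogonal matrix A (satisfies A.T @ A = I)
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))

_, _, Vt1 = np.linalg.svd(Xc, full_matrices=False)       # loadings of X
_, _, Vt2 = np.linalg.svd(Xc @ A, full_matrices=False)   # loadings of X @ A

# Equivariance: each loading of X @ A equals A.T times the corresponding
# loading of X, up to sign, so their absolute cosines are all 1.
cosines = np.abs(np.sum(Vt2 * (Vt1 @ A), axis=1))
```

A rescaling (a diagonal, non-orthogonal transformation) would break this correspondence, which is exactly the sense in which PCA fails to be affine equivariant.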

As the X-matrix produced by this problem formulation is structured in meaningful blocks, hierarchical PCA methods provide interesting additional insight regarding the relative importance of the different blocks (i.e. probes) in the analysis. [Pg.58]

Numeric simulation is used to test the proposed PCA methods. The reason for using numeric simulation instead of experimental data is the generality and flexibility of the simulation approach. With the aid of simulation, one can easily and purposefully investigate various situations that are not likely to be encountered in a few experiments. Examples based on experimental data for cluster analysis, however, will be presented in the chapter 'Classification of materials'. [Pg.65]

On the other hand, successful identification of bacterial spores has been demonstrated by using Fourier transform infrared photoacoustic and transmission spectroscopy in conjunction with principal component analysis (PCA) statistical methods. In general, PCA methods are used to reduce and decompose the spectral data into orthogonal components, or factors, which represent the most common variations in all the data. As such, each spectrum in a reference library has an associated score for each factor. These scores can then be used to show clustering of spectra that have common variations, thus forming a basis for group member classification and identification. [Pg.102]

A PCA (36) can be performed on the full matrix. Because of the structure of the matrix, one can also apply hierarchical PCA methods, such as CPCA (37), to each block. CPCA provides information about the relative importance of each block, i.e., each probe. In addition, the cut-out tool, available in the program GOLPE (38), also offers the possibility to focus the CPCA on a particular region of a binding site. [Pg.287]

The mathematics of what is described above is equivalent to principal component analysis. The ideas of principal component analysis (PCA) go back to Beltrami [1873] and Pearson [1901]. They tried to describe the structural part of data sets by lines and planes of best fit in multivariate space. The PCA method was introduced in a rudimentary version by Fisher and Mackenzie [1923]. The name principal component analysis was introduced by Hotelling [1933]. An early calculation algorithm is given by Muntz [1913]. More details can be found in the literature [Jackson 1991, Jolliffe 1986, Stewart 1993]. [Pg.37]

Despite these successes, the main attraction of the PCA method lies in its ability to reduce the dimensionality of a complex problem to manageable proportions. Even when it fails, i.e. when the number of significant PCs is equal to the number of original parameters, the method still provides the information that the original parameters are essentially uncorrelated and already represent a minimum dimensionality. Indeed, this lack of reduction of dimensionality in a PCA can be used to check possible interpretations of the PCs derived from a higher-dimensional dataset. We illustrate the use of PCA with three examples. [Pg.139]

We will describe the PCA method following the treatment of Ressler et al. (2000) but with the above notation. PCA can be derived from the singular-value decomposition theorem from linear algebra, which says that any rectangular matrix can be decomposed as follows... [Pg.382]

As a demonstration of the PCA method, we have analyzed simulated data produced by summing together measured spectra from three compounds (γ-MnOOH, MnS and ) in various proportions and adding noise. The spectra are shown in Figure 18a. Visual inspection shows that there is more than one component. If one plots the spectra without the vertical offsets shown, one finds that there are no values of the abscissa for which all the curves meet, as would be the case if there were only two components. [Pg.384]

Therefore, visual inspection tells one that there are at least three components, but does not yield more information. The PCA method yields numbers for the singular values λa and the indicator IND as shown in Table 2. [Pg.386]
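The way the singular values reveal the number of independent components can be sketched on a similar, fully synthetic mixture problem (the measured Mn spectra and the Table 2 values are not reproduced here; the Gaussian "spectra" below are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(6)
# Three synthetic "spectra" standing in for the three pure compounds
x = np.linspace(0, 10, 200)
components = np.vstack([np.exp(-(x - c)**2) for c in (2.0, 5.0, 8.0)])

# Twelve simulated mixtures in varying proportions, plus noise
C = rng.uniform(0.1, 1.0, size=(12, 3))          # mixing fractions
D = C @ components + 0.01 * rng.normal(size=(12, 200))

s = np.linalg.svd(D, compute_uv=False)
# The first three singular values carry the chemical signal; from the
# fourth onward they drop to the noise floor, indicating three components.
```

An indicator function such as Malinowski's IND formalizes this drop, reaching its minimum at the true number of components; here the gap between the third and fourth singular values already makes the answer plain.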

The PCA method transforms a given set of variables into principal components that are uncorrelated with each other. The principal component model is described by Equation (7). [Pg.478]

The objective of this study is to use the PCA method to classify predictor variables according to their interrelation, and to obtain a parsimonious prediction model (i.e., a model that depends on as few variables as necessary) for WQI, with other physico-chemical and biological data as predictor variables, to model the water quality of the Langat river. For this purpose, principal component scores of 23 physico-chemical and biological water quality parameters were generated and selected appropriately as input variables in ANN models for predicting WQI. [Pg.273]

Principal component analysis (PCA) is known as a statistical distribution method. For example, if we assume four parameters, these can be shown on a four-dimensional graph; however, doing so is very difficult and complicated. The PCA method can instead reduce the parameters to two dimensions by extracting the significant information from the various parameters. Sensor arrays can be used to check for CWAs of different concentrations. The PCA method has been adapted here to classify chemical agents. Figure 14.22 shows the PCA... [Pg.486]

The simulants (DMMP, DCP, acetonitrile, and DCM) can be classified employing the PCA technique, which is acknowledged as a statistical distribution method. The polymer membrane SAW sensor arrays show fair selectivity to simulant gases by adapting the PCA method. [Pg.488]


See other pages where PCA method is mentioned: [Pg.82]    [Pg.414]    [Pg.145]    [Pg.188]    [Pg.291]    [Pg.341]    [Pg.775]    [Pg.105]    [Pg.245]    [Pg.211]    [Pg.481]    [Pg.84]    [Pg.86]    [Pg.173]    [Pg.47]    [Pg.67]    [Pg.478]    [Pg.238]    [Pg.742]    [Pg.306]    [Pg.151]    [Pg.301]    [Pg.382]    [Pg.352]    [Pg.411]    [Pg.249]    [Pg.412]    [Pg.532]    [Pg.101]   






© 2024 chempedia.info