Big Chemical Encyclopedia


Compression variable

PCA is a data compression method that reduces a set of data collected on M variables over N samples to a simpler representation that uses a much smaller number A (where A << M) of compressed variables, called principal components (or PCs). The mathematical model for the PCA method is provided below ... [Pg.362]
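For concreteness, a minimal sketch of this kind of compression, assuming mean-centered data and an SVD implementation in NumPy; the data and the choice of A are placeholders, and T and P anticipate the scores/loadings notation used further below:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, A = 50, 6, 2                 # N samples, M original variables, A << M
X = rng.normal(size=(N, M))        # placeholder data matrix

Xc = X - X.mean(axis=0)            # mean-center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

P = Vt[:A].T                       # loadings (M x A)
T = Xc @ P                         # scores (N x A): the compressed variables
X_hat = T @ P.T + X.mean(axis=0)   # rank-A approximation of the original data
print(T.shape, P.shape)            # (50, 2) (6, 2)
```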

Figure 12.3 provides a scatter plot of the first two PC loadings, which can be used to roughly interpret the two new compressed variables in terms of the original four variables. In this case, it appears that the first PC describes a contrast between the sepal width and the other three x variables, while the second PC describes the two sepal measurements only. The plot also shows that the petal width and petal length... [Pg.364]
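Such a loadings plot is straightforward to reproduce; a rough sketch, assuming the classic four-variable iris data (sepal and petal length and width) and matplotlib:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

iris = load_iris()
X = scale(iris.data)                       # autoscale the four x variables
pca = PCA(n_components=2).fit(X)
loadings = pca.components_.T               # rows: variables, columns: PC1, PC2

fig, ax = plt.subplots()
ax.scatter(loadings[:, 0], loadings[:, 1])
for name, (p1, p2) in zip(iris.feature_names, loadings):
    ax.annotate(name, (p1, p2))            # label each variable's loading
ax.set_xlabel("PC1 loading")
ax.set_ylabel("PC2 loading")
plt.show()
```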

Like MLR, PCR [63] is an inverse calibration method. However, in PCR the compressed variables (or PCs) from PCA are used as variables in the multiple linear regression model, rather than selected original X variables. In PCR, PCA is first done on the calibration x data, generating PCA scores (T) and loadings (P) (see Section 12.2.5); a multiple linear regression is then carried out according to the following model ... [Pg.383]
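A minimal PCR sketch along these lines, using NumPy only; the function names pcr_fit/pcr_predict and the mean-centering convention are illustrative, not from the source:

```python
import numpy as np

def pcr_fit(X, y, A):
    """Illustrative PCR: PCA on mean-centered X, then MLR of y on the scores."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:A].T                               # PCA loadings
    T = Xc @ P                                 # PCA scores (compressed variables)
    q = np.linalg.lstsq(T, yc, rcond=None)[0]  # regression coefficients on the scores
    b = P @ q                                  # equivalent coefficients in x space
    return b, x_mean, y_mean

def pcr_predict(Xnew, b, x_mean, y_mean):
    return (Xnew - x_mean) @ b + y_mean
```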

The difference between PLS and PCR is the manner in which the x data are compressed. Unlike the PCR method, where x data compression is done solely on the basis of explained variance in X, followed by subsequent regression of the compressed variables (PCs) onto y (a simple two-step process), PLS data compression is done such that the most variance in both x and y is explained. Because the compressed variables obtained in PLS are different from those obtained in PCA and PCR, they are not principal components (or PCs). Instead, they are often referred to as latent variables (or LVs). [Pg.385]
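A brief PLS sketch using scikit-learn's PLSRegression; the data and the number of latent variables are placeholders, and the algorithmic details are left to the library:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))                 # placeholder calibration data
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=40)

pls = PLSRegression(n_components=3)           # 3 latent variables (LVs)
pls.fit(X, y)
T = pls.x_scores_                             # LV scores: the compressed variables
y_hat = pls.predict(X).ravel()                # fitted values for the calibration set
```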

The PLS regression method can be extended to accommodate problems where multiple y variables must be predicted. This extension, commonly called PLS2, operates on the same principle as PLS, where the goal is to find compressed variables (latent variables) that sequentially describe the most variance in both the x and y data [1]. However, the algorithm for the PLS2 method is slightly different from that of the PLS method, in that one must now account for covariance between different y variables as well as covariance between x variables. [Pg.387]
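In practice, the same scikit-learn call accepts a multi-column Y, which is enough to illustrate the PLS2 idea (again with placeholder data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 8))
Y = np.column_stack([X[:, 0] + X[:, 1],            # first y variable
                     X[:, 2] - 0.5 * X[:, 3]])     # second y variable
Y = Y + rng.normal(scale=0.1, size=Y.shape)

pls2 = PLSRegression(n_components=3).fit(X, Y)     # Y is now a matrix
Y_hat = pls2.predict(X)                            # one predicted column per y variable
print(Y_hat.shape)                                 # (50, 2)
```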

Initial Polymorph Compression Variables (Force/Dwell Time) Conversion (%)... [Pg.547]

As in PCR, the compressed variables in PLS have the mathematical property of orthogonality, with the technical and practical advantages thereof. PLS models can also be built knowing only the property of interest for the calibration samples. [Pg.262]
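The orthogonality of the PLS scores is easy to check numerically; a small sketch, again assuming scikit-learn's PLSRegression and arbitrary placeholder data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=40)

T = PLSRegression(n_components=3).fit(X, y).x_scores_
G = T.T @ T                                   # Gram matrix of the LV scores
off_diag = G - np.diag(np.diag(G))
print(np.allclose(off_diag, 0.0, atol=1e-8))  # True: scores are mutually orthogonal
```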

There are some distinct advantages of the PLS regression method over the PCR method. Because Y-data are used in the data compression step, it is often possible to build PLS models that are simpler (i.e. require fewer compressed variables), yet just as effective as more complex PCR models built from the same calibration data. In the process analytical world, simpler models are more stable over time and easier to maintain. There is also a small advantage of PLS for qualitative interpretative purposes. Even though the latent variables in PLS are still abstract, and rarely express pure chemical or physical phenomena, they are at least more relevant to the problem than the PCs obtained from PCR. [Pg.263]
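One way to illustrate this parsimony argument is to compare cross-validated error for PCR and PLS as the number of compressed variables grows; everything below (data, component range, CV settings) is a placeholder sketch, not a definitive benchmark:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 15))                      # placeholder calibration data
y = X @ rng.normal(size=15) + rng.normal(scale=0.5, size=60)

def rmsecv(model):
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    return np.sqrt(mse)

for a in range(1, 6):
    pcr = make_pipeline(PCA(n_components=a), LinearRegression())
    pls = PLSRegression(n_components=a)
    print(f"{a} components: RMSECV PCR={rmsecv(pcr):.3f}  PLS={rmsecv(pls):.3f}")
```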

In order to handle multiple Y-variables, an extension of the PLS regression method discussed earlier, called PLS-2, must be used [1]. The algorithm for the PLS-2 method is quite similar to the PLS algorithms discussed earlier. Just like the PLS method, this method determines each compressed variable (latent variable) based on the maximum variance explained in both X and Y. The only difference is that Y is now a matrix that contains several Y-variables. For PLS-2, the second equation in the PLS model (Equation 8.36) can be replaced with the following ... [Pg.292]

There are several distinctions of the PLS-DA method versus other classification methods. First of all, the classification space is unique: it is not based on X-variables or on PCs obtained from PCA, but rather on the latent variables obtained from PLS or PLS-2 regression. Because these compressed variables are determined using the known class membership information in the calibration data, they should be more relevant for separating the samples by their classes than the PCs obtained from PCA. Secondly, the classification rule is based on results obtained from quantitative PLS prediction. When this method is applied to an unknown sample, one obtains a predicted number for each of the Y-variables. Statistical tests, such as the t-test discussed earlier (Section 8.2.2), can then be used to determine whether these predicted numbers are sufficiently close to 1 or 0. Another advantage of the PLS-DA method is that it can, in principle, handle cases where an unknown sample belongs to more than one class, or to no class at all. [Pg.293]
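A rough PLS-DA sketch, assuming dummy-coded (0/1) class columns in Y and a simple largest-prediction assignment rule in place of the statistical test described above:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import load_iris

X, classes = load_iris(return_X_y=True)
Y = np.eye(3)[classes]                      # N x 3 dummy matrix of 0/1 class columns

plsda = PLSRegression(n_components=2).fit(X, Y)
Y_pred = plsda.predict(X)                   # continuous prediction for each class column
assigned = Y_pred.argmax(axis=1)            # crude assignment: largest predicted value
print((assigned == classes).mean())         # training-set classification rate
```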

Like the PLS-DA method, and many of the quantitative modeling methods discussed above, the LDA method is susceptible to overfitting through the use of too many compressed variables (or LDs, in this case). Furthermore, as in PLS-DA, it assumes that the classes can be linearly separated in the classification space. As a result, it can also be hindered by strong natural separation of samples that is irrelevant to any of the known classes. [Pg.294]
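As one illustration of keeping the number of compressed variables small, the sketch below feeds PCA scores into LDA and checks performance by cross-validation; this is an illustrative pipeline on a stock data set, not necessarily the exact procedure the text has in mind:

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=5),               # keep few compressed variables
                    LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())         # cross-validated accuracy
```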

Three commonly used ANN methods for classification are the perceptron network, the probabilistic neural network, and learning vector quantization (LVQ) networks. Details on these methods can be found in several references [57,58]; only an overview of them will be presented here. In all cases, one can use all available X-variables, a selected subset of X-variables, or a set of compressed variables (e.g. PCs from PCA) as inputs to the network. As with quantitative neural networks, the network parameters are estimated by applying a learning rule to a series of samples of known class, the details of which will not be discussed here. [Pg.296]
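The specific networks named above are not all available off the shelf in scikit-learn, so the sketch below substitutes a generic multilayer perceptron that takes PCA scores as its compressed inputs; the data and network settings are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)
net = make_pipeline(StandardScaler(),
                    PCA(n_components=5),               # compressed variables as inputs
                    MLPClassifier(hidden_layer_sizes=(10,),
                                  max_iter=2000, random_state=0))
net.fit(X, y)
print(net.score(X, y))                                 # training-set accuracy
```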

Table 4.1. Values of interest for the reaction-progress variable X, the compression variable i, the Brønsted coefficient α, and their significance.
How can parsimonious models be constructed? There are several possible approaches; in this chapter, however, a combination of data compression and variable selection will be used. Data compression achieves parsimony through the reduction of redundancy in the data representation. However, compression that does not involve information about the dependent variables will not be optimal. It is therefore suggested that variable selection be performed on the compressed variables and not on the original variables, which is the usual strategy. Variable selection has been applied with success in fields such as analytical chemistry [1-4], quantitative structure-activity relationships (QSAR) [5-8] and analytical biotechnology [9-11]. [Pg.352]
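A hedged sketch of that strategy: compress X with PCA, then select among the compressed variables (scores) by their correlation with y rather than selecting among the original variables; the top-k absolute-correlation rule used here is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 20))                          # placeholder data
y = X[:, 0] - X[:, 5] + rng.normal(scale=0.2, size=80)

Xc, yc = X - X.mean(axis=0), y - y.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt.T                                          # all PCA scores

# Rank the compressed variables by |correlation with y| and keep the best few.
corr = np.array([np.corrcoef(T[:, a], yc)[0, 1] for a in range(T.shape[1])])
keep = np.argsort(-np.abs(corr))[:3]
q = np.linalg.lstsq(T[:, keep], yc, rcond=None)[0]     # regression on the selected scores
print("selected PCs:", keep, "coefficients:", q)
```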

The first effect is a long-time, progressive accumulation of discretization errors (and possibly truncation errors if a compressed-variable representation is used). This cumulative effect is quite harmless for trajectories along which the residence time of the particles is low, in the sense that (t/Δt)Δu ≪ u. ... [Pg.531]

The Video tab in the Custom Settings dialog for the .mov format contains all of the codec and compression variables. [Pg.242]


