Big Chemical Encyclopedia


Regression step

Step 4. So far, however, we have only extracted information from the X- and Y-blocks; a regression step is now needed to obtain a predictive model. This is achieved by establishing an inner relationship between u (the scores representing the concentrations of the analytes we want to predict) and t (the scores representing the spectral absorbances) that we just calculated for the X- and Y-blocks. The simplest choice is an ordinary regression (note that the regression coefficient is just a scalar because we are relating two vectors) ... [Pg.188]
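The scalar inner relation can be sketched in a few lines. This is a hypothetical numerical illustration, not from the source: the scores t and u are simulated directly rather than computed from real spectra and concentrations.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated scores: t from the X-block (spectra), u from the Y-block
# (concentrations); in a real PLS fit both come from the block decompositions.
t = rng.normal(size=20)
u = 2.5 * t + rng.normal(scale=0.1, size=20)

# Ordinary regression of u on t; the coefficient b is a scalar
# because we are relating two vectors.
b = (t @ u) / (t @ t)
```

With low noise, b recovers the slope (2.5 here) used to generate the simulated scores.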

Principal component regression is simply an extension of the PCA data compression method described earlier (Section 8.2.6.2), where a regression step is added after the data... [Pg.259]

Next, we can construct outlier maps as in Sections 6.5.5 and 6.4.2.3. ROBPCA yields the PCA outlier map displayed in Figure 6.12a. We see that there are no PCA leverage points, but there are some orthogonal outliers, the largest being cases 23, 7, and 20. The result of the regression step is shown in Figure 6.12b. It plots the robust distances of the residuals (or the standardized residuals if q = 1) vs. the score... [Pg.200]

It is important to recall at this point that k comprises only the nonlinear parameters, i.e., the rate constants. The linear parameters, i.e., the elements of the matrix A containing the molar absorptivities, are solved in a separate linear regression step, as described earlier in Equation 7.9 and Equation 7.10. [Pg.233]
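The separation of linear and nonlinear parameters can be illustrated with a small sketch. The first-order kinetic model, data, and the helper `fit_linear` below are hypothetical illustrations of the idea, not the book's Equations 7.9 and 7.10 verbatim.

```python
import numpy as np

# Hypothetical first-order reaction A -> B observed at three wavelengths.
k_true = 0.3                            # nonlinear parameter: the rate constant
t = np.linspace(0, 10, 50)
C = np.column_stack([np.exp(-k_true * t),
                     1 - np.exp(-k_true * t)])   # concentration profiles
E_true = np.array([[1.0, 0.2, 0.5],
                   [0.3, 1.5, 0.1]])    # molar absorptivities (2 species x 3 wavelengths)
Y = C @ E_true                          # noise-free absorbance matrix (Beer-Lambert)

def fit_linear(k):
    """Given a trial rate constant k (the nonlinear parameter), solve for the
    molar absorptivities in a separate linear regression step."""
    Ck = np.column_stack([np.exp(-k * t), 1 - np.exp(-k * t)])
    E, *_ = np.linalg.lstsq(Ck, Y, rcond=None)
    return Ck, E
```

An outer nonlinear optimizer would vary only k, calling `fit_linear` at each step to obtain the best linear parameters for that k.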

Principal component regression (PCR) is one of the supervised methods commonly employed to analyze NMR data. This method is typically used for developing a quantitative model. In simple terms, PCR can be thought of as PCA followed by a regression step. In PCR, the scores matrix (T) obtained in PCA (Section 3.1) is related to an external variable in a least squares sense. Recall that the data matrix can be reconstructed or estimated using a limited number of factors (Nfact), such that only the k = 1, ..., Nfact PCA loadings (l_k) are required to describe the data matrix. Eq. (15) can be reconstructed as... [Pg.61]
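A minimal PCR sketch along these lines, using synthetic low-rank data (the variable names and the rank-3 construction are illustrative assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic rank-3 data: 30 samples, 8 variables
X = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 8))
y = X @ rng.normal(size=8)              # response built from the same structure

n_fact = 3                              # number of retained factors
Xc, yc = X - X.mean(axis=0), y - y.mean()

# PCA step: scores matrix T from the SVD of the centred data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = U[:, :n_fact] * s[:n_fact]

# Regression step: relate the scores matrix T to y in a least-squares sense
q, *_ = np.linalg.lstsq(T, yc, rcond=None)
y_hat = T @ q + y.mean()
```

Because the data are exactly rank 3 and y lies in the column space of X, three factors reproduce y; with real noisy data the choice of Nfact trades off fit against overfitting.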

To include information about process dynamics, lagged variables can be included in X. The (auto)correlograms of all x variables should first be examined to determine how many lagged values are relevant for each variable. The data matrix is then augmented accordingly and used to determine the principal components that will be used in the regression step. [Pg.79]
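Augmenting the data matrix with lagged values might look like the following sketch. The helper `augment_with_lags` is hypothetical; in practice the number of lags per variable would come from inspecting the correlograms.

```python
import numpy as np

def augment_with_lags(X, lags):
    """Append lagged copies of each column of X.

    lags[j] gives how many past values to include for column j;
    the first max(lags) rows are lost to lagging.
    (Hypothetical helper for illustration.)
    """
    n, max_lag = X.shape[0], max(lags)
    blocks = []
    for j, L in enumerate(lags):
        cols = [X[max_lag - l: n - l, j] for l in range(L + 1)]
        blocks.append(np.column_stack(cols))
    return np.hstack(blocks)
```

For a 6-sample, 2-variable matrix with one lag on the first variable and two on the second, the augmented matrix has 5 columns and loses the first two rows.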

As mentioned previously, one of the main advantages of PLS is that the resulting spectral vectors are directly related to the constituents of interest. This is entirely unlike PCR, where the vectors merely represent the most common spectral variations in the data, completely ignoring their relation to the constituents of interest until the final regression step. [Pg.43]

The restriction that the columns of T lie in the column space of X is not active. Regardless of P, T should minimize ‖X − TPᵀ‖². This is a simple regression step, which results in T = XP(PᵀP)⁻¹. Hence, each column of T is automatically in the column space of X. [Pg.55]
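This can be checked numerically with random matrices (a small sketch, not from the source):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 4))
P = rng.normal(size=(4, 2))     # an arbitrary loading matrix

# Least-squares minimiser of ||X - T P'||^2 over T
T = X @ P @ np.linalg.inv(P.T @ P)

# Orthonormal basis of the column space of X, used to verify that
# projecting T onto that space leaves it unchanged.
Q, _ = np.linalg.qr(X)
```

Projecting T onto the column space of X returns T itself, confirming that the constraint is satisfied automatically.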

PLS combines the PCA step of PCR with the regression step. Latent variables, like PCs, are calculated to explain most of the variance in the x set while remaining orthogonal to one another. Thus, the first latent variable (LV1) will explain most of the variance in the independent set, LV2 the next largest amount of variance, and so on. The important difference between PLS and PCR is that the latent variables are constructed so as to maximize their correlation with the dependent variable. Unlike PCR equations, where the PCs do not enter in any particular order (see eqns 7.6 to 7.8), the latent variables enter PLS equations in the order one, two, three, etc. The properties of latent variables are ... [Pg.154]
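The difference can be seen in a small numerical sketch (hypothetical data, not from the source) in which the response depends on a low-variance direction of X: the first PLS weight vector tracks the response, while the first PC tracks the dominant variance.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))
X[:, 0] *= 5                                 # direction of largest variance
y = X[:, 1] + 0.05 * rng.normal(size=200)    # y depends on a low-variance direction

Xc, yc = X - X.mean(axis=0), y - y.mean()

pc1 = np.linalg.svd(Xc, full_matrices=False)[2][0]   # first PCA loading
w1 = Xc.T @ yc
w1 /= np.linalg.norm(w1)                             # first PLS weight vector

t_pc, t_pls = Xc @ pc1, Xc @ w1
corr_pc = abs(np.corrcoef(t_pc, yc)[0, 1])
corr_pls = abs(np.corrcoef(t_pls, yc)[0, 1])
```

Here the first PLS score correlates far more strongly with y than the first PC score does, which is exactly why the latent variables enter the model in a useful order.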

Now this is actually a bit of an oversimplification. Unlike PCR, PLS is a one-step process. In other words, there is no separate regression step. Instead, PLS performs the decomposition on both the spectral and concentration data simultaneously. As each new factor is calculated for the model, the scores are swapped before the contribution of the factor is removed from the raw data. The newly reduced data matrices are then used to calcu-... [Pg.116]
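A minimal NIPALS-style PLS1 sketch of this simultaneous decomposition follows. It is illustrative only: real implementations differ in normalization, convergence, and stopping details, and the demo data are synthetic.

```python
import numpy as np

def pls1_nipals(X, y, n_factors):
    """Sketch of NIPALS PLS1: X and y are decomposed together, each factor
    being removed from both blocks before the next is extracted.
    (Illustrative, not a production implementation.)"""
    X, y = X.copy().astype(float), y.copy().astype(float)
    T, q = [], []
    for _ in range(n_factors):
        w = X.T @ y
        w /= np.linalg.norm(w)       # weight vector for this factor
        t = X @ w                    # X-block scores
        p = X.T @ t / (t @ t)        # X-block loadings
        c = (y @ t) / (t @ t)        # inner regression coefficient
        X -= np.outer(t, p)          # remove the factor from X ...
        y -= c * t                   # ... and from y
        T.append(t)
        q.append(c)
    return np.array(T).T, np.array(q)

# Demo: with as many factors as the rank of X, the training fit is exact
rng = np.random.default_rng(4)
X0 = rng.normal(size=(20, 5))
y0 = X0 @ rng.normal(size=5)         # noise-free response
T, q = pls1_nipals(X0, y0, 5)
```

Note how each pass deflates both blocks at once; there is no separate regression step after the decomposition.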



