Big Chemical Encyclopedia


Score vector

First, one can check whether a randomly compiled test set lies within the modeling space before employing it for PCA/PLS applications. Suppose one has calculated the score matrix T and the loading matrix P from a training set. Let z be the characteristic vector (that is, the set of independent variables) of an object in the test set. Then we first calculate the score vector of the object (Eq. (14)). [Pg.223]
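The projection just described can be sketched in numpy. This is a minimal illustration, not the source's implementation: the data, the matrix sizes, the choice of two retained components, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative training set: 20 objects, 5 independent variables.
X_train = rng.normal(size=(20, 5))
x_mean = X_train.mean(axis=0)
Xc = X_train - x_mean                  # center, as is customary before PCA

# PCA via SVD; keep a = 2 components.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2].T                           # loading matrix P (5 x 2)
T = Xc @ P                             # score matrix T (20 x 2)

# Characteristic vector z of a test object: center it with the
# training mean, then project it onto the loadings.
z = rng.normal(size=5)
t_new = (z - x_mean) @ P               # score vector of the test object
```

Comparing `t_new` against the spread of the training scores in `T` is one way to judge whether the test object lies inside the modeling space.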

This implies that the scalar product of a score vector with a loading vector allows us to reconstruct the values in the table X. Written in full, this is equivalent... [Pg.102]
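The reconstruction property can be verified numerically: with all components retained, the sum of outer products of score and loading vectors returns the (centered) table exactly. A small sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))
Xc = X - X.mean(axis=0)                # centered data table

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt.T                          # full score matrix (all components)
P = Vt.T                               # full loading matrix

# Sum of the outer products t_k p_k^T reconstructs the centered table;
# each entry of Xc is a scalar product of a score row and a loading row.
X_rebuilt = T @ P.T
```

Truncating T and P to fewer components gives the best least-squares approximation of the table instead of an exact reconstruction.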

Fig. 31.12. Dance-step diagram, illustrating a cycle of the iterative NIPALS algorithm. Step 1 multiplies the score vector t with the data table X, which produces the weight vector w. Step 2 normalizes w to unit sum of squares. In step 3, X is multiplied by w, yielding an updated t.
The application of principal components regression (PCR) to multivariate calibration introduces a new element, viz. data compression through the construction of a small set of new orthogonal components or factors. Henceforth, we will mainly use the term factor rather than component in order to avoid confusion with the chemical components of a mixture. The factors play an intermediary role as regressors in the calibration process. In PCR the factors are obtained as the principal components (PCs) from a principal component analysis (PCA) of the predictor data, i.e. the calibration spectra S (n x p). In Chapters 17 and 31 we saw that any data matrix can be decomposed ('factored') into a product of (object) score vectors T (n x r) and (variable) loadings P (p x r). The number of columns in T and P is equal to the rank r of the matrix S, usually the smaller of n or p. It is customary and advisable to do this factoring on the data after column-centering. This allows one to write the mean-centered spectra S0 as ... [Pg.358]
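The PCR procedure described above can be sketched end to end: factor the column-centered spectra, regress the concentrations on the scores, then predict. All sizes, data, and names here are illustrative assumptions, not values from the source.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, a = 30, 8, 3                    # objects, wavelengths, retained factors
S = rng.normal(size=(n, p))           # calibration spectra (illustrative)
c = rng.normal(size=n)                # concentrations of one analyte

# 1) Column-center the spectra and factor them: S0 = T P^T.
s_mean = S.mean(axis=0)
S0 = S - s_mean
U, sv, Vt = np.linalg.svd(S0, full_matrices=False)
P = Vt[:a].T                          # loadings of the first a factors
T = S0 @ P                            # scores: the factors used as regressors

# 2) Regress the (centered) concentrations on the scores.
q, *_ = np.linalg.lstsq(T, c - c.mean(), rcond=None)

# 3) Predict a new sample: project its spectrum, then apply the regression.
s_new = rng.normal(size=p)
c_hat = (s_new - s_mean) @ P @ q + c.mean()
```

Because the scores are orthogonal, the regression step is numerically well conditioned, which is one of the practical motivations for the data compression.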

The score vector, z1, is found in a two-step operation to guarantee that the covariance of the scores is maximized. Once z1, a1, u1, and q1 have been found, the procedure is repeated for the residual matrices EX,1 and EY,1 to find z2, a2, u2, and q2. This continues until the residuals contain no... [Pg.36]

Calculation of scores as described by Equations 2.20 and 2.21 can be considered geometrically as an orthogonal projection (a linear mapping) of a vector x onto a straight line defined by the loading vector b (Figure 2.15). For n objects, a score vector u is obtained containing the scores for the objects (the values of the linear latent variable for all objects). [Pg.65]

Score vector: the value of the latent variable for each object. [Pg.65]

The last part of Equation 3.2 expresses this orthogonal projection of the data on the latent variable. For all n objects arranged as rows in the matrix X, the score vector t1 of PC1 is obtained by... [Pg.74]

All loading vectors are collected as columns in the loading matrix, P, and all score vectors in the score matrix, T (Figure 3.2). [Pg.75]

The PCA scores have a very powerful mathematical property. They are orthogonal to each other, and since the scores are usually centered, any two score vectors are uncorrelated, resulting in a zero correlation coefficient. No other rotation of the coordinate system except PCA has this property. [Pg.75]
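This orthogonality property is easy to verify numerically: for centered data, the correlation matrix of the PCA scores is the identity. A small check with illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 6))
Xc = X - X.mean(axis=0)               # centering makes score means zero

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt.T                         # all score vectors as columns of T

# Orthogonal, zero-mean score vectors are uncorrelated:
# the pairwise correlation coefficients form an identity matrix.
R = np.corrcoef(T, rowvar=False)
```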

NIPALS starts with an initial score vector u that can be chosen arbitrarily from one of the variables; using the variable with the highest variance has also been proposed (Figure 3.12a). Next, a first approximation, b, of the corresponding... [Pg.87]

u = xj: Start with an initial score vector, for instance an arbitrarily chosen variable j, or the variable with the highest variance. [Pg.88]

The NIPALS algorithm is efficient if only a few PCA components are required. Because the deflation procedure increases the uncertainty of subsequent components, the algorithm is not recommended for the computation of many components (Seasholtz et al. 1990). The algorithm fails if convergence is reached already after one cycle; in this case, another initial value of the score vector has to be tried (Miyashita et al. 1990). [Pg.89]
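The NIPALS cycle described above (weights from X and t, normalization, updated scores, then deflation) can be sketched as follows. This is a minimal illustration, not the cited authors' code; the convergence tolerance, iteration cap, and starting rule are assumptions.

```python
import numpy as np

def nipals(X, n_components, tol=1e-10, max_iter=500):
    """PCA one component at a time by NIPALS, with deflation."""
    X = X - X.mean(axis=0)                     # column-center (copy)
    n, p = X.shape
    T = np.zeros((n, n_components))
    P = np.zeros((p, n_components))
    for k in range(n_components):
        # Start from the variable with the highest (remaining) variance.
        t = X[:, np.argmax(X.var(axis=0))].copy()
        for _ in range(max_iter):
            w = X.T @ t                        # step 1: weights from t and X
            w /= np.linalg.norm(w)             # step 2: normalize to unit length
            t_new = X @ w                      # step 3: updated score vector
            if np.linalg.norm(t_new - t) < tol * np.linalg.norm(t_new):
                t = t_new
                break
            t = t_new
        T[:, k], P[:, k] = t, w
        X = X - np.outer(t, w)                 # deflate before the next component
    return T, P

# Illustrative data for a quick run.
rng = np.random.default_rng(4)
X = rng.normal(size=(40, 5))
T, P = nipals(X, 2)
```

Up to sign, the result matches the first components of an SVD of the centered data, which is a convenient sanity check when only a few components are needed.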

PCA transforms a data matrix X (n x m), containing data for n objects with m variables, into a matrix of lower dimension T (n x a). In the matrix T each object is characterized by a relatively small number, a, of PCA scores (PCs, latent variables). The score ti of the ith object xi is a linear combination of the vector components (variables) of xi and the vector components (loadings) of a PCA loading vector p; in another formulation, the score is the result of the scalar product of xi and p. The score vector tk of PCA component k contains the scores for all n objects; T is the score matrix for n objects and a components, and P is the corresponding loading matrix (see Figure 3.2). [Pg.113]

The remaining task is to robustly estimate the score vectors T that are needed in the above regression. According to the latent variable model (Equation 4.62) for the... [Pg.177]

The score vectors tj and uj are linear projections of the data onto the corresponding... [Pg.178]

Since the goal of SIMCA is to classify a new object x, a measure for the closeness of the object to the groups needs to be defined. For this purpose, several proposals have been made in the literature. They are based on the orthogonal distance, which represents the Euclidean distance of an object to the PCA space (see Section 3.7.3). First we need to compute the score vector tj of x in the jth PCA space, and using Equation 5.19 and the group center xj we obtain... [Pg.224]
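The orthogonal distance for one group can be sketched as below: project the new object into the group's PCA space, map it back, and measure the Euclidean length of the residual. Group data, sizes, and the two retained components are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.normal(size=(25, 6))           # training objects of group j (illustrative)
g_mean = G.mean(axis=0)                # group center
Gc = G - g_mean
U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
P = Vt[:2].T                           # this group's PCA space (a = 2)

x = rng.normal(size=6)                 # new object to classify
t = (x - g_mean) @ P                   # score vector of x in the group's PCA space
x_fit = g_mean + t @ P.T               # projection of x back into data space
od = np.linalg.norm(x - x_fit)         # orthogonal (Euclidean) distance to the space
```

Repeating this for every group and comparing the distances gives one of the closeness measures SIMCA can use.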

Figure 9.14 Synoptic comparison of PC1 evolution. T1 for acoustic data compared to T1 for the process data, 20 February 2001. The score vectors were normalized for the sake of comparison.
Earlier it was mentioned, and demonstrated using the Fisher Iris example (Section 12.2.5), that the PCA scores (T) can be used to assess relationships between samples in a data set. Similarly, the PCA loadings (P) can be used to assess relationships between variables in a data set. For PCA, the first score vector and the first loading vector make up the first principal component (PC), which represents the most dominant source of variability in the original x data. Subsequent pairs of scores and loadings ([score vector 2, loading vector 2], [score vector 3, loading vector 3]...) correspond to the next most dominant sources of variability. [Pg.398]

This regression vector can then be used to predict the concentration in an unknown sample using a two-step process. First, given the measurement of the unknown and the V and S from the calibration, the score vector for the unknown is obtained using Equation 5.31 ... [Pg.146]

This equation shows that even though the PCR model is written to relate the score vectors to concentrations (Equation 5.29), it is a linear combination of all variables that is being used to estimate the concentrations (as seen in Equation 5.19). [Pg.146]

Whereas the sample leverage is based on the score vector of a sample, this value is based on the residual spectrum. These values are a convenient screening diagnostic for identifying samples that need closer examination. [Pg.160]

Leverage is a measure of the location of a prediction sample in the calibration measurement row space. A high leverage indicates a sample that has an unusual score vector relative to the calibration samples. [Pg.162]
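One common way to compute this leverage from the score vectors is the quadratic form h = t' (T'T)^-1 t (some definitions add a 1/n term, which is omitted here). A sketch with illustrative calibration data:

```python
import numpy as np

rng = np.random.default_rng(6)
Xcal = rng.normal(size=(30, 7))        # calibration measurements (illustrative)
x_mean = Xcal.mean(axis=0)
Xc = Xcal - x_mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:3].T                           # 3 retained factors
T = Xc @ P                             # calibration score matrix

# Leverage of a prediction sample: its scaled squared position in the
# calibration score space; large h flags an unusual score vector.
x_new = rng.normal(size=7)
t_new = (x_new - x_mean) @ P
h = t_new @ np.linalg.inv(T.T @ T) @ t_new
```

Samples whose leverage greatly exceeds that of the calibration samples are extrapolations of the model and deserve closer inspection.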

