
Variance-covariance matrix decomposition

The power algorithm [21] is the simplest iterative method for the calculation of latent vectors and latent values from a square symmetric matrix. In contrast to NIPALS, which produces an orthogonal decomposition of a rectangular data table X, the power algorithm decomposes a square symmetric matrix of cross-products XᵀX, which we denote by Cp. Note that Cp is called the column variance-covariance matrix when the data in X are column-centered. [Pg.138]
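The NumPy sketch below illustrates power iteration with deflation on a column-centered cross-product matrix Cp. The function name, example data, and deflation step are illustrative assumptions, not the implementation of [21].

```python
import numpy as np

def power_eig(C, n_vectors=2, tol=1e-10, max_iter=1000):
    """Leading eigenvalues/eigenvectors of a symmetric matrix C by
    power iteration with deflation (illustrative sketch, not ref. [21])."""
    C = C.copy()
    rng = np.random.default_rng(0)
    values, vectors = [], []
    for _ in range(n_vectors):
        v = rng.standard_normal(C.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(max_iter):
            w = C @ v
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v) < tol:   # converged to the dominant latent vector
                v = w
                break
            v = w
        lam = float(v @ C @ v)                 # corresponding latent value (eigenvalue)
        values.append(lam)
        vectors.append(v)
        C = C - lam * np.outer(v, v)           # deflate and repeat for the next vector
    return np.array(values), np.column_stack(vectors)

# column-centre X and form the column variance-covariance matrix Cp (example data)
X = np.random.default_rng(1).standard_normal((20, 5))
Xc = X - X.mean(axis=0)
Cp = Xc.T @ Xc / (Xc.shape[0] - 1)
values, vectors = power_eig(Cp)
```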

Once the n×n variance-covariance matrix C has been derived, one can apply eigenvalue decomposition (EVD), as explained in Section 31.4.2. In this case we obtain ... [Pg.148]
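A minimal sketch of the EVD step, assuming column-centered example data and using NumPy's symmetric eigensolver; the variable names and data are illustrative.

```python
import numpy as np

# assumed example data; eigh returns eigenvalues of a symmetric matrix in
# ascending order, so they are reversed to sort by decreasing variance
X = np.random.default_rng(0).standard_normal((50, 4))
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (Xc.shape[0] - 1)        # variance-covariance matrix

eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# reconstruction check: C = V diag(lambda) V^T
assert np.allclose(C, eigvecs @ np.diag(eigvals) @ eigvecs.T)
```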

The root of these methods is the decomposition of multivariate data into a series of orthogonal factors (eigenvectors), also called abstract factors. These factors are linear combinations of a set of orthogonal basis vectors that are the eigenvectors of the variance-covariance matrix (XᵀX) of the original data matrix. The eigenvalues of this variance-covariance matrix are the solutions λ₁, . . ., λₙ of the determinantal equation... [Pg.175]
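A brief sketch on made-up data, showing that the eigenvalues of XᵀX are the roots of the determinantal (characteristic) equation det(XᵀX − λI) = 0; the data and dimensions are assumptions for illustration.

```python
import numpy as np

X = np.random.default_rng(2).standard_normal((30, 3))
XtX = X.T @ X                              # cross-product (variance-covariance) matrix

lambdas = np.linalg.eigvalsh(XtX)          # solutions of the determinantal equation
for lam in lambdas:
    # each eigenvalue drives det(XtX - lam*I) to zero, up to floating-point error
    print(lam, np.linalg.det(XtX - lam * np.eye(XtX.shape[0])))
```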

Let u be a vector-valued stochastic variable of dimension D × 1 with covariance matrix Ru of size D × D. The key idea is to linearly transform all observation vectors u_y to new variables z_y = Wᵀu_y, and then solve the optimization problem (1) with u_y replaced by z_y. We choose the transformation so that the covariance matrix of z is diagonal and (more importantly) none of its eigenvalues is too close to zero. (Loosely speaking, the eigenvalues close to zero are those responsible for the large variance of the OLS solution.) In order to find the desired transformation, a singular value decomposition of Ru is performed, yielding... [Pg.888]
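A hedged sketch of such a transformation, with Ru estimated from example data: the SVD of the symmetric covariance matrix supplies the directions, directions with near-zero eigenvalues are discarded, and the transformed variables have a diagonal, well-conditioned covariance. The names W and R_u and the tolerance are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
U = rng.standard_normal((200, 6))           # observation vectors u_y as rows
Uc = U - U.mean(axis=0)
R_u = Uc.T @ Uc / (Uc.shape[0] - 1)         # covariance matrix of u

# SVD of the symmetric covariance matrix (equivalent to its EVD)
Q, s, _ = np.linalg.svd(R_u)

# keep only directions whose eigenvalues are not too close to zero
keep = s > 1e-8 * s.max()
W = Q[:, keep]                               # transformation matrix

Z = Uc @ W                                   # rows are z_y = W^T u_y
R_z = Z.T @ Z / (Z.shape[0] - 1)
# R_z is diagonal, with the retained eigenvalues on the diagonal
assert np.allclose(R_z, np.diag(s[keep]))
```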

Li and Der Kiureghian (1993) introduced a spectral decomposition of the nodal covariance matrix. They showed that the maximum error of the KL expansion is not always smaller than the error of Kriging for a given number of retained terms. The point-wise variance error estimator of the KL expansion for a given order of truncation is smaller than the error of Kriging in the interior of the discretization domain but larger at the boundaries. Note however that the... [Pg.3473]
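For illustration only, a sketch of a truncated spectral (Karhunen-Loève) expansion of a nodal covariance matrix on a 1-D grid, together with its point-wise variance error. The exponential kernel, correlation length, grid, and truncation order are assumptions, not the setting of Li and Der Kiureghian (1993).

```python
import numpy as np

# 1-D grid and an assumed exponential covariance kernel (correlation length 0.2)
x = np.linspace(0.0, 1.0, 101)
Cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)

# spectral (eigen) decomposition of the nodal covariance matrix
eigvals, eigvecs = np.linalg.eigh(Cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# truncated KL expansion with M terms and its point-wise variance error
M = 10
Cov_M = eigvecs[:, :M] @ np.diag(eigvals[:M]) @ eigvecs[:, :M].T
variance_error = np.diag(Cov) - np.diag(Cov_M)   # variance missed by the truncation
print(variance_error.max(), x[variance_error.argmax()])
```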

There are essentially two approaches to robust PCA. The first is based on PCA of a robust covariance matrix, which is rather straightforward since the PCs are the eigenvectors of the covariance matrix. Different robust estimators of the covariance matrix may be adopted (MVT [92], MVE and MCD [93]), but the decomposition algorithm is the same. The second approach is based on projection pursuit (PP), using a projection aimed at maximizing a robust measure of scale; that is, in a PP algorithm, the direction with maximum robust variance of the projected data is pursued, and different search algorithms have been proposed for this. [Pg.122]
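A minimal sketch of the first approach, using scikit-learn's MinCovDet as the MCD covariance estimator followed by an ordinary eigendecomposition of the robust covariance matrix. The data and contamination are made up, and this is not the specific algorithm of refs. [92] or [93].

```python
import numpy as np
from sklearn.covariance import MinCovDet

# assumed example data with a few gross outliers
rng = np.random.default_rng(4)
X = rng.standard_normal((100, 5))
X[:5] += 10.0                                # contaminate 5% of the rows

# robust (MCD) estimate of location and covariance
mcd = MinCovDet(random_state=0).fit(X)
C_robust = mcd.covariance_

# the robust principal components are the eigenvectors of this matrix
eigvals, eigvecs = np.linalg.eigh(C_robust)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order]
scores = (X - mcd.location_) @ loadings      # robust PCA scores
```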









