# SEARCH

**Data variance-covariance matrix**

**Off-diagonal elements of variance-covariance matrix**

**Pooled variance-covariance matrix**

**The Variance-Covariance Matrix**

Once the n×n variance-covariance matrix C has been derived, one can apply eigenvalue decomposition (EVD), as explained in Section 31.4.2. In this case we obtain [Pg.148]
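The decomposition described above can be sketched in NumPy; the small data table below is hypothetical, chosen only to show the mechanics of forming C and diagonalizing it.

```python
import numpy as np

# Hypothetical 4x2 data table (4 objects, 2 variables)
X = np.array([[2.0, 1.0],
              [4.0, 3.0],
              [6.0, 5.0],
              [8.0, 9.0]])

Xc = X - X.mean(axis=0)              # column-center the data
C = Xc.T @ Xc / (X.shape[0] - 1)     # variance-covariance matrix

# C is symmetric, so eigh returns real eigenvalues (variances along the
# latent axes) and orthonormal eigenvectors (the latent vectors).
eigvals, eigvecs = np.linalg.eigh(C)
```

Because C is symmetric, `np.linalg.eigh` is the appropriate routine; it guarantees real eigenvalues and an orthonormal set of eigenvectors.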

The use of a pooled variance-covariance matrix implies that the variance-covariance matrices for both populations are assumed to be the same. The consequences of this are discussed in Section 33.2.3. [Pg.217]

The off-diagonal elements of the variance-covariance matrix represent the covariances between different parameters. From the covariances and variances, correlation coefficients between parameters can be calculated. When the parameters are completely independent, the correlation coefficient is zero. As the parameters become more correlated, the correlation coefficient approaches a value of +1 or -1. [Pg.102]
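The conversion from covariances and variances to correlation coefficients can be sketched as follows; the 2×2 matrix is hypothetical, constructed so the result is easy to check by hand.

```python
import numpy as np

# Hypothetical variance-covariance matrix:
# variances 4 and 9 on the diagonal, covariance 3 off the diagonal.
C = np.array([[4.0, 3.0],
              [3.0, 9.0]])

s = np.sqrt(np.diag(C))        # standard deviations of the parameters
R = C / np.outer(s, s)         # correlation: r_ij = c_ij / (s_i * s_j)
# Here r12 = 3 / (2 * 3) = 0.5; completely independent parameters
# would give r = 0, and perfectly correlated ones r = +1 or -1.
```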

A special form of cross-product matrix is the variance-covariance matrix (or covariance matrix for short) Cp, which is based on the column-centered matrix Yp derived from an original matrix X [Pg.49]
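As a sketch of this construction (with a hypothetical matrix X), the covariance matrix can be formed as a cross-product of the column-centered matrix and checked against NumPy's built-in estimator:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 5.0],
              [5.0, 4.0]])         # hypothetical original matrix

Yp = X - X.mean(axis=0)            # column-centered matrix derived from X
Cp = Yp.T @ Yp / (X.shape[0] - 1)  # covariance matrix as a cross-product
```

Dividing the cross-product Yp′Yp by n − 1 reproduces exactly what `np.cov(X, rowvar=False)` computes.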

Computation of the cross-product term in the pooled variance-covariance matrix for the data of Table 33. [Pg.219]

Fig. 41.3. Evolution of the diagonal elements of the variance-covariance matrix (P) during the estimation process (see Table 41.1).

When all variance-covariance matrices are considered equal, they can be replaced by S, the pooled variance-covariance matrix, which is the case for linear discriminant analysis. The discrimination boundaries are then linear and are given by [Pg.221]

A simple two-dimensional example concerns the data from Table 33.1 and Fig. 33.9. The pooled variance-covariance matrix is obtained as [K′K + L′L]/(n₁ + n₂ − 2), i.e. by first computing for each class the centred sum of squares (for the diagonal elements) and the cross-products between variables (for the other [Pg.217]
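The pooling recipe can be sketched directly; the two small classes below are hypothetical stand-ins for the data of Table 33.1, and K and L are the column-centered observation matrices of the two classes.

```python
import numpy as np

# Hypothetical two-class data (n1 = 3 and n2 = 4 objects, 2 variables)
A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 3.0]])
B = np.array([[6.0, 7.0], [7.0, 9.0], [8.0, 8.0], [9.0, 10.0]])

K = A - A.mean(axis=0)       # centered class 1
L = B - B.mean(axis=0)       # centered class 2
n1, n2 = len(A), len(B)

# Pooled variance-covariance matrix: centred sums of squares on the
# diagonal, cross-products between variables off the diagonal.
S = (K.T @ K + L.T @ L) / (n1 + n2 - 2)
```

Equivalently, S is the weighted average of the two per-class covariance matrices, with weights n₁ − 1 and n₂ − 1.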

One expects that during the measurement-prediction cycle the confidence in the parameters improves. Thus, the variance-covariance matrix also needs to be updated in each measurement-prediction cycle. This is done as follows [1] [Pg.578]
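The exact recursion of reference [1] is not reproduced in this snippet, so the following is only a sketch using the standard Kalman-type measurement update of the covariance matrix; the prior covariance, design vector, and noise variance are hypothetical.

```python
import numpy as np

P = np.eye(2) * 10.0          # prior parameter covariance (large: uncertain)
h = np.array([1.0, 0.5])      # measurement design vector (hypothetical)
r = 1.0                       # measurement noise variance

k = P @ h / (h @ P @ h + r)       # gain vector
P_new = P - np.outer(k, h) @ P    # covariance after the measurement update
# The diagonal elements (parameter variances) can only shrink, reflecting
# the improved confidence after each measurement-prediction cycle.
```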

In eqs. (33.3) and (33.4), x̄₁ and x̄₂ are the sample mean vectors that describe the location of the centroids in m-dimensional space, and S is the pooled sample variance-covariance matrix of the training sets of the two classes. [Pg.217]
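Eqs. (33.3) and (33.4) are not reproduced in this snippet, so the following is a sketch in their spirit: a linear discriminant built from the two class centroids and the pooled covariance matrix S, applied to hypothetical data.

```python
import numpy as np

# Hypothetical two-class training data
A = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 3.0]])   # class 1
B = np.array([[6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])   # class 2

m1, m2 = A.mean(axis=0), B.mean(axis=0)              # centroids
n1, n2 = len(A), len(B)
S = ((n1 - 1) * np.cov(A, rowvar=False)
     + (n2 - 1) * np.cov(B, rowvar=False)) / (n1 + n2 - 2)

w = np.linalg.solve(S, m1 - m2)     # direction normal to the boundary
c = w @ (m1 + m2) / 2               # threshold: midpoint between centroids

def classify(x):
    # Linear decision boundary w'x = c, as in linear discriminant analysis
    return 1 if w @ x > c else 2
```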

The important underlying components of protein motion during a simulation can be extracted by a Principal Component Analysis (PCA). This amounts to a diagonalization of the variance-covariance matrix R of the mass-weighted internal displacements during a molecular dynamics simulation. [Pg.73]
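The procedure can be sketched on a synthetic trajectory; the number of frames, the coordinates, and the masses below are all hypothetical, and a real analysis would first remove overall rotation and translation.

```python
import numpy as np

rng = np.random.default_rng(0)
traj = rng.normal(size=(50, 6))        # 50 frames, 2 atoms x 3 coordinates
masses = np.array([12.0, 12.0])        # e.g. two carbon atoms
w = np.sqrt(np.repeat(masses, 3))      # sqrt-mass weight per coordinate

# Mass-weighted displacements from the average structure
disp = (traj - traj.mean(axis=0)) * w
R = disp.T @ disp / (len(traj) - 1)    # variance-covariance matrix R

eigvals, modes = np.linalg.eigh(R)     # modes = principal components
# The largest eigenvalues correspond to the dominant collective motions.
```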

As stated earlier, LDA requires that the variance-covariance matrices of the classes being considered can be pooled. This is only so when these matrices can be considered equal, in the same way that variances can only be pooled when they are considered equal (see Section 2.1.4.4). Equal variance-covariance means that the 95% confidence ellipsoids have an equal volume (variance) and orientation in space (covariance). Figure 33.10 illustrates situations of unequal variance or covariance. Clearly, Fig. 33.1 displays unequal variance-covariance, so that one must expect that QDA gives better classification, as is indeed the case (Fig. 33.2). When the number of objects is smaller than the number of variables m, the variance-covariance matrix is singular. Clearly, this problem is more severe for QDA (which requires m < n) than for LDA, where the variance-covariance matrix is pooled and therefore the number of objects n is the sum of all objects [Pg.222]

The power algorithm [21] is the simplest iterative method for the calculation of latent vectors and latent values from a square symmetric matrix. In contrast to NIPALS, which produces an orthogonal decomposition of a rectangular data table X, the power algorithm decomposes a square symmetric matrix of cross-products X′X, which we denote by C. Note that C is called the column-variance-covariance matrix Cp when the data in X are column-centered. [Pg.138]

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]

The standard way to answer the above question would be to compute the probability distribution of the parameter and, from it, to compute, for example, the 95% confidence region on the parameter estimate obtained. We would, in other words, find a set of values such that the probability that we are correct in asserting that the true value θ of the parameter lies in this set is 95%. If we assumed that the parameter estimates are at least approximately normally distributed around the true parameter value (which is asymptotically true in the case of least squares under some mild regularity assumptions), then it would be sufficient to know the parameter dispersion (variance-covariance matrix) in order to be able to compute approximate ellipsoidal confidence regions. [Pg.80]
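The geometry of such an ellipsoidal region can be sketched for two parameters; the dispersion matrix below is hypothetical, and 5.991 is the 95% quantile of the chi-squared distribution with 2 degrees of freedom (hard-coded here to keep the sketch dependency-free).

```python
import numpy as np

# Hypothetical parameter variance-covariance (dispersion) matrix
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

chi2_95 = 5.991   # 95% quantile of chi-squared, 2 degrees of freedom

# Eigenvectors give the orientation of the ellipse axes in parameter
# space; eigenvalues scale into the semi-axis lengths.
eigvals, eigvecs = np.linalg.eigh(cov)
half_axes = np.sqrt(chi2_95 * eigvals)
```

Under the normality assumption stated above, the region { θ : (θ − θ̂)′ cov⁻¹ (θ − θ̂) ≤ 5.991 } is the approximate 95% confidence ellipse, and `half_axes` are its semi-axis lengths.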



© 2019 chempedia.info