
Variance/covariance matrix

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]

The off-diagonal elements of the variance-covariance matrix represent the covariances between different parameters. From the covariances and variances, correlation coefficients between parameters can be calculated. When the parameters are completely independent, the correlation coefficient is zero. As the parameters become more correlated, the correlation coefficient approaches a value of +1 or -1. [Pg.102]
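As a minimal illustration of this relationship (a sketch in Python/NumPy; the 2×2 matrix is a hypothetical parameter variance-covariance matrix, not taken from the text):

```python
import numpy as np

# Hypothetical variance-covariance matrix of two fitted parameters.
C = np.array([[4.0, -1.2],
              [-1.2, 0.9]])

std = np.sqrt(np.diag(C))    # parameter standard deviations from the variances
R = C / np.outer(std, std)   # correlation coefficients: r_ij = cov_ij / (s_i * s_j)
print(R)                     # diagonal is 1; off-diagonal elements lie in [-1, +1]
```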

The important underlying components of protein motion during a simulation can be extracted by a Principal Component Analysis (PCA). This amounts to a diagonalization of the variance-covariance matrix R of the mass-weighted internal displacements during a molecular dynamics simulation. [Pg.73]
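A minimal sketch of such an analysis, assuming the displacements have already been mass-weighted and using a random array in place of a real trajectory:

```python
import numpy as np

# Hypothetical trajectory: n_frames snapshots of m internal coordinates.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))

Xc = X - X.mean(axis=0)           # centre the displacements
R = Xc.T @ Xc / (len(Xc) - 1)     # variance-covariance matrix
lam, V = np.linalg.eigh(R)        # diagonalization of the symmetric matrix

order = np.argsort(lam)[::-1]     # order components by decreasing variance
lam, V = lam[order], V[:, order]
scores = Xc @ V                   # projections onto the principal components
```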

Define the variance-covariance matrix for this vector to be Q = B (BJB B... [Pg.2572]

The standard way to answer the above question would be to compute the probability distribution of the parameter and, from it, for example, the 95% confidence region on the parameter estimate obtained. We would, in other words, find a set of values Θε such that the probability that we are correct in asserting that the true value θ of the parameter lies in Θε is 95%. If we assumed that the parameter estimates are at least approximately normally distributed around the true parameter value (which is asymptotically true in the case of least squares under some mild regularity assumptions), then it would be sufficient to know the parameter dispersion (variance-covariance matrix) in order to be able to compute approximate ellipsoidal confidence regions. [Pg.80]
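A sketch of how such an approximate ellipsoidal region can be evaluated numerically (Python with NumPy/SciPy; the estimate theta_hat and matrix C are hypothetical, and the chi-square quantile rests on the normality assumption stated above):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical parameter estimate and its variance-covariance matrix.
theta_hat = np.array([1.8, 0.4])
C = np.array([[0.040, 0.010],
              [0.010, 0.020]])

# Approximate 95% region:
# {theta : (theta - theta_hat)' C^-1 (theta - theta_hat) <= chi2(p, 0.95)}
p = len(theta_hat)
limit = chi2.ppf(0.95, df=p)

def inside_region(theta):
    d = theta - theta_hat
    return float(d @ np.linalg.solve(C, d)) <= limit

print(inside_region(np.array([1.9, 0.5])))
```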

A special form of cross-product matrix is the variance-covariance matrix (or covariance matrix for short) Cp, which is based on the column-centered matrix Yp derived from an original matrix X: Cp = YpᵀYp/(n − 1). [Pg.49]

The power algorithm [21] is the simplest iterative method for the calculation of latent vectors and latent values from a square symmetric matrix. In contrast to NIPALS, which produces an orthogonal decomposition of a rectangular data table X, the power algorithm decomposes a square symmetric matrix of cross-products XᵀX, which we denote by C. Note that C is called the column variance-covariance matrix Cp when the data in X are column-centered. [Pg.138]
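A minimal sketch of the power algorithm for the first latent vector and latent value of such a symmetric matrix (the function name and convergence settings are illustrative; further latent vectors would require deflating C):

```python
import numpy as np

def power_algorithm(C, n_iter=500, tol=1e-12):
    """First latent vector and latent value of a square symmetric matrix C."""
    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])   # arbitrary start vector
    lam_old = 0.0
    for _ in range(n_iter):
        w = C @ v                   # multiply by C ...
        lam = np.linalg.norm(w)     # ... estimate the latent value ...
        v = w / lam                 # ... and renormalize the vector
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, v
```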

Once the n×n variance-covariance matrix C has been derived, one can apply eigenvalue decomposition (EVD), as explained in Section 31.4.2. In this case we obtain ... [Pg.148]

In eqs. (33.3) and (33.4), x̄i and x̄j are the sample mean vectors that describe the location of the centroids in m-dimensional space, and S is the pooled sample variance-covariance matrix of the training sets of the two classes. [Pg.217]

The use of a pooled variance-covariance matrix implies that the variance-covariance matrices for both populations are assumed to be the same. The consequences of this are discussed in Section 33.2.3. [Pg.217]

A simple two-dimensional example concerns the data from Table 33.1 and Fig. 33.9. The pooled variance-covariance matrix is obtained as [KᵀK + LᵀL]/(n₁ + n₂ − 2), i.e. by first computing for each class the centred sum of squares (for the diagonal elements) and the cross-products between variables (for the other... [Pg.217]
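The same computation, as a sketch (NumPy; X1 and X2 stand for the raw data matrices of the two classes, with K and L their column-centred counterparts as in the formula above):

```python
import numpy as np

def pooled_covariance(X1, X2):
    """Pooled variance-covariance matrix [K'K + L'L] / (n1 + n2 - 2)."""
    K = X1 - X1.mean(axis=0)   # centred class 1 data (n1 x m)
    L = X2 - X2.mean(axis=0)   # centred class 2 data (n2 x m)
    return (K.T @ K + L.T @ L) / (len(X1) + len(X2) - 2)
```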

Computation of the cross-product term in the pooled variance-covariance matrix for the data of Table 33.1. [Pg.219]

When all class variance-covariance matrices are considered equal, they can be replaced by S, the pooled variance-covariance matrix; this is the case for linear discriminant analysis. The discrimination boundaries then are linear and are given by... [Pg.221]

As stated earlier, LDA requires that the variance-covariance matrices of the classes being considered can be pooled. This is only so when these matrices can be considered equal, in the same way that variances can only be pooled when they are considered equal (see Section 2.1.4.4). Equal variance-covariance means that the 95% confidence ellipsoids have an equal volume (variance) and orientation in space (covariance). Figure 33.10 illustrates situations of unequal variance or covariance. Clearly, Fig. 33.1 displays unequal variance-covariance, so that one must expect QDA to give better classification, as is indeed the case (Fig. 33.2). When the number of objects n is smaller than the number of variables m, the variance-covariance matrix is singular. Clearly, this problem is more severe for QDA (which requires m < n) than for LDA, where the variance-covariance matrix is pooled and therefore the number of objects N is the sum of all objects... [Pg.222]

One expects that during the measurement-prediction cycle the confidence in the parameters improves. Thus, the variance-covariance matrix also needs to be updated in each measurement-prediction cycle. This is done as follows [1] ... [Pg.578]

The expression xᵀ(j)P(j − 1)x(j) in eq. (41.4) represents the variance of the prediction, ŷ(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters P(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j) and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix depends only on the design of the experiments given by x and on the variance of the experimental error given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]
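A sketch of one such measurement-prediction cycle (Python/NumPy; the function follows the standard recursive least-squares update consistent with the description of eqs. (41.3) and (41.4), which are not reproduced in this excerpt):

```python
import numpy as np

def rls_update(b, P, x, y, r):
    """One recursive least-squares measurement-prediction cycle.

    b : current parameter estimates
    P : current variance-covariance matrix of the parameters
    x : design vector of the new measurement
    y : observed response
    r : variance of the experimental error in y
    """
    innovation = y - x @ b        # observed minus predicted response
    s = x @ P @ x + r             # a scalar, so no matrix inversion is needed
    k = P @ x / s                 # gain vector
    b = b + k * innovation        # updated parameter estimates
    P = P - np.outer(k, x @ P)    # updated variance-covariance matrix
    return b, P, innovation, k
```

Note that, as stated above, P and k depend only on the design vectors x and the error variance r, not on the observed responses y.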

By way of illustration, the regression parameters of a straight line with slope = 1 and intercept = 0 are recursively estimated. The results are presented in Table 41.1. For each step of the estimation cycle, we included the values of the innovation, variance-covariance matrix, gain vector and estimated parameters. The variance of the experimental error of all observations y is 25·10⁻⁶ absorbance units, which corresponds to r = 25·10⁻⁶ au for all j. The recursive estimation is started with a high value (10 ) on the diagonal elements of P and a low value (1) on its off-diagonal elements. [Pg.580]
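A run mirroring this illustration can be sketched with the rls_update function above (the initial P(0) used here, a large diagonal, and the equally spaced design are assumptions made for the sake of the example):

```python
rng = np.random.default_rng(1)
b = np.zeros(2)             # start from intercept = slope = 0
P = np.diag([1e6, 1e6])     # large diagonal: little initial confidence (assumed value)
r = 25e-6                   # error variance (0.005 au)**2 for every observation

for j in range(8):
    x = np.array([1.0, float(j)])                  # intercept term and x-value
    y = 0.0 + 1.0 * j + rng.normal(scale=0.005)    # true line: intercept 0, slope 1
    b, P, innovation, k = rls_update(b, P, x, y, r)
    print(j, b.round(4), round(innovation, 4))
```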

The sequence of the innovation, gain vector, variance-covariance matrix and estimated parameters of the calibration line is shown in Figs. 41.1-41.4. We can clearly see that after four measurements the innovation stabilizes at the measurement error, which is 0.005 absorbance units. The gain vector decreases monotonically, and the estimates of the two parameters stabilize after four measurements. It should be remarked that the design of the measurements fully defines the variance-covariance matrix and the gain vector in eqs. (41.3) and (41.4), as is the case in ordinary regression. Thus, once the design of the experiments is chosen... [Pg.580]

Fig. 41.3. Evolution of the diagonal elements of the variance-covariance matrix (P) during the estimation process (see Table 41.1).
Influence of the initial values of the diagonal elements of the variance-covariance matrix P(0) and of the variance of the experimental error on the gain vector and the innovation sequence (see Table 41.1 for the experimental values). [Pg.584]

