Big Chemical Encyclopedia


Data variance-covariance matrix

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]

The power algorithm [21] is the simplest iterative method for the calculation of latent vectors and latent values from a square symmetric matrix. In contrast to NIPALS, which produces an orthogonal decomposition of a rectangular data table X, the power algorithm decomposes a square symmetric matrix of cross-products of X, which we denote by C. Note that C is called the column variance-covariance matrix when the data in X are column-centered. [Pg.138]
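A minimal sketch of the power algorithm (function name and convergence settings are illustrative, not taken from reference [21]): repeatedly multiply a trial vector by C and normalize until the vector stabilizes, which yields the dominant latent value and latent vector of a square symmetric matrix.

```python
import numpy as np

def power_algorithm(C, n_iter=200, tol=1e-12):
    """Dominant latent value and latent vector of a square symmetric
    matrix C by power iteration (a sketch, not an optimized routine)."""
    v = np.ones(C.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = C @ v                      # multiply by the cross-product matrix
        lam_new = np.linalg.norm(w)    # current estimate of the latent value
        w /= lam_new                   # renormalize the trial vector
        v, converged = w, abs(lam_new - lam) < tol
        lam = lam_new
        if converged:
            break
    return lam, v
```

For a variance-covariance matrix (positive semi-definite), the iteration converges to the direction of maximum variance; subsequent latent vectors can be obtained by deflating C and repeating.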

A simple two-dimensional example concerns the data from Table 33.1 and Fig. 33.9. The pooled variance-covariance matrix is obtained as [KᵀK + LᵀL]/(n1 + n2 − 2), i.e. by first computing for each class the centred sum of squares (for the diagonal elements) and the cross-products between variables (for the other... [Pg.217]
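As a sketch with hypothetical numbers (not the data of Table 33.1), the pooled variance-covariance matrix of two classes can be computed from the centred class matrices K and L:

```python
import numpy as np

# Two hypothetical classes (rows = objects, columns = 2 variables).
A = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0]])
B = np.array([[5.0, 4.0], [6.0, 6.0], [7.0, 7.0], [8.0, 9.0]])

K = A - A.mean(axis=0)      # centred class matrix for class 1
L = B - B.mean(axis=0)      # centred class matrix for class 2
n1, n2 = len(A), len(B)

# Pooled variance-covariance matrix: centred sums of squares on the
# diagonal, cross-products between variables off the diagonal.
C_pooled = (K.T @ K + L.T @ L) / (n1 + n2 - 2)
```

This is equivalent to the degrees-of-freedom-weighted average of the two within-class covariance matrices.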

Computation of the cross-product term in the pooled variance-covariance matrix for the data of Table 33. [Pg.219]

Important methods of data analysis are based on evaluation of the covariance matrix (variance-covariance matrix). [Pg.256]

The basis for calculating the correlation between two variables xj and xk is the covariance matrix (dimension m × m), which is a square, symmetric matrix. The cases j = k (main diagonal) are covariances between one and the same variable, which are in fact the variances σjj of the variables xj for j = 1, ..., m; for this reason the matrix is also called the variance-covariance matrix (Figure 2.7). Matrix Σ refers to a data population of infinite size, and should not be confused with estimations of it as described in Section 2.3.2, for instance the sample covariance matrix C. [Pg.53]

FIGURE 2.9 Basic statistics of multivariate data and covariance matrix. xT, transposed mean vector; vT, transposed variance vector; vtotal, total variance (sum of the variances v1, ..., vm). C is the sample covariance matrix calculated from mean-centered X. [Pg.55]

Calculate the variance-covariance matrix associated with the straight-line relationship yi = β0 + β1x1i + ri for the following data (see Section 11.2 for a definition of D) ... [Pg.129]
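The exercise's data are not reproduced here, so the following sketch uses hypothetical (x, y) values: fit the straight line by least squares, estimate the residual variance s², and form the parameter variance-covariance matrix s²(XᵀX)⁻¹.

```python
import numpy as np

# Hypothetical data for the model y_i = beta0 + beta1 * x_i + r_i.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 2.9, 5.1, 7.2, 8.8])

X = np.column_stack([np.ones_like(x), x])        # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares estimates
r = y - X @ beta                                 # residuals
s2 = r @ r / (len(y) - 2)                        # residual variance estimate
cov_beta = s2 * np.linalg.inv(X.T @ X)           # variance-covariance matrix of beta0, beta1
```

The square roots of the diagonal elements of `cov_beta` are the standard errors of the intercept and slope; the off-diagonal element is their covariance.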

Standard deviations in unit-cell parameters may be calculated analytically by error propagation. In these programs, however, the Jacobian of the transformation from s1, ..., s6 to unit-cell parameters and volume is evaluated numerically and used to transform the variance-covariance matrix of s1, ..., s6 into the variances of the cell parameters and volume, from which standard deviations are calculated. If suitable standard deviations are not obtained for certain of the unit-cell parameters, it is easy to program the computer to measure additional reflections which correlate strongly with the desired parameters, and repeat the final calculations with these additional data. [Pg.111]
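The numerical scheme described above can be sketched generically (the function name, step size, and test transformation are illustrative, not from the cited programs): evaluate the Jacobian J of the transformation by central differences, then propagate the variance-covariance matrix as J Σ Jᵀ.

```python
import numpy as np

def propagate(f, s, cov_s, eps=1e-6):
    """Numerically evaluate the Jacobian of f at s and use it to transform
    the variance-covariance matrix of s into that of the derived quantities."""
    s = np.asarray(s, dtype=float)
    f0 = np.asarray(f(s), dtype=float)
    J = np.empty((f0.size, s.size))
    for j in range(s.size):
        step = np.zeros_like(s)
        step[j] = eps
        # central-difference estimate of column j of the Jacobian
        J[:, j] = (np.asarray(f(s + step)) - np.asarray(f(s - step))) / (2 * eps)
    return J @ cov_s @ J.T   # variance-covariance matrix of f(s)
```

The diagonal of the returned matrix gives the variances of the derived parameters, whose square roots are the reported standard deviations.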

Figure 12.18 shows a sums-of-squares and degrees-of-freedom tree for the data of Table 12.4 and the model of Equation 12.32. The significance of the parameter estimates may be obtained from Equation 10.66 using sr² and (XᵀX)⁻¹ to obtain the variance-covariance matrix. The (XᵀX)⁻¹ matrix for the present example is... [Pg.244]

From the definition of d, its variance-covariance matrix is evaluated, taking into account the variance-covariance matrices of the input data xi and of the instrument readings yi [1]. [Pg.228]

For convenience, we normalized the univariate normal distribution so that it had a mean of zero and a standard deviation of one (see Section 3.1.2, Equation 3.5 and Equation 3.6). In a similar fashion, we now define the generalized multivariate squared distance of an object's data vector, x, from the mean, μ, where Σ is the variance-covariance matrix (described later) ... [Pg.52]
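The generalized multivariate squared distance described here is the squared Mahalanobis distance, (x − μ)ᵀ Σ⁻¹ (x − μ); a minimal sketch (function name illustrative):

```python
import numpy as np

def mahalanobis_sq(x, mu, cov):
    """Generalized multivariate squared distance of x from the mean mu,
    weighted by the inverse of the variance-covariance matrix cov."""
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    # solve cov @ z = d instead of forming the explicit inverse
    return float(d @ np.linalg.solve(cov, d))
```

With Σ equal to the identity matrix this reduces to the ordinary squared Euclidean distance; correlated or high-variance directions are down-weighted accordingly.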

The most common way to reexpress a data set in chemometrics makes use of the principal components of the data. Here, the data are expressed in terms of the components of the variance-covariance matrix of the data. To get a variance-covariance matrix, we need not one spectrum, as shown in Figure 10.1, but a set of similar spectra for which the same underlying effects are operative. That is, we must have the same true signal and the same noise effects. Neither the exact contributions of signal nor the noise need be identical from spectrum to spectrum, but the same basic effects should be present over the set of data if we are to use variance-covariance matrices to discern how to retain signal and attenuate noise. [Pg.383]
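A sketch of the idea with simulated data (all numbers hypothetical): a set of "spectra" sharing one underlying signal shape, whose contribution varies from spectrum to spectrum, plus random noise. The dominant eigenvector of the variance-covariance matrix then recovers the signal shape, while the small-eigenvalue directions carry mostly noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical set of 20 similar "spectra" with a common band shape.
channels = np.linspace(0.0, 1.0, 50)
signal = np.exp(-((channels - 0.5) ** 2) / 0.01)      # common true signal
amplitudes = 1.0 + 0.3 * rng.standard_normal(20)      # varying signal contribution
spectra = amplitudes[:, None] * signal + 0.01 * rng.standard_normal((20, 50))

Xc = spectra - spectra.mean(axis=0)                   # mean-center the set
C = Xc.T @ Xc / (spectra.shape[0] - 1)                # variance-covariance matrix

# Eigendecomposition: the last eigenvector (largest eigenvalue) spans the
# signal direction; truncating the small-eigenvalue directions attenuates noise.
eigvals, eigvecs = np.linalg.eigh(C)
dominant = eigvecs[:, -1]
```

Reconstructing the data from only the dominant component(s) is the variance-covariance route to retaining signal and attenuating noise described above.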

Instead of using raw data, it is possible to use the PCs of the data. This acts as a form of variable reduction, but also simplifies the distance measures, because the variance-covariance matrix will then contain nonzero elements only on the diagonal. The expressions for the Mahalanobis distance and the linear discriminant functions simplify dramatically. [Pg.242]

The first attempt at estimating interindividual pharmacokinetic variability without neglecting the difficulties (data imbalance, sparse data, subject-specific dosing history, etc.) associated with data from patients undergoing drug therapy was made by Sheiner et al. using the Non-linear Mixed-effects Model Approach. The vector θ of population characteristics is composed of all quantities of the first two moments of the distribution of the parameters: the mean values (fixed effects) and the elements of the variance-covariance matrix that characterize the random effects. [Pg.2951]

The steps involved in the algebraic calculation of the covariance between sodium and potassium concentrations from Table 7 are shown in Table 8. The complete variance-covariance matrix for our data is given in Table 9. [Pg.17]

For the data, the variance-covariance matrix, COV, is square, i.e. the number of rows and the number of columns are the same, and the matrix is symmetric. For a symmetric matrix, xij = xji, and some pairs of entries are duplicated. The covariance between, say, sodium and potassium is identical to that between potassium and sodium. The variance-covariance matrix is said to have diagonal... [Pg.17]
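The algebraic steps of Tables 7-9 are not reproduced here, so the following sketch uses hypothetical sodium and potassium concentrations to show the calculation and the symmetry of the resulting matrix:

```python
import numpy as np

# Hypothetical concentrations for five samples (not the data of Table 7).
na = np.array([10.2, 11.5, 9.8, 12.0, 10.9])
k = np.array([4.1, 4.8, 3.9, 5.2, 4.5])

# Covariance: mean-centred cross-products summed and divided by (n - 1).
cov_na_k = np.sum((na - na.mean()) * (k - k.mean())) / (len(na) - 1)

# Full variance-covariance matrix: variances on the diagonal,
# the Na-K covariance duplicated in the two off-diagonal entries.
COV = np.cov(np.column_stack([na, k]), rowvar=False)
```

The off-diagonal entries are identical, which is the duplication of paired entries described above.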

Before proceeding to examine how principal components are calculated, it is worthwhile considering further a graphical interpretation of their structure and characteristics. For our heart-tissue trace metal data, the variance of the chromium concentration is 3.07, the variance of the nickel concentration is 2.43, and their covariance is 2.47. This variance-covariance structure is represented by the variance-covariance matrix. [Pg.70]

Thus, principal components can be defined as the eigenvectors of a variance-covariance matrix. They provide the direction of new axes (new variables) on to which data can be projected. The size, or length, of these new axes containing our projected data is proportional to the variance of the new variable. [Pg.71]
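Using the chromium/nickel numbers quoted above, the definition can be sketched directly: the eigenvectors of the 2 × 2 variance-covariance matrix give the directions of the new axes, and each eigenvalue is the variance of the data projected onto that axis.

```python
import numpy as np

# Variance-covariance matrix from the chromium/nickel example in the text.
C = np.array([[3.07, 2.47],
              [2.47, 2.43]])

# eigh returns eigenvalues in ascending order for a symmetric matrix;
# the columns of eigvecs are the principal component directions.
eigvals, eigvecs = np.linalg.eigh(C)
pc1 = eigvecs[:, -1]        # direction of maximum variance
pc1_variance = eigvals[-1]  # variance along that direction
```

Note that the eigenvalues sum to the total variance (3.07 + 2.43 = 5.50), so the first principal component here accounts for most of the variance in the data.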

The extraction of the eigenvectors from a symmetric data matrix forms the basis and starting point of many multivariate chemometric procedures. The way in which the data are preprocessed and scaled, and how the resulting vectors are treated, has produced a wide range of related and similar techniques. By far the most common is principal components analysis. As we have seen, PCA provides n eigenvectors derived from an n × n dispersion matrix of variances and covariances, or correlations. If the data are standardized prior to eigenvector analysis, then the variance-covariance matrix becomes the correlation matrix [see Equation (25) in Chapter 1]. Another technique, strongly related to PCA, is factor analysis. [Pg.79]
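The equivalence stated above can be verified numerically (simulated data, purely illustrative): after autoscaling each column to zero mean and unit variance, the variance-covariance matrix of the standardized data coincides with the correlation matrix of the raw data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated 30 x 3 data table with correlated columns.
X = rng.standard_normal((30, 3)) @ np.array([[1.0, 0.5, 0.0],
                                             [0.0, 1.0, 0.3],
                                             [0.0, 0.0, 1.0]])

# Standardize (autoscale): mean-center, then divide by the column std.
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

C_std = np.cov(Xs, rowvar=False)     # variance-covariance of standardized data
R = np.corrcoef(X, rowvar=False)     # correlation matrix of the raw data
# C_std equals R: eigenvector analysis of the two is identical.
```

Hence choosing between the covariance and correlation matrix as the dispersion matrix amounts to choosing whether or not to standardize before the eigenvector analysis.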

As the analytical data are all in the same units and cover a similar range of magnitude, standardization is not required either and the variance-covariance matrix will be used as the dispersion matrix. [Pg.84]

Table 8 The variance-covariance matrix for the MS data and the eigenvalues and eigenvectors extracted from this... [Pg.84]

