Big Chemical Encyclopedia


The Variance-Covariance Matrix

The variance-covariance matrix is the p × p matrix that measures the degree of scatter in the multivariate distribution. [Pg.53]

The variance and covariance terms σii and σij in the variance-covariance matrix are given by Equation 3.25 and Equation 3.26, respectively. [Pg.53]

Note that the matrix is symmetrical about the diagonal: variances appear on the diagonal and covariances appear on the off-diagonal. If we were to omit the covariance terms from the variance-covariance matrix, any resulting statistical analysis that employed it would be equivalent to a univariate analysis in which we consider each variable one at a time. At the beginning of the chapter we noted that considering all variables simultaneously yields more information, and here we see that it is precisely the covariance terms of the variance-covariance matrix that encode this extra information. [Pg.53]

Having described squared distances and the variance-covariance matrix, we are now in a position to introduce the multivariate normal distribution, which is represented in Equation 3.27, [Pg.53]
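The multivariate normal density of Equation 3.27 is not reproduced in this excerpt; the sketch below evaluates the standard form of that density, which combines the squared (Mahalanobis-type) distance and the variance-covariance matrix described above. The mean vector and covariance matrix used here are illustrative, not taken from the text.

```python
import numpy as np

# Standard multivariate normal density (the usual form of a relation like
# Equation 3.27, reconstructed from the textbook definition):
# f(x) = (2*pi)^(-p/2) * |Sigma|^(-1/2) * exp(-0.5 * (x-mu)' Sigma^-1 (x-mu))
def mvn_pdf(x, mu, sigma):
    p = len(mu)
    diff = x - mu
    inv = np.linalg.inv(sigma)                    # inverse covariance matrix
    det = np.linalg.det(sigma)                    # generalized variance
    norm = (2 * np.pi) ** (-p / 2) * det ** (-0.5)
    return norm * np.exp(-0.5 * diff @ inv @ diff)

# Illustrative two-variable case with some correlation
mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
density_at_mean = mvn_pdf(mu, mu, sigma)          # maximum of the density
```

The density is largest at the mean and falls off with the squared distance weighted by the inverse covariance matrix, which is why the covariance terms shape the orientation of the confidence ellipsoids.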

The variance-covariance matrix can be normalized to give the matrix of correlation coefficients between variables. Recall that the correlation coefficient is the cosine of the angle, θ, between two vectors. Because the correlation of any variable with itself is always perfect (ρii = 1), the diagonal elements of the correlation matrix, R, are always 1.00. [Pg.54]
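The normalization described above can be sketched numerically: each covariance is divided by the product of the two standard deviations, rij = sij / (si·sj). The matrix S below is illustrative, not from the text.

```python
import numpy as np

# Normalize a variance-covariance matrix S to a correlation matrix R:
# r_ij = s_ij / sqrt(s_ii * s_jj)
S = np.array([[4.0, 2.0],
              [2.0, 9.0]])       # illustrative covariance matrix
d = np.sqrt(np.diag(S))          # standard deviations of the variables
R = S / np.outer(d, d)           # element-wise normalization
```

As the text notes, the diagonal of R is exactly 1.00, and R inherits the symmetry of S.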

In Section 6.2, the standard uncertainty of the parameter estimate was obtained by taking the square root of the product of the purely experimental uncertainty variance estimate and the (XᵀX)⁻¹ matrix (see Equation 6.3). A single number was obtained. [Pg.119]

For the general, multiparameter case, the product of the purely experimental uncertainty variance estimate and the (XᵀX)⁻¹ matrix gives the estimated variance-covariance matrix V. [Pg.119]

Each of the upper-left to lower-right diagonal elements of V is an estimated variance of a parameter estimate, s²ᵢ; these elements correspond to the parameters as they appear in the model from left to right. Each of the off-diagonal elements is an estimated covariance between two of the parameter estimates [Dunn and Clark (1987)]. [Pg.119]

For a single-parameter model such as y₁ᵢ = β₀ + r₁ᵢ, the estimated variance-covariance matrix contains no covariance elements; the square root of the single variance element corresponds to the standard uncertainty of the single parameter estimate. [Pg.119]

In this chapter, we will examine the variance-covariance matrix to see how the location of experiments in factor space (i.e., the experimental design) affects the individual variances and covariances of the parameter estimates. Throughout this section we will be dealing with the specific two-parameter first-order model y₁ᵢ = β₀ + β₁x₁ᵢ + r₁ᵢ only; the resulting principles are entirely general, however, and can be extended to other models. [Pg.119]
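For the two-parameter straight-line model, the estimated variance-covariance matrix can be sketched with the standard relationship V = s²(XᵀX)⁻¹. The x levels and the variance estimate s² below are illustrative, not values from the text.

```python
import numpy as np

# Estimated variance-covariance matrix of the parameter estimates for the
# two-parameter model y = b0 + b1*x, using V = s2 * (X'X)^-1 (a sketch;
# the design points and s2 are illustrative).
x = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones_like(x), x])    # design matrix: 1s column, x column
s2 = 0.01                                    # purely experimental variance estimate
V = s2 * np.linalg.inv(X.T @ X)

var_b0, var_b1 = V[0, 0], V[1, 1]            # diagonal: variances of b0 and b1
cov_b0_b1 = V[0, 1]                          # off-diagonal: covariance of b0 and b1
```

The diagonal elements correspond to the parameters in model order (intercept, then slope), exactly as the passage describes.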


The off-diagonal elements of the variance-covariance matrix represent the covariances between different parameters. From the covariances and variances, correlation coefficients between parameters can be calculated. When the parameters are completely independent, the correlation coefficient is zero. As the parameters become more correlated, the correlation coefficient approaches a value of +1 or -1. [Pg.102]

The important underlying components of protein motion during a simulation can be extracted by a Principal Component Analysis (PCA). It amounts to a diagonalization of the variance-covariance matrix R of the mass-weighted internal displacements during a molecular dynamics simulation. [Pg.73]
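The diagonalization step can be sketched as follows. The random data here merely stand in for (mass-weighted) displacement coordinates; the eigenvectors give the principal directions and the eigenvalues the variance along each.

```python
import numpy as np

# PCA as diagonalization of the variance-covariance matrix (a sketch;
# the data are illustrative stand-ins for displacement coordinates).
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 3))
data[:, 1] += 2.0 * data[:, 0]               # introduce correlation between columns

centered = data - data.mean(axis=0)          # column-centering
cov = centered.T @ centered / (len(data) - 1)

eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigendecomposition, ascending
order = np.argsort(eigvals)[::-1]
principal_variances = eigvals[order]         # variance along each principal axis
principal_axes = eigvecs[:, order]           # orthonormal principal directions
```

The sum of the eigenvalues equals the trace of the covariance matrix, so the total variance is preserved; the first few components capture the dominant motions.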

Define the variance-covariance matrix for this vector to be Q = B (BJB B... [Pg.2572]

A special form of cross-product matrix is the variance-covariance matrix (or covariance matrix for short) Cp, which is based on the column-centered matrix Yp derived from an original matrix X ... [Pg.49]
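The construction just described can be sketched directly: column-center X to obtain Yp, then form the cross-product. The 1/(n−1) divisor used here is the usual sample convention (the excerpt does not show the source's exact normalization), and the data matrix is illustrative.

```python
import numpy as np

# Covariance matrix Cp from the column-centered matrix Yp derived from X
# (1/(n-1) is the usual sample convention; X is illustrative).
X = np.array([[1.0,  2.0],
              [3.0,  5.0],
              [5.0, 11.0]])
Yp = X - X.mean(axis=0)            # column-centering: subtract each column mean
n = X.shape[0]
Cp = Yp.T @ Yp / (n - 1)           # cross-product of the centered matrix
```

This reproduces what `np.cov` computes with its default settings, which is a convenient check.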

As stated earlier, LDA requires that the variance-covariance matrices of the classes being considered can be pooled. This is only so when these matrices can be considered equal, in the same way that variances can only be pooled when they are considered equal (see Section 2.1.4.4). Equal variance-covariance means that the 95% confidence ellipsoids have an equal volume (variance) and orientation in space (covariance). Figure 33.10 illustrates situations of unequal variance or covariance. Clearly, Fig. 33.1 displays unequal variance-covariance, so that one must expect that QDA gives better classification, as is indeed the case (Fig. 33.2). When the number of objects is smaller than the number of variables m, the variance-covariance matrix is singular. Clearly, this problem is more severe for QDA (which requires m < n) than for LDA, where the variance-covariance matrix is pooled and therefore the number of objects N is the sum of all objects... [Pg.222]

One expects that during the measurement-prediction cycle the confidence in the parameters improves. Thus, the variance-covariance matrix also needs to be updated in each measurement-prediction cycle. This is done as follows [1]: [Pg.578]

The expression xᵀ(j)P(j−1)x(j) in eq. (41.4) represents the variance of the predictions, ŷ(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters P(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j) and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix depends only on the design of the experiments given by x and on the variance of the experimental error given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]
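One cycle of this recursion can be sketched as follows. This is the standard recursive least squares update, which the text's eqs. (41.3) and (41.4) appear to follow; variable names mirror the text (P, x, r, gain, innovation), and the numerical values are illustrative.

```python
import numpy as np

# One recursive least squares cycle: because x'(j) P(j-1) x(j) + r(j) is a
# scalar, the gain and covariance updates need no matrix inversion.
def rls_step(b, P, x, y, r):
    denom = x @ P @ x + r            # scalar: prediction variance + error variance
    k = P @ x / denom                # gain vector
    innovation = y - x @ b           # measurement minus prediction
    b_new = b + k * innovation       # updated parameter estimates
    P_new = P - np.outer(k, x @ P)   # updated variance-covariance matrix
    return b_new, P_new, k, innovation

b = np.zeros(2)                      # initial parameter estimates
P = np.eye(2) * 100.0                # large initial uncertainty
b, P, k, innov = rls_step(b, P, np.array([1.0, 2.0]), 5.0, 0.01)
```

Each measurement shrinks the diagonal of P, reflecting the growing confidence in the parameters that the next excerpt describes.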

The sequence of the innovation, gain vector, variance-covariance matrix and estimated parameters of the calibration lines is shown in Figs. 41.1-41.4. We can clearly see that after four measurements the innovation has stabilized at the measurement error, which is 0.005 absorbance units. The gain vector decreases monotonically and the estimates of the two parameters stabilize after four measurements. It should be remarked that the design of the measurements fully defines the variance-covariance matrix and the gain vector in eqs. (41.3) and (41.4), as is the case in ordinary regression. Thus, once the design of the experiments is chosen... [Pg.580]

Fig. 41.3. Evolution of the diagonal elements of the variance-covariance matrix (P) during the estimation process (see Table 41.1).
Influence of the initial values of the diagonal elements of the variance-covariance matrix P(0) and of the variance of the experimental error on the gain vector and the innovation sequence (see Table 41.1 for the experimental values). [Pg.584]

The set of selected wavelengths (i.e. the experimental design) affects the variance-covariance matrix, and thus the precision of the results. For example, the set 22, 24 and 26 (Table 41.5) gives a less precise result than the set 22, 32 and 24 (Table 41.7). The best set of wavelengths can be derived in the same way as for multiple linear regression, i.e. the determinant of the dispersion matrix (HᵀH), which contains the absorptivities, should be maximized. [Pg.587]

Equations (41.15) and (41.19) for the extrapolation and update of system states form the so-called state-space model. The solution of the state-space model has been derived by Kalman and is known as the Kalman filter. Assumptions are that the measurement noise v(j) and the system noise w(j) are random and independent, normally distributed, white and uncorrelated. This leads to the general formulation of a Kalman filter given in Table 41.10. Equations (41.15) and (41.19) account for the time dependence of the system. Eq. (41.15) is the system equation, which tells us how the system behaves in time (here in j units). Equation (41.16) expresses how the uncertainty in the system state grows as a function of time (here in j units) if no observations were made. Q(j − 1) is the variance-covariance matrix of the system noise, which contains the variance of w. [Pg.595]
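A minimal predict-update cycle in this state-space form can be sketched as follows. The matrices F, Q, R and the measurement vector h are illustrative stand-ins (the excerpt does not give the entries of Table 41.10); Q plays the role of the system-noise variance-covariance matrix described above.

```python
import numpy as np

# Minimal Kalman filter cycle (a sketch; F, Q, h, R are illustrative).
def kalman_step(x, P, F, Q, h, z, R):
    # Extrapolation: the system equation propagates the state, and the
    # state uncertainty grows by Q if no observation were made.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new scalar observation z
    denom = h @ P_pred @ h + R       # scalar innovation variance
    k = P_pred @ h / denom           # Kalman gain
    x_new = x_pred + k * (z - h @ x_pred)
    P_new = P_pred - np.outer(k, h @ P_pred)
    return x_new, P_new

x = np.zeros(2)                      # initial state estimate
P = np.eye(2) * 10.0                 # initial state uncertainty
F = np.eye(2)                        # static system: states unchanged in time
Q = np.eye(2) * 0.001                # system-noise variance-covariance matrix
h = np.array([1.0, 0.5])             # measurement function
x, P = kalman_step(x, P, F, Q, h, z=2.0, R=0.01)
```

With F equal to the identity this reduces to the recursive least squares case discussed earlier, plus the Q term that lets the uncertainty grow between observations.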

Element k, l of the variance-covariance matrix for the polynomial parameters... [Pg.16]

Use the variance-covariance matrix below as a measure of the variability (and reliability) of the stream measurements ... [Pg.578]

Eq. (53) provides the variance-covariance matrix for the parameter ratios. Letting b₁, b₂, and b₃ denote the respective ratios of Eq. (59), then Eq. (37) may be applied to find... [Pg.127]

The effect on the variance-covariance matrix of two experiments located at different positions in factor space can be investigated by locating one experiment at x₁₁ = 1 and varying the location of the second experiment. The first row of the matrix of parameter coefficients for the model y₁ᵢ = β₀ + β₁x₁ᵢ + r₁ᵢ can be made to... [Pg.120]

Consideration of the effect of experimental design on the elements of the variance-covariance matrix leads naturally to the area of optimal design [Box, Hunter, and Hunter (1978), Evans (1979), and Wolters and Kateman (1990)]. Let us suppose that our purpose in carrying out two experiments is to obtain good estimates of the intercept and slope for the model y₁ᵢ = β₀ + β₁x₁ᵢ + r₁ᵢ. We might want to know what levels of the factor x₁ᵢ we should use to obtain the most precise estimates of β₀ and β₁. [Pg.126]
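The design question above can be sketched numerically: compute V = s²(XᵀX)⁻¹ for two two-experiment designs and compare the variance of the slope estimate. The x levels are illustrative, and s² is set to 1 so only the design effect is visible.

```python
import numpy as np

# Effect of experiment placement on the variance-covariance matrix of the
# parameter estimates for the straight-line model (x levels illustrative).
def vcov(x_levels, s2=1.0):
    X = np.column_stack([np.ones(len(x_levels)), np.asarray(x_levels, float)])
    return s2 * np.linalg.inv(X.T @ X)   # V = s2 * (X'X)^-1

V_close = vcov([1.0, 1.5])   # two experiments close together in factor space
V_far = vcov([1.0, 9.0])     # two experiments far apart

slope_var_close = V_close[1, 1]
slope_var_far = V_far[1, 1]
```

Spreading the experiments apart sharply reduces the variance of the slope estimate, which is the intuition behind maximizing the determinant of the dispersion matrix in optimal design.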

Calculate the variance-covariance matrix associated with the straight-line relationship y₁ᵢ = β₀ + β₁x₁ᵢ + r₁ᵢ for the following data (see Section 11.2 for a definition of D): ... [Pg.129]

Equation 7.1 is one of the most important relationships in the area of experimental design. As we have seen in this chapter, the precision of estimated parameter values is contained in the variance-covariance matrix V the smaller the elements of V, the more precise will be the parameter estimates. As we shall see in Chapter 11, the precision of estimating the response surface is also directly related to V the smaller the elements of V, the less fuzzy will be our view of the estimated surface. [Pg.130]


