Big Chemical Encyclopedia


Covariance matrix calculation

Essentially this is equivalent to using (∂f/∂kⱼ)·kⱼ instead of (∂f/∂kⱼ) for the sensitivity coefficients. By this transformation the sensitivity coefficients are normalized with respect to the parameters and hence the covariance matrix calculated using Equation 12.4 yields the standard deviation of each parameter as a percentage of its current value. [Pg.190]
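The normalization described above can be sketched numerically. The sensitivity matrix, parameter values, and error variance below are made-up illustration numbers, not data from the source; the point is only that scaling each sensitivity column by its parameter value makes the covariance dimensionless, so its diagonal square roots read directly as fractional (percentage) standard deviations.

```python
import numpy as np

# Hypothetical sensitivity matrix df/dk (rows: observations, cols: parameters)
# and hypothetical current parameter values.
J = np.array([[1.0, 0.5],
              [0.2, 2.0]])
k = np.array([10.0, 0.5])

# Column-wise scaling: J_norm[:, j] = (df/dk_j) * k_j
J_norm = J * k

sigma2 = 0.01  # assumed measurement error variance (illustrative)
cov_norm = sigma2 * np.linalg.inv(J_norm.T @ J_norm)

# Diagonal of the normalized covariance -> relative standard deviations
rel_std_percent = 100.0 * np.sqrt(np.diag(cov_norm))
```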

FIGURE 2.9 Basic statistics of multivariate data and covariance matrix. xᵀ, transposed mean vector; vᵀ, transposed variance vector; v_total, total variance (sum of variances v₁, ..., v_m). C is the sample covariance matrix calculated from mean-centered X. [Pg.55]
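The quantities named in the figure caption — mean vector, variance vector, total variance, and the sample covariance matrix computed from mean-centered X — can be sketched as follows. The data matrix is a small made-up example; the formulas themselves are the standard sample statistics.

```python
import numpy as np

# Small illustrative data matrix X (4 objects x 3 variables)
X = np.array([[1.0, 2.0, 0.5],
              [2.0, 1.0, 1.5],
              [3.0, 4.0, 2.5],
              [4.0, 3.0, 3.5]])

x_mean = X.mean(axis=0)                 # mean vector
X_c = X - x_mean                        # mean-centered data
C = X_c.T @ X_c / (X.shape[0] - 1)      # sample covariance matrix
v = np.diag(C)                          # variance vector (v_1, ..., v_m)
v_total = v.sum()                       # total variance
```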

An anisometry descriptor defined as a function of the eigenvalues, obtained by → Principal Component Analysis applied to the covariance matrix calculated from the → molecular matrix M ... [Pg.687]

Inverse covariance matrix calculated from the control group only. [Pg.324]

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]

The off-diagonal elements of the variance-covariance matrix represent the covariances between different parameters. From the covariances and variances, correlation coefficients between parameters can be calculated. When the parameters are completely independent, the correlation coefficient is zero. As the parameters become more correlated, the correlation coefficient approaches a value of +1 or -1. [Pg.102]
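The conversion from a variance-covariance matrix to correlation coefficients described above is rᵢⱼ = covᵢⱼ / √(covᵢᵢ·covⱼⱼ). A minimal sketch, using a made-up 2×2 parameter covariance matrix:

```python
import numpy as np

# Hypothetical parameter variance-covariance matrix (illustrative values)
cov = np.array([[ 4.0, -1.2],
                [-1.2,  1.0]])

# r_ij = cov_ij / sqrt(cov_ii * cov_jj); bounded between -1 and +1,
# with r_ii = 1 on the diagonal.
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)   # off-diagonal: -1.2 / (2 * 1) = -0.6
```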

Principal component analysis (PCA) takes the m-coordinate vectors q associated with the conformation sample and calculates the square m × m matrix reflecting the relationships between the coordinates. This matrix, also known as the covariance matrix C, is defined as... [Pg.87]

The power algorithm [21] is the simplest iterative method for the calculation of latent vectors and latent values from a square symmetric matrix. In contrast to NIPALS, which produces an orthogonal decomposition of a rectangular data table X, the power algorithm decomposes a square symmetric matrix of cross-products XᵀX, which we denote by C. Note that C is called the column variance-covariance matrix when the data in X are column-centered. [Pg.138]
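The power algorithm described above can be sketched as repeated multiplication and normalization: the iterate converges to the latent vector of the largest latent value. The matrix and iteration limits below are illustrative choices, not values from the source.

```python
import numpy as np

def power_method(C, n_iter=200, tol=1e-12):
    """Largest latent value and latent vector of a square symmetric
    matrix C by repeated multiplication and normalization."""
    v = np.array([1.0] + [0.0] * (C.shape[0] - 1))  # arbitrary start vector
    lam = 0.0
    for _ in range(n_iter):
        w = C @ v
        lam_new = np.linalg.norm(w)  # current latent-value estimate
        v = w / lam_new              # normalize to unit length
        if abs(lam_new - lam) < tol: # stop when the estimate stabilizes
            break
        lam = lam_new
    return lam, v

# Symmetric cross-product-style matrix with eigenvalues 3 and 1
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_method(C)   # lam -> 3, v -> [1, 1] / sqrt(2)
```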

Calculated gain vector, variance covariance matrix and estimated concentrations (see Table 41.5 for the starting conditions)... [Pg.588]

The elements of the covariance matrix of the parameter estimates are calculated when the minimization algorithm has converged with a zero value for Marquardt's directional parameter. The covariance matrix of the parameters COV(k) is determined using Equation 11.1, where the degrees of freedom used... [Pg.257]

As already discussed in Chapter 11, matrix A calculated during each iteration of the Gauss-Newton method can be used to determine the covariance matrix of the estimated parameters, which in turn provides a measure of the accuracy of the parameter estimates (Tan and Kalogerakis, 1992). [Pg.376]
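A common sketch of the idea in the passage above, assuming the usual least-squares approximation cov(k̂) ≈ s²·(JᵀJ)⁻¹ with s² = SSE/(n − p): the Gauss-Newton matrix A = JᵀJ built from the sensitivity (Jacobian) matrix at the converged estimates gives the parameter covariance, hence standard errors. The model, data, and parameter values are hypothetical.

```python
import numpy as np

# Hypothetical model y = k1 * exp(-k2 * t) with made-up data
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.05, 1.23, 0.71, 0.45, 0.28])
k1, k2 = 2.0, 0.5                       # assumed converged estimates

y_hat = k1 * np.exp(-k2 * t)
J = np.column_stack([np.exp(-k2 * t),               # dy/dk1
                     -k1 * t * np.exp(-k2 * t)])    # dy/dk2

resid = y - y_hat
dof = len(y) - 2                        # n - p degrees of freedom
s2 = resid @ resid / dof                # residual variance estimate
cov_k = s2 * np.linalg.inv(J.T @ J)     # A = J^T J from the Gauss-Newton step
std_k = np.sqrt(np.diag(cov_k))         # standard errors of k1, k2
```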

It should be emphasized that for Markovian copolymers a knowledge of the values of structural parameters of such a kind will suffice to find the probability of any sequence Uk, i.e. for an exhaustive description of the microstructure of the chains of these copolymers with a given average composition. As for the composition distribution of Markovian copolymers, this obeys for any fraction of Z-mers the Gaussian formula whose covariance matrix elements are Dap/l where Dap depend solely on the values of structural parameters [2]. The calculation of their dependence on time, and the stoichiometric and kinetic parameters of the reaction system permits a complete statistical description of the chemical structure of Markovian copolymers to be accomplished. The above reasoning reveals to which extent the mathematical modeling of the processes of the copolymer synthesis is easier to perform provided the alternation of units in macromolecules is known to obey Markovian statistics. [Pg.167]

Statistical properties of a data set can be preserved only if the statistical distribution of the data is assumed. PCA assumes that the multivariate data are described by a Gaussian distribution, and is then calculated considering only the second moment of the probability distribution of the data (the covariance matrix). Indeed, for normally distributed data the covariance matrix (XᵀX) completely describes the data once they are zero-centered. From a geometric point of view, any covariance matrix, being a symmetric matrix, is associated with a hyper-ellipsoid in N-dimensional space. PCA corresponds to a coordinate rotation from the natural sensor space axis to a novel axis basis formed by the principal... [Pg.154]
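The coordinate rotation described above can be sketched via an eigendecomposition of the covariance matrix of zero-centered data: in the rotated basis the covariance becomes diagonal, with the eigenvalues as the variances along the new axes. The random data below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X = X - X.mean(axis=0)                  # zero-center, as the text requires

C = X.T @ X / (X.shape[0] - 1)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)    # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]       # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Coordinates in the rotated (principal-axis) basis
scores = X @ eigvecs
```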

Furthermore, under symplectic transformations, it is relatively easy to show, using the Hessian formula for calculating the Fisher information matrix, that the measurement covariance matrix transforms as... [Pg.280]

The LMIPDA-IMM calculations are performed for all combinations of revisit times in A and waveforms in the library. Evidently then the number of combinations grows exponentially in the number of steps ahead, and soon becomes impractical for implementation. Having obtained the error covariance matrix for all possible combinations of sensor modes, the optimal sensor mode (waveform) is then chosen for each target to be the one which gives the longest re-visit time, while constraining the absolute value of the determinant of the error covariance matrix to be smaller than the prescribed upper limit K. In other words, our objective is... [Pg.289]

Note: The interpretation of this matrix as the covariance matrix of the errors in x has important applications. The value of any estimate is greatly enhanced if its accuracy is known. The matrix is also very useful in initial design and development, as it can be calculated before the estimator is implemented, and it can be used to study measurement placement and what type and accuracy of information is actually needed. ... [Pg.121]

The previous approach for solving the reconciliation problem allows the calculation, in a systematic recursive way, of the residual covariance matrix after a measurement is added or deleted from the original adjustment. A combined procedure can be devised by using the sequential treatment of measurements together with the sequential processing of the constraints. [Pg.137]

Enew can be calculated as a function of B and the covariance matrix for the old case, Eold, by the following expression ... [Pg.151]

The solution of the minimization problem again simplifies to updating steps of a static Kalman filter. For the linear case, matrices A and C do not depend on x and the covariance matrix of error can be calculated in advance, without having actual measurements. When the problem is nonlinear, these matrices depend on the last available estimate of the state vector, and we have the extended Kalman filter. [Pg.161]
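The point made above — that in the linear case the error-covariance recursion depends only on the model matrices, not on the measurements — can be sketched with the standard Kalman covariance updates. All matrices below are made-up illustrative values; the recursion P⁻ = APAᵀ + Q, K = P⁻Cᵀ(CP⁻Cᵀ + R)⁻¹, P = (I − KC)P⁻ can be run offline, before any data arrive.

```python
import numpy as np

A = np.eye(2)                  # state transition (illustrative: static state)
C = np.array([[1.0, 0.0]])     # measurement matrix
Q = 0.01 * np.eye(2)           # process noise covariance (assumed)
R = np.array([[0.1]])          # measurement noise covariance (assumed)
P = np.eye(2)                  # initial error covariance

# Because A, C, Q, R do not depend on x, this loop uses no measurements.
for _ in range(50):
    P_pred = A @ P @ A.T + Q                                   # time update
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)     # Kalman gain
    P = (np.eye(2) - K @ C) @ P_pred                           # measurement update
```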

Only a few publications in the literature have dealt with this problem. Almasy and Mah (1984) presented a method for estimating the covariance matrix of measured errors by using the constraint residuals calculated from available process data. Darouach et al. (1989) and Keller et al. (1992) have extended this approach to deal with correlated measurements. Chen et al. (1997) extended the procedure further, developing a robust strategy for covariance estimation, which is insensitive to the presence of outliers in the data set. [Pg.203]

The indirect method uses Eq. (10.9) to estimate F. This procedure requires the value of the covariance matrix, which can be calculated from the residuals using the balance equations and the measurements. [Pg.204]

The performances of the indirect conventional methods described previously are very sensitive to outliers, so they are not robust. The main reason for this is that they use a direct method to calculate the covariance matrix of the residuals. If outliers are present in the sampling data, the assumption about the error distribution will be... [Pg.208]

Four samples from a Polynesian island gave the lead isotope compositions given in Table 4.3. Calculate the mean and standard deviation vectors, the covariance matrix and the correlation coefficient between the two isotope ratios. [Pg.205]
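The calculation steps asked for in the exercise above can be sketched as follows. Table 4.3 is not reproduced here, so the four two-ratio samples below are made-up stand-ins purely to illustrate the computation of the mean vector, standard deviation vector, covariance matrix, and correlation coefficient.

```python
import numpy as np

# Hypothetical stand-in for Table 4.3: four samples, two isotope ratios each
ratios = np.array([[2.064, 0.8318],
                   [2.068, 0.8320],
                   [2.070, 0.8325],
                   [2.066, 0.8321]])

mean_vec = ratios.mean(axis=0)              # mean vector
std_vec = ratios.std(axis=0, ddof=1)        # standard deviation vector
cov = np.cov(ratios, rowvar=False)          # 2x2 covariance matrix
r = cov[0, 1] / (std_vec[0] * std_vec[1])   # correlation coefficient
```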


See other pages where Covariance matrix calculation is mentioned: [Pg.264] [Pg.245] [Pg.338] [Pg.301] [Pg.514] [Pg.165] [Pg.479] [Pg.579] [Pg.581] [Pg.588] [Pg.178] [Pg.14] [Pg.305] [Pg.15] [Pg.416] [Pg.70] [Pg.278] [Pg.112] [Pg.155] [Pg.283] [Pg.135] [Pg.202] [Pg.210] [Pg.234]
See also in source #XX -- [Pg.89, Pg.90]



