Big Chemical Encyclopedia


Sample covariance

In this case, V is the classical sample covariance matrix of z. [Pg.209]

The definition of the sample covariance is also important. Given a set of N pairs of... [Pg.276]

Variances and covariances can be lumped together into the n x n symmetric sample covariance or dispersion matrix S, with current element si1i2 such that... [Pg.204]

Likewise, the sample covariance matrix between two vectors x and y would be... [Pg.204]

The sample covariance matrices are likewise related through the following relationship... [Pg.229]

Both S, by definition, and F through equation (4.4.12), are centered, i.e., their expectation is a null matrix. Therefore the sample covariance matrix between the reduced data and the components is... [Pg.241]

This calculation is also equivalent to r = sample covariance/(sx sy), as was seen earlier under ANCOVA. [Pg.936]

The basis for calculating the correlation between two variables xj and xk is the covariance matrix (dimension m x m), which is a quadratic, symmetric matrix. The cases j = k (main diagonal) are covariances between one and the same variable, which are in fact the variances sjj of the variables xj for j = 1, ..., m; this matrix is therefore also called the variance-covariance matrix (Figure 2.7). This population matrix refers to a data population of infinite size, and should not be confused with estimations of it as described in Section 2.3.2, for instance the sample covariance matrix C. [Pg.53]

The classical measure of covariance between two variables Xj and xk is the sample covariance, Cjk, defined by... [Pg.55]

FIGURE 2.9 Basic statistics of multivariate data and covariance matrix. xT, transposed mean vector; vT, transposed variance vector; vtotal, total variance (sum of variances v1, ..., vm). C is the sample covariance matrix calculated from mean-centered X. [Pg.55]

Based on the definition of the covariance cjk in Equation 2.9, the sample covariance matrix C can be calculated for mean-centered X by... [Pg.56]
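As an illustration of the calculation just described, here is a minimal NumPy sketch (the data and variable names X, Xc, C are ours, chosen to mirror the text's notation):

```python
import numpy as np

# Small illustrative data matrix X: n = 5 observations, m = 3 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
n = X.shape[0]

# Mean-center the columns, then apply C = (1/(n-1)) * Xc' Xc.
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (n - 1)

# NumPy's built-in estimator agrees (rowvar=False: columns are variables).
check = np.cov(X, rowvar=False)
```

Note that the divisor n - 1 (not n) makes C the unbiased sample estimator, which is what `np.cov` uses by default.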

The classical correlation coefficient is the Pearson correlation coefficient, rjk, which is, according to Equation 2.7, the sample covariance standardized by the standard deviations sj and sk of the variables. [Pg.56]
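This standardization can be sketched directly (the small data vectors are ours, for illustration only):

```python
import numpy as np

# Pearson r as the sample covariance standardized by the two
# sample standard deviations.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.9, 3.5, 3.9, 5.1])

c_xy = np.cov(x, y, ddof=1)[0, 1]           # sample covariance
r = c_xy / (x.std(ddof=1) * y.std(ddof=1))  # standardize by s_x and s_y
```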

A more robust correlation measure can be derived from a robust covariance estimator such as the minimum covariance determinant (MCD) estimator. The MCD estimator searches for a subset of h observations having the smallest determinant of their classical sample covariance matrix. The robust location estimator, a robust alternative to the mean vector, is then defined as the arithmetic mean of these h observations, and the robust covariance estimator is given by the sample covariance matrix of the h observations, multiplied by a factor. The choice of h determines the robustness of the estimators: taking about half of the observations for h results in the most robust version (because the other half of the observations could be outliers). Increasing h leads to less robustness but higher efficiency (precision of the estimators). The value 0.75n for h is a good compromise between robustness and efficiency. [Pg.57]
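The MCD idea can be sketched by brute force on a toy data set: among all subsets of size h, take the one whose classical covariance matrix has the smallest determinant. Practical implementations (e.g. FAST-MCD) use clever subsampling instead of exhaustive search, and the consistency factor mentioned in the text is omitted here; the data and names are ours.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 2))
X[0] = [8.0, -8.0]                 # plant one gross outlier
n = X.shape[0]
h = int(0.75 * n)                  # the 0.75*n compromise from the text

# Exhaustive search over all h-subsets (feasible only for toy-sized n).
best_det, best_subset = np.inf, None
for idx in combinations(range(n), h):
    S = np.cov(X[list(idx)], rowvar=False)
    d = np.linalg.det(S)
    if d < best_det:
        best_det, best_subset = d, list(idx)

# Robust location = mean of the h selected observations;
# robust scatter = their sample covariance (consistency factor omitted).
t_mcd = X[best_subset].mean(axis=0)
S_mcd = np.cov(X[best_subset], rowvar=False)
```

The planted outlier inflates the determinant of any subset that contains it, so the minimum-determinant subset excludes it and the resulting location and scatter estimates are unaffected by it.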

It is a distance measure that accounts for the covariance structure, here estimated by the sample covariance matrix C. Clearly, one could also take a robust covariance estimator. The Mahalanobis distance can also be computed from each observation to the data center, and the formula changes to... [Pg.60]
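A minimal sketch of the Mahalanobis distance of each observation to the data center, using the classical estimates (the data and names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))

# Classical center and covariance estimates.
center = X.mean(axis=0)
C = np.cov(X, rowvar=False)
C_inv = np.linalg.inv(C)

# d_i = sqrt((x_i - center)' C^{-1} (x_i - center)) for every row at once.
diff = X - center
d = np.sqrt(np.einsum('ij,jk,ik->i', diff, C_inv, diff))
```

As the surrounding text notes, C could be replaced here by any robust covariance estimator (and the mean by a robust center) without changing the formula.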

For identifying outliers, it is crucial how center and covariance are estimated from the data. Since the classical estimators, the arithmetic mean vector x and the sample covariance matrix C, are very sensitive to outliers, they are not useful for the purpose of outlier detection via Equation 2.19 for the Mahalanobis distances. Instead, robust estimators have to be taken for the Mahalanobis distance, like the center and... [Pg.61]

Note that since SVD is based on eigenvector decompositions of cross-product matrices, this algorithm gives results equivalent to the Jacobi rotation when the sample covariance matrix C is used. This means that SVD will not allow a robust PCA solution; for Jacobi rotation, however, a robust estimation of the covariance matrix can be used. [Pg.87]
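The equivalence noted above can be verified numerically: PCA loadings from the SVD of the mean-centered data match the eigenvectors of the sample covariance matrix up to sign, with singular values related to eigenvalues by s^2/(n-1) = lambda (data and names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))
Xc = X - X.mean(axis=0)
n = X.shape[0]

# Route 1: SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Route 2: eigendecomposition of the sample covariance matrix C.
C = Xc.T @ Xc / (n - 1)
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]        # eigh returns ascending order
evals, evecs = evals[order], evecs[:, order]
```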

The first PLS component is found as follows: since we deal with the sample covariance, the maximization problem (Equation 4.67) can be written as maximization of... [Pg.170]

The solutions for the loading vectors pj and qj are found by solving two eigenvector/eigenvalue problems. Let SX = cov(X), SY = cov(Y), and SXY = cov(X, Y) be the sample covariance matrices of X and Y, and the sample covariance matrix between X and Y (an mX x mY matrix containing the covariances between all x- and all y-variables), respectively. Other covariance measures could also be considered; see Section 2.3.2. Then the solutions are (Johnson and Wichern 2002)... [Pg.178]
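One way to see the eigenproblem structure: maximizing the sample covariance cov(Xw, Yq) over unit-length vectors leads to the dominant singular pair of the cross-covariance matrix SXY, which is equivalent to w being the dominant eigenvector of SXY SYX. A sketch of this equivalence (data and names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 5)); X -= X.mean(axis=0)
Y = rng.normal(size=(40, 2)); Y -= Y.mean(axis=0)
n = X.shape[0]

S_XY = X.T @ Y / (n - 1)            # m_X x m_Y cross-covariance matrix

# Dominant singular pair of S_XY.
U, s, Vt = np.linalg.svd(S_XY)
w, q = U[:, 0], Vt[0, :]

# Equivalent eigenproblem formulation: w solves S_XY S_YX w = lambda w.
evals, evecs = np.linalg.eigh(S_XY @ S_XY.T)
w_eig = evecs[:, np.argmax(evals)]
```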

The sample means are (1/100) times the elements in the first column of X'X. The sample covariance matrix for the three regressors is obtained as (1/99)[(X'X)jk - 100 xj xk]. [Pg.10]
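This identity can be checked numerically, assuming (as the text implies) that X carries a leading column of ones for the constant term, so that the first column of X'X holds the column sums (the simulated data are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
Z = rng.normal(size=(n, 3))              # the three regressors
X = np.column_stack([np.ones(n), Z])     # constant column first

XtX = X.T @ X

# Means: (1/n) times the first column of X'X (regressor rows only).
means = XtX[1:, 0] / n

# Covariance: (1/(n-1)) [ (X'X)_jk - n * mean_j * mean_k ].
S = (XtX[1:, 1:] - n * np.outer(means, means)) / (n - 1)
```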

Besides the measure of the dispersion of the one-dimensional projection, i.e., the projective index, another distinction of PP PCA from classical PCA is the procedure of computation. Since the projective index of classical PCA is a quadratic form of X as stated above, the extremal problem of Eqn. 1 can be turned into the problem of finding the eigenvalues and eigenvectors of the sample covariance matrix, for which many algorithms, such as SVD and QR, are available. Because of the adoption of a robust projective index in PP PCA, nonlinear optimization approaches must be used instead. In order to guarantee the global optimum, simulated annealing (SA) is adopted, which is the main topic of this book. [Pg.63]
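The two computational routes can be contrasted in a small sketch: with variance as the projection index, the best direction is just the dominant eigenvector of the sample covariance matrix; with a robust index (here the MAD of the projected data, our choice for illustration, not necessarily the index used in the source) a stochastic search such as simulated annealing is used instead. The annealing schedule below is a simple ad hoc one.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 3)) @ np.diag([3.0, 1.0, 0.5])
Xc = X - X.mean(axis=0)

# Route 1: quadratic (variance) index -> eigenproblem of C.
C = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(C)
a_classical = evecs[:, np.argmax(evals)]

# Route 2: robust index (MAD of projections) -> simulated annealing.
def mad_index(a):
    z = Xc @ a
    return np.median(np.abs(z - np.median(z)))

a = rng.normal(size=3); a /= np.linalg.norm(a)
best, f_best = a, mad_index(a)
T = 1.0
for _ in range(2000):
    cand = a + T * rng.normal(size=3)    # perturb current direction
    cand /= np.linalg.norm(cand)         # stay on the unit sphere
    delta = mad_index(cand) - mad_index(a)
    # Accept improvements always, deteriorations with probability exp(delta/T).
    if delta > 0 or rng.random() < np.exp(delta / T):
        a = cand
    if mad_index(a) > f_best:
        best, f_best = a, mad_index(a)
    T *= 0.995                           # geometric cooling schedule
```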

Partial least squares (PLS) regression develops a biased regression model between X and Y. In the context of chemical process operations, X usually denotes the process variables and Y the quality variables. PLS selects latent variables so that the variation in X that is most predictive of the product quality data Y is extracted. PLS works on the sample covariance matrix (X'Y)(Y'X) [86, 87, 111, 172, 188, 334, 338]. Measurements of m process variables taken at n different times are arranged into an (n x m) process data matrix X. The q quality variables are given by the corresponding... [Pg.79]
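Since (X'Y)(Y'X) is a square symmetric matrix, its dominant eigenvector (the first PLS weight vector for X) can be found with a few steps of power iteration; a sketch under these assumptions, with data and names of our choosing:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(50, 4)); X -= X.mean(axis=0)
Y = rng.normal(size=(50, 2)); Y -= Y.mean(axis=0)

# The m x m symmetric, positive semidefinite matrix PLS works on.
M = (X.T @ Y) @ (Y.T @ X)

# Power iteration: repeated multiplication converges to the
# dominant eigenvector (for a generic starting vector).
w = rng.normal(size=4)
for _ in range(2000):
    w = M @ w
    w /= np.linalg.norm(w)
```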

Within some individual samples, including both early and late calcites, Mg and Fe display a positive covariation that may be related to zoning (Fig. 6). If an analogy can be made to the between-sample covariation, it is possible that this zoning relates to a temporal trend of relatively high contents of both Mg and Fe in early precipitates and lower concentrations in later ones; however, there is no direct petrographic evidence to support the existence of this trend. Similar positive covariations between Mg and Fe have been reported in other studies of carbonate cementation in sandstones (e.g. Prosser et al., 1993; Milliken et al., this volume). [Pg.96]





