
Covariance estimated

Only a few publications in the literature have dealt with this problem. Almasy and Mah (1984) presented a method for estimating the covariance matrix of measured errors by using the constraint residuals calculated from available process data. Darouach et al. (1989) and Keller et al. (1992) have extended this approach to deal with correlated measurements. Chen et al. (1997) extended the procedure further, developing a robust strategy for covariance estimation, which is insensitive to the presence of outliers in the data set. [Pg.203]

The covariance matrix of measurement errors is a very useful statistical property. Indirect methods can deal with unsteady sampling data, but unfortunately they are very sensitive to outliers: the presence of even one or two outliers can produce misleading results. This drawback can be eliminated by using robust approaches via M-estimators. The performance of the robust covariance estimator is better than that of the indirect methods when outliers are present in the data set. [Pg.214]

In real practice, the location m and the variance have to be estimated from real data. An iterative algorithm, similar to the one used in Chapter 10 for the robust covariance estimation, is used to calculate the trust function. The main advantage of using this algorithm is that convergence is guaranteed. [Pg.235]

A more robust correlation measure can be derived from a robust covariance estimator such as the minimum covariance determinant (MCD) estimator. The MCD estimator searches for a subset of h observations having the smallest determinant of their classical sample covariance matrix. The robust location estimator—a robust alternative to the mean vector—is then defined as the arithmetic mean of these h observations, and the robust covariance estimator is given by the sample covariance matrix of the h observations, multiplied by a consistency factor. The choice of h determines the robustness of the estimators: taking about half of the observations for h results in the most robust version (because the other half of the observations could be outliers). Increasing h leads to less robustness but higher efficiency (precision of the estimators). The value 0.75n for h is a good compromise between robustness and efficiency. [Pg.57]
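For illustration, the reweighted MCD estimator is available in the R package robustbase. The sketch below assumes a numeric data matrix X; the choice alpha = 0.75 corresponds to the h = 0.75n compromise mentioned above.

    library(robustbase)
    # MCD location and scatter; alpha is the fraction h/n of observations
    # used for the smallest-determinant subset (0.5 = most robust,
    # 0.75 = the robustness/efficiency compromise discussed above)
    mcd <- covMcd(X, alpha = 0.75)
    mcd$center          # robust location estimate
    mcd$cov             # robust covariance estimate
    cov2cor(mcd$cov)    # robust correlation matrix derived from it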

It is a distance measure that accounts for the covariance structure, here estimated by the sample covariance matrix C. Clearly, one could also take a robust covariance estimator. The Mahalanobis distance can also be computed from each observation to the data center, and the formula changes to... [Pg.60]
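The truncated formula is presumably the standard expression: with sample mean x̄ and sample covariance matrix C, the Mahalanobis distance of an observation x_i from the data center is

    d(x_i) = \sqrt{ (x_i - \bar{x})^{\mathsf{T}} \, C^{-1} \, (x_i - \bar{x}) }

Replacing x̄ and C by robust estimates (e.g., the MCD location and scatter) yields robust distances that are better suited for outlier detection.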

The Mahalanobis distance used for multivariate outlier detection relies on the estimation of a covariance matrix (see Section 2.3.2), in this case preferably a robust covariance matrix. However, robust covariance estimators like the MCD estimator need more objects than variables, and thus for many applications with m > n this approach is not possible. For this situation, other multivariate outlier detection techniques can be used, such as a method based on robustified principal components (Filzmoser et al. 2008). The R code to apply this method on a data set X is as follows ...
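The listing is cut off in this excerpt. A plausible completion, assuming the pcout() function from the R package mvoutlier, which implements the Filzmoser et al. (2008) approach (X is the data matrix from the text):

    library(mvoutlier)
    # PCA-based multivariate outlier detection, usable even when the
    # number of variables exceeds the number of objects (m > n)
    res <- pcout(X, makeplot = FALSE)
    which(res$wfinal01 == 0)   # indices flagged as potential outliers (0 = outlier)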

Finally, we allow for cross-sectional correlation of the disturbances. Our initial estimate of b is the pooled least squares estimator, 2/3. The estimates of the two variances are 0.84444 and 0.32222, as before, while the cross-sectional covariance estimate is... [Pg.60]

Location and covariance estimation in high dimensions: PCA (Section 6.5) [Pg.169]

The goal of robust PCA methods is to obtain principal components that are not influenced much by outliers. A first group of methods is obtained by replacing the classical covariance matrix with a robust covariance estimator, such as the reweighted MCD estimator [45] (Section 6.3.2). Let us reconsider the Hawkins-Bradu-Kass data in p = 4 dimensions. Robust PCA using the reweighted MCD estimator yields the score plot in Figure 6.7b. We now see that the center is correctly estimated in the... [Pg.187]
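A minimal sketch of this plug-in approach in R, assuming a data matrix X and two retained components (both illustrative choices): the eigenvectors of the MCD scatter serve as robust principal axes, and scores are computed around the MCD center.

    library(robustbase)
    mcd <- covMcd(X)                       # reweighted MCD location/scatter
    eig <- eigen(mcd$cov)                  # eigenvectors = robust PCA axes
    Xc  <- scale(X, center = mcd$center, scale = FALSE)
    scores <- Xc %*% eig$vectors[, 1:2]    # coordinates for a robust score plot
    plot(scores)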

Unfortunately, the use of these affine equivariant covariance estimators is limited to small to moderate dimensions. To see why, let us again consider the MCD estimator. As explained in Section 6.3.2, if p denotes the number of variables in our data set, the MCD estimator can only be computed if p < h; otherwise, the covariance matrix of any h-subset has zero determinant. Because h < n, the number of variables p may never be larger than n. A second problem is the computation of these robust estimators in high dimensions. Today's fastest algorithms can handle up to about 100 dimensions, whereas there are fields like chemometrics that need to analyze data with dimensions in the thousands. Moreover, the accuracy of the algorithms decreases with the dimension p, so for small data sets it is recommended not to use the MCD in more than 10 dimensions. [Pg.188]

Another approach to robust PCA has been proposed by Hubert et al. [52] and is called ROBPCA. This method combines ideas of both projection pursuit and robust covariance estimation. The projection pursuit part is used for the initial dimension reduction. Some ideas based on the MCD estimator are then applied to this lower-dimensional data space. Simulations have shown that this combined approach yields more accurate estimates than the raw projection pursuit algorithm RAPCA. The complete description of the ROBPCA method is quite involved, so here we will only outline the main stages of the algorithm. [Pg.189]
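In practice one rarely needs to implement ROBPCA from scratch; for instance, the R package rrcov provides it as PcaHubert(). A minimal usage sketch (the data matrix X and k = 2 components are illustrative assumptions):

    library(rrcov)
    res <- PcaHubert(X, k = 2)   # ROBPCA with two components
    summary(res)
    getScores(res)               # robust scores
    getLoadings(res)             # robust loadings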

The right-hand value is used when GREGPLUS is called with LEVEL = 20, whereas (n + mb + 1) is used when LEVEL = 22. LEVEL 20 requires fuller data and gives a fuller covariance analysis; it gives expectation estimates of the covariance elements for each data block. LEVEL 22 gives maximum-density (most probable) covariance estimates; these are smaller than the expectation values, which are averages over the posterior probability distribution. [Pg.219]

Keywords: Data Reconciliation, State Estimation, Covariance Estimation. [Pg.519]

A = (a_ij) is the m × g matrix of the constraint equations. The algorithm for robust covariance estimation can be implemented as follows. [Pg.191]
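The listing itself does not appear in this excerpt. The following is a minimal sketch of one common iteratively reweighted (M-estimator) scheme for this task, not the book's exact procedure; it assumes the constraint residuals r = A y for each time sample are stacked as rows of a matrix R and are zero-mean under the model.

    # Iteratively reweighted robust covariance of constraint residuals.
    # R: n x m matrix of residuals (one row per sample); the Huber-type
    # weights and the chi-square cut-off are illustrative choices.
    robust_cov <- function(R, tol = 1e-6, maxit = 100) {
      V <- cov(R)                                      # classical starting value
      cutoff <- sqrt(qchisq(0.975, df = ncol(R)))      # outlier cut-off
      for (it in seq_len(maxit)) {
        d <- sqrt(mahalanobis(R, rep(0, ncol(R)), V))  # distances to the origin
        w <- pmin(1, cutoff / d)                       # downweight large residuals
        V_new <- crossprod(R * w) / sum(w^2)           # reweighted covariance
        if (max(abs(V_new - V)) < tol) break
        V <- V_new
      }
      V_new
    }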

The analysis of covariance estimator also has the advantage that its variance is generally lower than that using raw outcomes or simple analysis of change scores. Figure 7.2 takes the case where the variances of baselines and outcomes are equal and plots the variance of the three estimators as a function of the correlation between baseline and outcome. It will be seen that the analysis of covariance estimator is everywhere superior to the other two and that the change-score estimator is actually inferior to raw outcomes (which is a constant whatever the correlation coefficient) unless the correlation is greater than 0.5. [Pg.100]
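The 0.5 threshold follows from a short variance calculation. With equal baseline and outcome variances sigma^2 and baseline-outcome correlation rho, the three estimators have (up to a common factor) variances

    \text{raw outcomes: } \sigma^2, \qquad
    \text{change score: } \operatorname{Var}(Y - X) = 2\sigma^2(1 - \rho), \qquad
    \text{ANCOVA: } \sigma^2(1 - \rho^2).

Since 1 - rho^2 = (1 - rho)(1 + rho) is at most both 1 and 2(1 - rho), the analysis of covariance estimator is never worse; and 2sigma^2(1 - rho) < sigma^2 exactly when rho > 0.5, which is why the change score beats raw outcomes only beyond that correlation.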









Covariance

Covariance error estimate

Covariance estimated, between parameter estimates

Covariance estimation methods

Covariance robust estimation

Covariant

Covariates

Covariation

Dependent estimates, estimated covariance between

Estimate covariance


Estimated covariance matrix

Generalized covariance models estimation

Robust Covariance Estimator

Variances and covariances of the least-squares parameter estimates
