Big Chemical Encyclopedia


Correlation covariance

In general, a covariate distribution model considers only the covariates influencing the PK and/or PD of the compound of interest. For example, if the covariates age, sex, and weight are identified as important, then correlated covariates such as height, body mass index, and others might not be incorporated. [Pg.477]

McCabe's techniques of variable reduction [McCabe, 1984] are based on the calculation of the residual correlation (or covariance) matrix Sm of the deleted variables, from which the effect of the retained variables has been removed. This is a square symmetric matrix of order p − k, where k is the number of retained variables; it is obtained from the correlation (or covariance) matrix of the retained variables Sr (of size k×k) and the correlation (or covariance) matrix of the deleted variables Sd (of size q×q, with q = p − k). The variables retained by McCabe's techniques are called principal variables. This terminology can be extended to all sets of retained variables obtained from PCA and correlation analysis. [Pg.465]

Another recommendation was to examine the correlation matrix of the covariates prior to the analysis and determine whether any two covariates were correlated. If two correlated covariates were both found to be important predictor variables, one could transform them into a composite variable, such as combining height and weight into body surface area or body mass index, or use only the covariate with the greater predictive value and omit the other. An untested approach would be to apply principal component analysis and then use one or more of the principal components as covariates in the model. [Pg.220]
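The "untested approach" above can be sketched numerically. This is a minimal illustration, not a method from the source: the height/weight data are simulated, and the first principal component of the two standardized covariates is extracted as a candidate composite covariate.

```python
import numpy as np

rng = np.random.default_rng(0)
height = rng.normal(170, 10, 200)                        # cm, simulated
weight = 0.9 * (height - 170) + rng.normal(70, 8, 200)   # kg, correlated with height

X = np.column_stack([height, weight])
Z = (X - X.mean(axis=0)) / X.std(axis=0)                 # standardize before PCA

# principal components via eigendecomposition of the correlation matrix
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)                     # ascending eigenvalues
pc1 = Z @ eigvecs[:, -1]                                 # score on the largest component

# pc1 could now replace height and weight as a single covariate in the model
```

Because the first component captures the variance shared by the two correlated covariates, it carries most of their joint predictive information in a single, uncorrelated column.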

Laboratory tests may also show a high degree of correlation amongst each other. For example, aspartate aminotransferase (AST) is correlated with alanine aminotransferase (ALT) with a correlation coefficient of about 0.6, and total protein is correlated with albumin, also with a correlation coefficient of about 0.6. Caution needs to be exercised when two or more correlated laboratory values enter the covariate model simultaneously because of the collinearity that may result (Bonate, 1999). As in the linear regression case, inclusion of correlated covariates may produce an unstable model with inflated standard errors and a deflated Type I error rate. [Pg.274]
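One common screen for such collinearity, offered here as an illustration rather than anything prescribed by the source, is the variance inflation factor (VIF). The AST/ALT values below are simulated to have roughly the r ≈ 0.6 correlation quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
ast = rng.normal(30, 10, 500)                       # simulated AST values
alt = 0.6 * (ast - 30) + rng.normal(30, 8, 500)     # simulated ALT, r ~ 0.6 with AST

X = np.column_stack([ast, alt])

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2), where R^2
    comes from regressing column j on the remaining columns (with intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)
```

With r ≈ 0.6 the VIF is only about 1/(1 − 0.36) ≈ 1.6, well below the common rule-of-thumb threshold of 5–10; much higher values would signal the instability described above.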

It can be argued that, within the distinction between the more methodologically oriented first part and the following applied part of this chapter, placing NUS in combination with covariance processing in the first part is somewhat arbitrary. The authors acknowledge in this respect that, of the topics to follow, mixture deconvolution, specific long-range J-correlation covariance maps, and pure-shift covariance NMR may prove equally important and are at present even more widely applied than NUS... [Pg.305]

The off-diagonal elements of the variance-covariance matrix represent the covariances between different parameters. From the covariances and variances, correlation coefficients between parameters can be calculated. When the parameters are completely independent, the correlation coefficient is zero. As the parameters become more correlated, the correlation coefficient approaches a value of +1 or -1. [Pg.102]

Some variables often have dependencies, such as reservoir porosity and permeability (a positive correlation) or the capital cost of a specific equipment item and its lifetime maintenance cost (a negative correlation). We can test the linear dependency of two variables (say x and y) by calculating the covariance between the two variables (σxy) and the correlation coefficient (r) ... [Pg.165]
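The two statistics just named can be written out explicitly. A minimal sketch follows; the porosity/permeability figures are invented purely for illustration:

```python
import numpy as np

# invented illustrative data: permeability rising with porosity
porosity     = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
permeability = np.array([5.0, 20.0, 60.0, 150.0, 400.0])

def covariance(x, y):
    """Sample covariance s_xy = sum((x - xbar)(y - ybar)) / (n - 1)."""
    return np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)

def correlation(x, y):
    """Correlation coefficient r = s_xy / (s_x * s_y); always lies in [-1, +1]."""
    return covariance(x, y) / (x.std(ddof=1) * y.std(ddof=1))
```

The covariance depends on the units of x and y, whereas r is dimensionless, which is why r is the quantity usually reported when testing for linear dependency.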

These indicator covariance models allow differentiation of the spatial correlation of high-valued concentrations (high cut-off) and low- to background-valued concentrations (low cut-off). In the particular case study underlying Figure 3, it was found that high-valued concentration data were more spatially correlated (due to the pollution plume) than lower-valued data. [Pg.117]

It can be shown that all symmetric matrices of the form XᵀX and XXᵀ are positive semi-definite [2]. These cross-product matrices include the widely used dispersion matrices, which can take the form of a variance-covariance or correlation matrix, among others (see Section 29.7). [Pg.31]

A theorem, which we do not prove here, states that the nonzero eigenvalues of the product AB are identical to those of BA, where A is an n×p matrix and B is a p×n matrix [3]. This applies in particular to the eigenvalues of the cross-product matrices XᵀX and XXᵀ, which are of special interest in data analysis as they are related to dispersion matrices such as variance-covariance and correlation matrices. If X is an n×p matrix of rank r, then the product XᵀX has r positive eigenvalues in Λ and possesses r eigenvectors in V, since we have shown above that ... [Pg.39]
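Both facts are easy to verify numerically. The sketch below (not from the source; the random matrix is arbitrary) checks that XᵀX has no negative eigenvalues and that the nonzero eigenvalues of XᵀX and XXᵀ coincide:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))              # n x p, generically of rank p = 3

ev_small = np.linalg.eigvalsh(X.T @ X)   # 3 eigenvalues, ascending
ev_big   = np.linalg.eigvalsh(X @ X.T)   # 5 eigenvalues, ascending

# XᵀX is positive semi-definite: no negative eigenvalues.
# The nonzero eigenvalues of XᵀX and XXᵀ coincide; XXᵀ simply carries
# n - p = 2 extra, numerically zero, eigenvalues.
```

Taking B = Aᵀ (here X and Xᵀ) keeps both products symmetric, so `eigvalsh` applies and the eigenvalues are guaranteed real.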

The matrix Cp contains the variances of the columns of X on the main diagonal and the covariances between the columns in the off-diagonal positions (see also Section 9.3.2.4.4). The correlation matrix Rp is derived from the column-standardized matrix Zp ... [Pg.49]

From Yp and Zp, which have been computed in the previous examples, we obtain the corresponding column-covariance matrix Cp and column-correlation matrix Rp. [Pg.49]
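This construction can be sketched directly. The data matrix below is invented, and the 1/n scaling follows the column-statistics convention of this passage; NumPy's own `cov`/`corrcoef` serve only as a cross-check.

```python
import numpy as np

# invented 4 x 3 data matrix
X = np.array([[1.0, 2.0, 4.0],
              [2.0, 3.0, 3.0],
              [4.0, 5.0, 1.0],
              [5.0, 6.0, 0.0]])
n = X.shape[0]

Yp = X - X.mean(axis=0)          # column-centered matrix
Zp = Yp / X.std(axis=0)          # column-standardized matrix (1/n variances)

Cp = (Yp.T @ Yp) / n             # column-covariance matrix
Rp = (Zp.T @ Zp) / n             # column-correlation matrix, unit diagonal
```

Centering makes the cross-products proportional to covariances; standardizing additionally divides out the scales, leaving pure correlations.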

Similar expressions to those in eqs. (29.72) and (29.73) can be derived for the variance-covariances and correlations between the rows of X ... [Pg.50]

Scaling is a very important operation in multivariate data analysis, and we will treat the issues of scaling and normalisation in much more detail in Chapter 31. It should be noted that scaling has no impact (except when the log transform is used) on the correlation coefficient, and that the Mahalanobis distance is also scale-invariant because the C matrix contains covariances (related to correlation) and variances (related to standard deviation). [Pg.65]
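A quick numerical check of these two invariance claims (an illustration, not from the source): rescaling the columns changes neither the correlation coefficient nor the Mahalanobis distance, because the covariance matrix rescales along with the data.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
X[:, 1] += 0.5 * X[:, 0]                    # make the two columns correlated

def mahalanobis(x, data):
    """Mahalanobis distance of point x from the mean of `data`."""
    mu = data.mean(axis=0)
    C_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ C_inv @ d))

Xs = X * np.array([10.0, 0.01])             # rescale each column differently

r  = np.corrcoef(X,  rowvar=False)[0, 1]
rs = np.corrcoef(Xs, rowvar=False)[0, 1]
d  = mahalanobis(X[0],  X)
ds = mahalanobis(Xs[0], Xs)
# r == rs and d == ds up to rounding: both quantities are scale-invariant
```

Algebraically, scaling by a diagonal matrix D turns C into DCD and the deviation d into Dd, so dᵀ(DCD)⁻¹d collapses back to dᵀC⁻¹d.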

In the following section on preprocessing of the data we will show that column-centering of X leads to an interpretation of the sums of squares and cross-products in XᵀX in terms of the variances-covariances of the columns of X. Furthermore, cos θjj′ then becomes the coefficient of correlation between these columns. [Pg.112]

We have seen that PLS regression (covariance criterion) forms a compromise between ordinary least squares regression (OLS, correlation criterion) and principal components regression (variance criterion). This has inspired Stone and Brooks [15] to devise a method in such a way that a continuum of models can be generated embracing OLS, PLS and PCR. To this end the PLS covariance criterion, cov(t,y) = s_t s_y r, is modified into a criterion T = r... (For... [Pg.342]

We will see that CLS and ILS calibration modelling have limited applicability, especially when dealing with complex situations, such as highly correlated predictors (spectra), presence of chemical or physical interferents (uncontrolled and undesired covariates that affect the measurements), fewer samples than variables, etc. More recently, methods such as principal components regression (PCR, Section 17.8) and partial least squares regression (PLS, Section 35.7) have been... [Pg.352]

The covariances between the parameters are the off-diagonal elements of the covariance matrix. The covariance indicates how closely two parameters are correlated: a large covariance between two parameter estimates indicates a very close correlation, which in practice means that the two parameters may not be estimable separately. This is shown more clearly through the correlation matrix. The correlation matrix, R, is obtained by transforming the covariance matrix as follows: Rij = COVij / (COVii COVjj)^(1/2). [Pg.377]
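This transformation amounts to dividing each covariance by the product of the corresponding standard deviations. A minimal sketch, with an invented (hypothetical) parameter covariance matrix:

```python
import numpy as np

# hypothetical 3-parameter variance-covariance matrix (invented numbers)
C = np.array([[4.0, 3.0, 0.5],
              [3.0, 9.0, 1.2],
              [0.5, 1.2, 1.0]])

s = np.sqrt(np.diag(C))          # standard errors of the parameter estimates
R = C / np.outer(s, s)           # R_ij = COV_ij / (COV_ii * COV_jj)^(1/2)
```

R has unit diagonal and off-diagonal entries in [−1, +1]; here parameters 1 and 2 have correlation 3/(2·3) = 0.5, which would flag only a moderate estimability problem.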

If the inputs are exactly collinear, the inverse of the covariance matrix does not exist and the OLS coefficients cannot be computed. Even with weakly correlated inputs and a low observations-to-inputs ratio, the covariance matrix can be nearly singular, making the OLS solution extremely sensitive to small changes in the measured data. In such cases, OLS is not appropriate for empirical modeling. [Pg.35]
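The sensitivity described above can be demonstrated with simulated data (an illustration, not from the source): two nearly identical inputs make XᵀX ill-conditioned, and a perturbation of the responses far smaller than the noise shifts the OLS coefficients disproportionately.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
x1 = rng.normal(size=n)
x2 = x1 + 1e-4 * rng.normal(size=n)         # nearly identical to x1
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.normal(size=n)

cond = np.linalg.cond(X.T @ X)              # enormous for near-collinear inputs

beta      = np.linalg.solve(X.T @ X, X.T @ y)
y_pert    = y + 1e-6 * rng.normal(size=n)   # tiny perturbation of the data
beta_pert = np.linalg.solve(X.T @ X, X.T @ y_pert)
# beta and beta_pert differ far more than the size of the perturbation
```

Regularized alternatives (ridge, PCR, PLS) replace or shrink the near-zero eigenvalues of XᵀX, which is precisely why they are preferred for such data.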

A study of 398 male and 133 female civil servants in London, England, measured blood pressure, PbB, and serum creatinine concentration; the study found no correlation between blood pressure and PbB after adjustment for significant covariates, including sex, age, cigarette smoking, alcohol intake, and body mass index, in a stepwise multiple regression analysis (Staessen et al. 1990). [Pg.56]

