
The Correlation Coefficient

The equation of the straight line determined from a least squares fit procedure for an experiment in which the instrument readout, R, was plotted on the y-axis and the concentration, C, in parts per million was plotted on the x-axis is [Pg.162]

What is the analyte concentration in the unknown if the instrument readout for the unknown is 0.481? Solution 6.2 [Pg.162]

To test for the significance of an apparently linear relation we calculate the correlation coefficient r defined by (Σ denotes summation over all pairs of observations) [Pg.57]
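The defining equation is not reproduced in this excerpt; for reference, the standard product-moment form, which is presumably the one intended, is

    r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

where n is the number of pairs of observations.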

We will illustrate the use of the statistical treatment of data suspected of being expressed in the form y = a + bx with the data partially given below (there are actually 30 pairs of observations). [Pg.57]

We obtain the following terms (n = number of pairs of observations). [Pg.58]

We see from the table for r (Table IV in the Appendix) that the probability of getting such a value for r in the absence of any correlation is about 0.10; that is to say, 1 time in 10 we could get as large a value for r as we did here even in the absence of any correlation. The evidence for the correlation is thus inadequate. [Pg.58]
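Where the data are available in machine-readable form, the table lookup can be replaced by a direct calculation. A minimal sketch using SciPy is given below; the x and y arrays are invented placeholders, not the 30 pairs of the worked example, and scipy.stats.pearsonr returns both r and the two-sided probability of obtaining such a value of r in the absence of any correlation.

    import numpy as np
    from scipy import stats

    # Hypothetical paired observations (placeholders only).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 2.0, 3.4, 3.1, 4.8, 4.5])

    r, p = stats.pearsonr(x, y)  # correlation coefficient and two-sided p-value
    print(f"r = {r:.3f}, P(as large an r with no correlation) = {p:.3f}")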


The off-diagonal elements of the variance-covariance matrix represent the covariances between different parameters. From the covariances and variances, correlation coefficients between parameters can be calculated. When the parameters are completely independent, the correlation coefficient is zero. As the parameters become more correlated, the correlation coefficient approaches a value of +1 or -1. [Pg.102]
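A minimal sketch of this calculation, assuming the variance-covariance matrix of the fitted parameters is available as a NumPy array (the numbers are illustrative only):

    import numpy as np

    # Hypothetical 2 x 2 variance-covariance matrix for two fitted parameters.
    cov = np.array([[ 0.25, -0.17],
                    [-0.17,  0.36]])

    sd = np.sqrt(np.diag(cov))        # parameter standard deviations
    corr = cov / np.outer(sd, sd)     # r_ij = cov_ij / (sd_i * sd_j)
    print(corr)                       # diagonal = 1; off-diagonal = parameter correlation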

For the acetone-methanol data of Othmer, the correlation coefficient is -0.678, indicating a moderate degree of correlation between the two van Laar parameters. The elongated confidence ellipses shown in Figure 2 further emphasize this correlation. [Pg.104]

Some variables often have dependencies, such as reservoir porosity and permeability (a positive correlation) or the capital cost of a specific equipment item and its lifetime maintenance cost (a negative correlation). We can test the linear dependency of two variables (say x and y) by calculating the covariance between the two variables (σxy) and the correlation coefficient (r) ... [Pg.165]
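The equations themselves are not reproduced in this excerpt; in the usual notation for n pairs of observations (some texts divide by n - 1 rather than n), the sample covariance and the correlation coefficient are

    \sigma_{xy} = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y}), \qquad r = \frac{\sigma_{xy}}{\sigma_x\,\sigma_y}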

A Monte Carlo simulation is fast to perform on a computer, and the presentation of the results is attractive. However, one cannot guarantee that a Monte Carlo simulation run twice with the same input variables will yield exactly the same output, which makes the result less auditable. The more simulation runs performed, the less of a problem this becomes. The simulation as described does not indicate which of the input variables the result is most sensitive to, but one of the routines in Crystal Ball and @RISK does allow a sensitivity analysis to be performed as the simulation is run. This is done by calculating the correlation coefficient of each input variable with the outcome (for example, between area and UR). The higher the coefficient, the stronger the dependence between the input variable and the outcome. [Pg.167]
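A minimal sketch of this kind of sensitivity analysis is given below. It is not the Crystal Ball or @RISK implementation; the input distributions and the volumetric outcome (a UR-style product of area, thickness and recovery factor) are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical input distributions.
    area      = rng.normal(100.0, 15.0, n)   # e.g. km^2
    thickness = rng.normal(20.0, 4.0, n)     # e.g. m
    recovery  = rng.uniform(0.2, 0.4, n)     # recovery factor

    ur = area * thickness * recovery         # simulated outcome

    # Correlation of each input with the outcome: the larger |r|,
    # the more sensitive the result is to that input.
    for name, values in [("area", area), ("thickness", thickness), ("recovery", recovery)]:
        r = np.corrcoef(values, ur)[0, 1]
        print(f"{name:10s} r = {r:+.2f}")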

Figure C2.5.8. Plot of the folding times τ_F as a function of σ for the 22 sequences. This figure shows that under the external conditions when the NBA is the most populated there is a remarkable correlation between τ_F and σ. The correlation coefficient is 0.94. It is clear that over four orders of magnitude of folding times τ_F = exp(−σ/σ_0), where σ_0 is a constant. The filled and open circles correspond to different contact interactions used in C2.5.1. The open squares are for A = 36.
Fig. 2. The contribution c_0 of the CSP approximation to the CI wavefunction and the correlation coefficients d_j a_j/c_0 versus time.
By applying this method, they demonstrated that the removal of insignificant variables increases the quality and reliability of the models, despite the fact that the correlation coefficient, r, always decreases, although only slightly. For example, the characteristics of a model with six orthogonalized descriptors were r = 0.99288, s = 0.9062, F = 127.4, and the quality of this model improved after removal of the two least significant descriptors, to r = 0.9925, s = 0.8553,... [Pg.207]

The idea behind this approach is simple. First, we compose the characteristic vector from all the descriptors we can compute. Then, we define the maximum length of the optimal subset, i.e., the input vector we shall actually use during modeling. As mentioned in Section 9.7, there is always some threshold beyond which an increase in the dimensionality of the input vector decreases the predictive power of the model. Note that the correlation coefficient will always improve with an increase in the input vector dimensionality. [Pg.218]

The magnitude of dependencies in the variables is determined by the correlation coefficient. The correlation coefficient according to Pearson is given by Eq. (1). [Pg.444]

The value of the correlation coefficient ranges from r = -1 to r = +1. In those cases where r = ±1, the data are completely correlated, either positively or negatively (see Figure 9-4). The smaller the absolute value of r, the lower is... [Pg.444]

If the correlation coefficient is r = -1, high values on the x-axis are associated with low values on the y-axis; the relationship is negatively linear. [Pg.445]

If the x and y values are totally independent of each other, the correlation coefficient is r = 0. [Pg.445]

Figure 9-4. Correlation analysis: examples of different values of the correlation coefficient.
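These limiting cases (r = +1, -1 and 0) are easy to reproduce numerically. A brief sketch with synthetic data (not the data behind Figure 9-4):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=1000)

    y_pos = 2.0 * x + 1.0            # perfect positive linear relation -> r = +1
    y_neg = -0.5 * x + 3.0           # perfect negative linear relation -> r = -1
    y_ind = rng.normal(size=1000)    # independent of x                 -> r close to 0

    for label, y in [("positive", y_pos), ("negative", y_neg), ("independent", y_ind)]:
        print(label, round(np.corrcoef(x, y)[0, 1], 3))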
An application of correlation analysis is the detection of related chemical descriptors. When analyzing chemical data, correlation analysis should be used as a first step to identify those descriptors which are interrelated. If two descriptors are strongly correlated, i.e., the correlation coefficient of the two descriptors exceeds a certain value, e.g., r > 0.90, one of the descriptors can be excluded from the data set. [Pg.445]
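A sketch of this pre-screening step is given below; it assumes the descriptors are the columns of a NumPy array, and the function name and the 0.90 threshold are illustrative choices, not a published procedure.

    import numpy as np

    def drop_correlated_descriptors(X, threshold=0.90):
        """Return column indices to keep: for every pair of descriptors with
        |r| > threshold, only the earlier column of the pair is retained."""
        r = np.corrcoef(X, rowvar=False)   # descriptor-descriptor correlation matrix
        keep = []
        for j in range(X.shape[1]):
            if all(abs(r[j, k]) <= threshold for k in keep):
                keep.append(j)
        return keep

    # Usage with a hypothetical descriptor matrix (100 compounds, 5 descriptors):
    # X = np.random.default_rng(2).normal(size=(100, 5))
    # print(drop_correlated_descriptors(X))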

Correlation analysis reveals the interdependence between variables. The statistical measure for the interdependence is the correlation coefficient. [Pg.481]

These first components of the autocorrelation coefficient of the seven physicochemical properties were put together with the other 15 descriptors, providing 22 descriptors. Pairwise correlation analysis was then performed: a descriptor was eliminated if the correlation coefficient was equal to or higher than 0.90, and four descriptors (molecular weight, the number of carbon atoms, and the first component of the 2D autocorrelation coefficient for the atomic polarizability and π-charge) were removed. This left 18 descriptors. [Pg.499]

The correlation coefficient for this equation was 0.994. Such a parabolic dependence of activity on the partition coefficient may reflect partitioning of the drug through several membrane barriers, which enabled the drug to reach its site of action. [Pg.273]

If the normalized method is used in addition, the value of Sjj is 3.8314 × 10^.../σ², where σ² is the variance of the measurement of y. The values of a and b are, of course, the same. The variances of a and b are σa² = 0.2532σ² and σb² = 2.610 × 10^... σ². The correlation coefficient is 0.996390, which indicates that there is a positive correlation between x and y. The small value of the variance for b indicates that this parameter is determined very well by the data. The residuals show no particular pattern, and the predictions are plotted along with the data in Fig. 3-58. If the variance of the measurements of y is known through repeated measurements, then the variance of the parameters can be made absolute. [Pg.502]

Figure 2.15(a) shows the relationship between q and Cp for the component characteristics analysed. Note that there are six points at q = 9, Cp = 0. The correlation coefficient, r, between two sets of variables is a measure of the degree of (linear) association. A correlation coefficient of 1 indicates that the association is deterministic, and a negative value indicates an inverse relationship. The data points have a correlation coefficient r = -0.984. It is evident that the component manufacturing variability risks analysis is satisfactorily modelling the occurrence of manufacturing variability for the components tested. [Pg.57]

The above process could also be performed for the 3-parameter Weibull distribution to compare the correlation coefficients and determine the better-fitting distributional model. Computer-based techniques have been devised as part of the approach to support businesses attempting to determine the characterizing distributions... [Pg.147]

The Hammett equation is said to be followed when a plot of log k against σ is linear. Most workers take as the criterion of linearity the correlation coefficient r, which is required to be at least 0.95 and preferably above 0.98. A weakness of r as a statistical measure of goodness of fit is that r is a function of the slope ρ: if the slope is zero, the correlation coefficient is zero. A slope of zero in an LFER is a chemically informative result, for it demonstrates an absence of a substituent... [Pg.318]

The correlation coefficient ranges between +1 and -1. A perfect positive correlation has r = 1, no correlation at all gives r = 0, and a perfect negative correlation has r = -1. Some examples of correlations are shown in Figure 11.5b. A measure of the significance of a relationship between two variables can be gained by calculating a value of t ... [Pg.231]
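The statistic itself is not reproduced in this excerpt; the form normally used (and presumably the one intended here) is

    t = \frac{|r|\sqrt{n-2}}{\sqrt{1-r^2}}

which is compared with the t-distribution for n - 2 degrees of freedom, n being the number of pairs of observations.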

Although the correlation coefficient r can easily be calculated with the aid of a modern calculator or computer package, the following example will show how the value of r can be obtained. [Pg.144]

Example 9. Quinine may be determined by measuring the fluorescence intensity in 1 M H2SO4 solution (Section 18.4). Standard solutions of quinine gave the following fluorescence values. Calculate the correlation coefficient r. [Pg.144]

Once a linear relationship has been shown to have a high probability by the value of the correlation coefficient (r), then the best straight line through the data points has to be estimated. This can often be done by visual inspection of the calibration graph but in many cases it is far better practice to evaluate the best straight line by linear regression (the method of least squares). [Pg.145]
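A short sketch of both steps, the correlation check followed by the least-squares line, is given below; the calibration readings are invented purely for illustration.

    import numpy as np

    # Hypothetical calibration data: concentration (x) and instrument response (y).
    conc     = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
    response = np.array([0.02, 0.21, 0.39, 0.62, 0.79, 1.01])

    r = np.corrcoef(conc, response)[0, 1]              # correlation coefficient
    slope, intercept = np.polyfit(conc, response, 1)   # least-squares straight line

    print(f"r = {r:.4f}")
    print(f"response = {slope:.4f} * conc + {intercept:.4f}")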


Table 2 also contains the correlation coefficient, r, for each K . If the predicted concentrations for a data set exactly matched the expected concentrations, r would equal 1.0. If there were absolutely no relationship between the predicted and expected concentrations, r would equal 0.0. [Pg.61]

The use of a computer is very helpful for carrying out direct processing of the raw experimental data and for calculating the correlation coefficient and the least-squares estimate of the rate constant. [Pg.59]
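As an illustration, assuming first-order kinetics (so that ln c is linear in time) and using invented concentration-time data, the correlation coefficient and the least-squares rate constant can be obtained in a few lines:

    import numpy as np

    # Hypothetical concentration-time data for a first-order reaction.
    t = np.array([0.0, 100.0, 200.0, 300.0, 400.0])   # s
    c = np.array([1.00, 0.74, 0.55, 0.41, 0.30])      # mol L^-1

    y = np.log(c)
    r = np.corrcoef(t, y)[0, 1]                # close to -1 for first-order behaviour
    slope, intercept = np.polyfit(t, y, 1)
    k = -slope                                 # first-order rate constant

    print(f"r = {r:.4f}, k = {k:.2e} s^-1")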

Jaffe (1953)52 showed that while many rate or equilibrium data conform well to the Hammett equation (as indicated by the correlation coefficient), many such data are outside the scope of the equation in its original form and mode of application. Deviations are commonly shown by para-substituents with considerable +R or -R effect53. Hammett himself found that p-NO2 (+R) showed deviations in the correlation of reactions of anilines or phenols. The deviations were systematic, in that a σ value of ca. 1.27 seemed to apply, compared with 0.78 based on the ionization of p-nitrobenzoic acid. Other examples were soon discovered and it became conventional to treat them similarly in terms of a "duality of substituent constants". [Pg.495]

