
Correlation coefficient problem

A Monte Carlo simulation is fast to perform on a computer, and the presentation of the results is attractive. However, one cannot guarantee that the outcome of a Monte Carlo simulation run twice with the same input variables will yield exactly the same output, making the result less auditable. The more simulation runs performed, the less of a problem this becomes. The simulation as described does not indicate which of the input variables the result is most sensitive to, but one of the routines in Crystal Ball and Risk does allow a sensitivity analysis to be performed as the simulation is run. This is done by calculating the correlation coefficient of each input variable with the outcome (for example between area and UR). The higher the coefficient, the stronger the dependence between the input variable and the outcome. [Pg.167]
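As an illustration of the sensitivity analysis described above (a sketch only, not the actual Crystal Ball or Risk routine), the same ranking can be obtained by correlating each sampled input with the simulated outcome. The input variables, distributions, and units below are assumptions chosen to mimic a UR-style volume calculation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)   # fixing the seed makes reruns reproducible (auditable)
n = 10_000                            # number of Monte Carlo trials

# Hypothetical input distributions (illustrative only, not from the text):
area      = rng.normal(100.0, 20.0, n)   # e.g. productive area
thickness = rng.normal(50.0, 15.0, n)    # e.g. net thickness
recovery  = rng.uniform(0.2, 0.4, n)     # recovery factor

ur = area * thickness * recovery         # outcome, e.g. ultimately recoverable volume (UR)

# Rank the inputs by the strength of their linear dependence with the outcome
for name, x in [("area", area), ("thickness", thickness), ("recovery", recovery)]:
    r = np.corrcoef(x, ur)[0, 1]
    print(f"{name:10s} r = {r:+.2f}")
```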

The known models for describing the retention factor over the whole variable space are based on the three-phase model and contain from three to six parameters and various combinations of two independent factors (micelle concentration, volume fraction of organic modifier). When retention models are compared or the accuracy of fitting is established, the closeness of the correlation coefficient to 1 and the sum of squared residuals or the sum of absolute deviations and their relative values are taken into account. A number of problems appear in this case ... [Pg.45]

Several doubts about the correctness of the usual statistical treatment were expressed already in the older literature (31), and later, attention was called to large experimental errors (142) in ΔH and ΔS and their mutual dependence (143-145). The possibility of an apparent correlation due only to experimental error also was recognized and discussed (1, 2, 4, 6, 115, 116, 119, 146). However, the full danger of an improper statistical treatment was shown only by this reviewer (147) and by Petersen (148). The first correct statistical treatment of a special case followed (149) and provoked a brisk discussion in which Malawski (150, 151), Leffler (152, 153), Palm (3, 154, 155) and others (156-161) took part. Recently, the necessary formulas for a statistical treatment in common cases have been derived (162-164). The heart of the problem lies not in experimental errors, but in the a priori dependence of the correlated quantities, ΔH and ΔS. It is to be stressed in advance that in most cases, the correct statistical treatment has not invalidated the existence of an approximate isokinetic relationship; however, the slopes and especially the correlation coefficients reported previously are almost always wrong. [Pg.419]

However, it is not proper to apply regression analysis in the coordinates ΔH versus ΔS or ΔS versus ΔG, nor to draw lines in these coordinates. The reasons are the same as in Sec. IV.B., and the problem can likewise be treated as a coordinate transformation. Let us denote r_GH as the correlation coefficient in the original (statistically correct) coordinates ΔH versus ΔG, in which s_G and s_H are the standard deviations of the two variables from their averages. After transformation to the coordinates TΔS versus ΔG or ΔH versus TΔS, the new correlation coefficients r_GS and r_SH, respectively, are given by the following equations. (The constant T is without effect on the correlation coefficient.)... [Pg.453]
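The excerpt ends before the equations themselves. As a sketch of what such transformation formulas look like, assume the thermodynamic identity ΔG = ΔH - TΔS, so that TΔS = ΔH - ΔG; elementary covariance algebra, using the notation r_GH, s_G, s_H from the paragraph above, then gives expressions of the following form (a reconstruction, not a quotation of the source):

```latex
% Sketch of the coordinate-transformation argument, assuming ΔG = ΔH - TΔS,
% i.e. TΔS = ΔH - ΔG, with correlation r_GH and standard deviations s_G, s_H.
\[
  r_{GS} = \operatorname{corr}(T\Delta S,\ \Delta G)
         = \frac{r_{GH}\,s_H - s_G}{\sqrt{s_G^{2} + s_H^{2} - 2\,r_{GH}\,s_G\,s_H}},
\]
\[
  r_{SH} = \operatorname{corr}(\Delta H,\ T\Delta S)
         = \frac{s_H - r_{GH}\,s_G}{\sqrt{s_G^{2} + s_H^{2} - 2\,r_{GH}\,s_G\,s_H}}.
\]
% Note that r_SH can approach 1 when s_G is small even if r_GH is small,
% which is why regression in the (ΔH, ΔS) plane is misleading.
```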

Consequently, the proof of calibration should never be limited to the presentation of a calibration graph and the calculation of the correlation coefficient. When the raw calibration data are not presented in such a situation, the validation study most often cannot be evaluated. Once again it should be noted that nonlinearity is not a problem; it is not necessary to work within the linear range only. Any other calibration function can be accepted if it is a continuous function. [Pg.104]
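A small numerical illustration (not taken from the cited source) of why a calibration graph plus a correlation coefficient proves little: a clearly curved response still yields r close to 1, and only the raw data, here the residual pattern, reveal the curvature.

```python
import numpy as np

conc = np.arange(1.0, 11.0)            # hypothetical standard concentrations
resp = 10.0 * conc / (conc + 5.0)      # a deliberately curved (saturating) response

slope, intercept = np.polyfit(conc, resp, 1)
r = np.corrcoef(conc, resp)[0, 1]
print(f"correlation coefficient r = {r:.3f}")   # about 0.97 despite the obvious curvature

residuals = resp - (slope * conc + intercept)
print(np.round(residuals, 3))          # systematic, arched residual pattern exposes the nonlinearity
```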

Likewise for this example, the Z statistic of 1.5606 corresponding to the upper correlation coefficient confidence limit is shown in the graphic below (Graphic 60) as corresponding to a correlation value of 0.91551; this represents the upper confidence limit for the 0.80 correlation example problem. Finally, then, for the example problem the correlation confidence limits are from 0.562575 to 0.91551 (i.e., 0.56 to 0.92). [Pg.395]
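The limits quoted here are consistent with a Fisher z-transform confidence interval for r = 0.80. The sketch below is a minimal reconstruction, assuming a 95% interval and roughly n = 21 observations (the sample size is inferred from the quoted limits, not stated in the excerpt).

```python
import math

r, n, z_crit = 0.80, 21, 1.96           # n and the 95% critical value are assumptions

z = math.atanh(r)                       # Fisher z-transform of the observed r (about 1.0986)
se = 1.0 / math.sqrt(n - 3)             # standard error of z

z_lo, z_hi = z - z_crit * se, z + z_crit * se   # z_hi is about 1.56, matching the 1.5606 quoted above
r_lo, r_hi = math.tanh(z_lo), math.tanh(z_hi)   # back-transform to the correlation scale

print(f"{r_lo:.4f} to {r_hi:.4f}")      # approximately 0.56 to 0.92, as in the example
```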

The null hypothesis test for this problem is stated as follows: are two correlation coefficients r1 and r2 statistically the same (i.e., r1 = r2)? The alternative hypothesis is then r1 ≠ r2. If the absolute value of the test statistic Z(n) is greater than the absolute value of the z-statistic, then the null hypothesis is rejected and the alternative hypothesis accepted - there is a significant difference between r1 and r2. If the absolute value of Z(n) is less than the z-statistic, then the null hypothesis is accepted and the alternative hypothesis is rejected; thus there is not a significant difference between r1 and r2. Let us look at a standard example again (equation 60-22). [Pg.396]
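A minimal sketch of this two-correlation test using the Fisher z-transform; the function name and the numerical inputs are illustrative assumptions, not the values of equation 60-22.

```python
import math

def compare_correlations(r1, n1, r2, n2, z_crit=1.96):
    """Two-sided test of H0: r1 == r2 via the Fisher z-transform."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z_stat = (z1 - z2) / se
    reject = abs(z_stat) > abs(z_crit)   # reject H0 when |Z(n)| exceeds the critical z
    return z_stat, reject

# Illustrative values only:
z_stat, reject = compare_correlations(r1=0.95, n1=30, r2=0.80, n2=30)
print(f"Z = {z_stat:.3f}, significant difference: {reject}")
```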

In principle, the relationships described by equations 66-9 (a-c) could be used directly to construct a function that relates test results to sample concentrations. In practice, there are some important considerations that must be taken into account. The major consideration is the possibility of correlation between the various powers of X. We find, for example, that the correlation coefficient of the integers from 1 to 10 with their squares is 0.974 - a rather high value. Arden describes this mathematically and shows how the determinant of the matrix formed by equations 66-9 (a-c) becomes smaller and smaller as the number of terms included in equation 66-4 increases, due to correlation between the various powers of X. Arden is concerned with computational issues, and his concern is that the determinant will become so small that operations such as matrix inversion will become impossible to perform because of truncation error in the computer used. Our concerns are not so severe; as we shall see, we are not likely to run into such drastic problems. [Pg.443]
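As a quick numerical check of the claims in this paragraph (a sketch, not Arden's own computation), the correlation of the integers 1 to 10 with their squares and the worsening conditioning of the normal-equation matrix can be reproduced directly:

```python
import numpy as np

x = np.arange(1.0, 11.0)
print(np.corrcoef(x, x**2)[0, 1])           # about 0.9746, the value quoted in the text

# Condition number of the normal-equation matrix for the design [1, x, x^2, ..., x^p]
for p in range(1, 6):
    design = np.vander(x, p + 1, increasing=True)
    print(p, np.linalg.cond(design.T @ design))   # grows rapidly -> nearly singular matrix
```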

When it comes to the covariance structure, however, problems become acute. Total inversion requires that a joint probability distribution be known for observations and parameters. This is usually not a problem for observations. The covariance structure among the parameters of the model becomes more obscure: how do we estimate the a priori correlation coefficient between age and initial Sr ratio in our isochron example without seriously infringing on the objectivity of error assessment? When the a priori covariance structure between the observations and the model parameters is estimated, the chances that we actually resort to unsupported and unjustified speculation become immense. Total inversion must be well understood in order for it not to end up as a formal exercise of consistency between a priori and a posteriori estimates. [Pg.310]

A number of studies reported that several kinetic models can describe rate data well when the fits are judged by correlation coefficients and standard errors of the estimates [25,118,131,132]. Despite this, there is often no consistent relation between the equation that gives the best fit and the physicochemical and mineralogical properties of the adsorbent(s) being studied. Another problem with some of the kinetic equations is that they are empirical and no meaningful rate parameters can be obtained. [Pg.196]

Gifford and Hanna tested their simple box model for particulate matter and sulfur dioxide predictions for annual or seasonal averages against diffusion-model predictions. Their conclusions are summarized in Table 5-3. The correlation coefficient of observed concentrations versus calculated concentrations is generally higher for the simple model than for the detailed model. Hanna calculated reactions over a 6-h period on September 30, 1969, with his chemically reactive adaptation of the simple dispersion model. He obtained correlation coefficients of observed and calculated concentrations as follows: nitric oxide, 0.97; nitrogen dioxide, 0.05; and RHC, 0.55. He found a correlation coefficient of 0.48 between observed ozone concentration and an ozone predictor derived from a simple model, but he pointed out that the local inverse wind speed had a correlation of 0.66 with ozone concentration. He derived a critical wind speed formula to define a speed below which ozone prediction will be a problem with the simple model. Further performance of the simple box model compared with more detailed models is discussed later. [Pg.226]

Sometimes, it is not so easy to convince oneself that the solution of the molecular replacement problem has, in fact, been found, even after rigid-body refinement; indeed, the first solution is not always well detached, and different scores may produce different rankings. The most commonly used scores are correlation coefficients on either intensities or structure-factor amplitudes, and R-factors. Even though these criteria are formally related (Jamrog et al., 2004), they can produce different rankings, especially if no solution is clearly detached. Some other criterion is then needed to discriminate between the potential solutions. [Pg.102]
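For orientation only, here is a generic sketch of the two kinds of score mentioned, a linear correlation coefficient on amplitudes and a conventional R-factor with a least-squares scale factor; this is not the scoring code of any particular molecular replacement program.

```python
import numpy as np

def correlation_coefficient(f_obs, f_calc):
    """Linear correlation coefficient between observed and calculated structure-factor amplitudes."""
    return np.corrcoef(f_obs, f_calc)[0, 1]

def r_factor(f_obs, f_calc):
    """Conventional R = sum|Fobs - k*Fcalc| / sum(Fobs), with k a least-squares scale factor."""
    k = np.sum(f_obs * f_calc) / np.sum(f_calc ** 2)
    return np.sum(np.abs(f_obs - k * f_calc)) / np.sum(f_obs)
```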

First Control Run. A large number (7 to 15) of sets of standards and blanks are run and the results tabulated, as in the trial runs. These data are then plotted (responses vs. concentration for all data points, on one graph) and the means, standard deviations, RSDs, the slope, y-intercept, and correlation coefficient are determined. The smaller the value of the y-intercept, the better (the less chance for a contamination or interference problem). The closer the slope is to 1, the better (the more sensitive). At higher concentrations, the standard deviation should get larger, and the RSDs should get smaller (while approaching some limit). If the RSDs are between 30% and 100%, a close approach to the detection limit is indicated. [Pg.42]
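A minimal sketch of the bookkeeping described above, assuming the pooled control-run responses are grouped by standard concentration (the data values and variable names are purely illustrative):

```python
import numpy as np

# Pooled control-run results, grouped by standard concentration (illustrative values only)
data = {
    0.0:  [0.010, 0.020, 0.015],
    1.0:  [0.980, 1.020, 1.010],
    5.0:  [5.10, 4.90, 5.05],
    10.0: [10.2, 9.8, 10.1],
}

conc = np.array([c for c, ys in data.items() for _ in ys])
resp = np.array([y for ys in data.values() for y in ys])

slope, intercept = np.polyfit(conc, resp, 1)     # calibration slope and y-intercept
r = np.corrcoef(conc, resp)[0, 1]                # correlation coefficient of the pooled graph

for c, ys in data.items():
    ys = np.array(ys)
    mean, sd = ys.mean(), ys.std(ddof=1)
    rsd = 100.0 * sd / mean                      # relative standard deviation, percent
    # A very large RSD near the blank signals a close approach to the detection limit
    print(f"conc {c:5.1f}: mean {mean:7.3f}  s {sd:.3f}  RSD {rsd:5.1f}%")

print(f"slope {slope:.4f}  intercept {intercept:.4f}  r {r:.4f}")
```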

Determine the correlation coefficient r for the best-fit equation obtained in the problem on p. 75 involving thermocouple voltage versus temperature. [Pg.77]

Note: Problems 3 through 7 and 10 through 14 are, in part, given at two levels of sophistication and expectation. The "best" answers involve doing the part marked with an asterisk (*) instead of the immediately preceding part. The parts with an asterisk involve the use of the method of least squares, correlation coefficients, and confidence intervals for the slopes and intercepts; the preceding parts do not... [Pg.79]

Table 12.3 compares the estimated analyte concentrations for DTLD, PARAFAC, and PARAFAC x 3 noise (PARAFAC with the addition of a factor of three greater random errors) applied to the same calibration problem. Table 12.4 is analogous to Table 12.3, except that it also presents the squared correlation coefficients between the true and estimated X-way and Y-way profiles for all three species present in the six samples. It is first evident that PARAFAC slightly outperforms DTLD when applied to the same calibration problem. However, the improvement often lies in the third or fourth decimal place and is hardly significant when compared with the overall precision of the data. This near equivalence of DTLD and PARAFAC is rooted in the fact that DTLD performs admirably, and there is little room for... [Pg.494]

IA has been further developed by including a so-called coefficient of correlation in order to account for possible correlations between the susceptibilities of the individuals within a population (tolerance correlation) (Plackett and Hewlett 1963a, 1963b). This coefficient is typically denoted as r, which varies from -1 (completely negative correlation) to +1 (completely positive correlation). These extensions of the basic IA equation (Equation 4.4) have been formulated for binary mixtures and quantal endpoints only. An application to multicomponent mixtures is a difficult and as yet unresolved problem, because the univariate r becomes a high-dimensional correlation matrix, which is not only problematic to determine but also extremely difficult to interpret. Furthermore, scientifically sound correlation coefficients that are tailored to the mixture of concern have to be available for each assessment. However, this is hardly ever the case. As a consequence, to our knowledge, no multicomponent mixture study has been published that demonstrates the applicability of IA with r ≠ 0. [Pg.128]

Problem 6.1 Determining Purity Within a Two-Component Cluster: Derivatives, Correlation Coefficients and PC Plots


See other pages where Correlation coefficient problem is mentioned: [Pg.218]    [Pg.490]    [Pg.95]    [Pg.275]    [Pg.343]    [Pg.383]    [Pg.191]    [Pg.292]    [Pg.870]    [Pg.275]    [Pg.178]    [Pg.234]    [Pg.96]    [Pg.282]    [Pg.277]    [Pg.118]    [Pg.174]    [Pg.78]    [Pg.138]    [Pg.166]    [Pg.233]    [Pg.271]    [Pg.236]    [Pg.181]    [Pg.559]    [Pg.390]    [Pg.138]    [Pg.22]    [Pg.324]    [Pg.123]    [Pg.65]   







Coefficient correlation

Correlation problem

Pearson Correlation Coefficient Sample Problem
