
Variance constant

The essential assumption of this manuscript is the existence of a constant variance of Gaussian errors along the trajectory. While we attempted to correlate the variance with the high frequency motions, many uncertainties and questions remain. These are topics for future research. [Pg.279]

Various Langmuir-Hinshelwood mechanisms were assumed. CO and CO2 were assumed to adsorb on one kind of active site, s1, and H2 and H2O on another kind, s2. The H2 adsorbed with dissociation and all participants were assumed to be in adsorptive equilibrium. Some 48 possible controlling mechanisms were examined, each with 7 empirical constants. Variance analysis of the experimental data reduced the number to three possibilities. The rate equations of the three reactions are stated for the mechanisms finally adopted, with the constants correlated by the Arrhenius equation. [Pg.2079]

There are some restrictions that we do not consider here. Our primary requirement is that the y_i are normally distributed (for a given set of x_ij) about their mean true values with constant variance. We also, for the present, assume that the errors in the x_ij are negligible relative to those in y_i. [Pg.42]

This expression constitutes an improvement. There are two advantages. First, the statistical reliability of the data analysis improves, because the variance in [A] is about constant during the experiment, whereas that of the quantity on the left side of Eq. (3-27) is not. Proper least-squares analysis requires nearly constant variance of the dependent variable. Second, one cannot as readily appreciate what the quantity on the left of Eq. (3-27) represents as one can do with [A]_t. Any discrepancy can more easily be spotted and interpreted in a display of [A] itself. [Pg.51]

In order to formulate the statistical problem generally, let us return to the Arrhenius graph (Figure 5) and ask the question of how to estimate the position of the common point of intersection, if it exists (162). That is, in the coordinates x = 1/T and y = log k, a family of l straight lines is given with the slopes b_i (i = 1, 2, ..., l) and with a common point of intersection (x_0, y_0). The i-th line is determined by m_i points (m_i > 2) with coordinates (x_ij, y_ij), where j = 1, 2, ..., m_i. Instead of the true coordinates y_ij, only the values Y_ij = y_ij + e_ij are available, e_ij being random variables with a zero average value and a constant variance, σ². If the hypothesis of a common point of intersection is accepted, e_ij may be identified with the experimental error. [Pg.440]

Statistical testing of model adequacy and significance of parameter estimates is a very important part of kinetic modelling. Only those models with a positive evaluation in statistical analysis should be applied in reactor scale-up. The statistical analysis presented below is restricted to linear regression and normal or Gaussian distribution of experimental errors. If the experimental error has zero mean, constant variance, and is independently distributed, its variance can be evaluated by dividing SS_res by the number of degrees of freedom, i.e. [Pg.545]
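As a concrete illustration of this variance estimate, here is a minimal sketch in Python (all data values and variable names are hypothetical): a straight line is fitted by least squares and SS_res is divided by the degrees of freedom n - p.

```python
import numpy as np

# Hypothetical data: n observations, p = 2 parameters (intercept, slope)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

X = np.column_stack([np.ones_like(x), x])        # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares estimates
residuals = y - X @ beta
ss_res = np.sum(residuals**2)                    # residual sum of squares
dof = len(y) - X.shape[1]                        # degrees of freedom, n - p
s2 = ss_res / dof                                # estimate of the error variance
print(f"SS_res = {ss_res:.4f}, s^2 = {s2:.4f}")
```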

Parameter estimates are tested on whether they differ significantly from zero at a certain probability level. If not, the parameter should be dropped from the model and the model should be redefined, even if the test for model adequacy was positive. When the errors have constant variance, the random variable... [Pg.547]
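A sketch of such a significance test under the constant-variance assumption (all data below are hypothetical): the standard error of each estimate comes from the diagonal of s²(XᵀX)⁻¹, and a parameter whose |t| value does not exceed the critical t-value is a candidate for removal from the model.

```python
import numpy as np
from scipy import stats

# Hypothetical data: y modeled as b0 + b1*x1 + b2*x2
rng = np.random.default_rng(0)
n = 20
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 5, n)
y = 1.5 + 2.0 * x1 + rng.normal(0, 1.0, n)       # x2 has no real effect here

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])            # residual variance estimate
cov_beta = s2 * np.linalg.inv(X.T @ X)           # covariance of the estimates
t_vals = beta / np.sqrt(np.diag(cov_beta))       # t statistics
t_crit = stats.t.ppf(0.975, df=n - X.shape[1])   # two-sided 95% critical value
for name, t in zip(["b0", "b1", "b2"], t_vals):
    print(f"{name}: t = {t:.2f}, significant = {abs(t) > t_crit}")
```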

Even if we make the stringent assumption that the errors in the measurement of each variable (i = 1, 2, ..., N; j = 1, 2, ..., R) are independently and identically distributed (i.i.d.) normally with zero mean and constant variance, it is rather difficult to establish the exact distribution of the error term e_i in Equation 2.35. This is particularly true when the expression is highly nonlinear. For example, this situation arises in the estimation of parameters for nonlinear thermodynamic models and in the treatment of potentiometric titration data (Sutton and MacGregor, 1977; Sachs, 1976; Englezos et al., 1990a, 1990b). [Pg.20]

If we assume that the residuals in Equation 2.35 (e_i) are normally distributed, their covariance matrix (Σ_i) can be related to the covariance matrix of the measured variables through the error propagation law. Hence, if for example we consider the case of independent measurements with a constant variance, i.e. [Pg.20]

This choice of Q_i yields ML estimates of the parameters if the error terms in each response variable and for each experiment (ε_ij, i = 1, ..., N; j = 1, ..., m) are independently distributed normally with zero mean and constant variance. Namely, the variance of a particular response variable is constant from experiment to experiment; however, different response variables have different variances, i.e.,... [Pg.27]

In this case we assume that e is white noise, i.e., the e_i's are identically and independently distributed normally with zero mean and a constant variance σ². Thus, the model equation can be rewritten as... [Pg.219]

The values of the elements of the weighting matrices R_i depend on the type of estimation method being used. When the residuals in the above equations can be assumed to be independent, normally distributed with zero mean and the same constant variance, Least Squares (LS) estimation should be performed. In this case, the weighting matrices in Equation 14.35 are replaced by the identity matrix I. Maximum likelihood (ML) estimation should be applied when the EoS is capable of calculating the correct phase behavior of the system within the experimental error. Its application requires the knowledge of the measurement... [Pg.256]

Now we ask ourselves the question: If we calculate the standard deviation for a set of data (or errors) from these two formulas, will they give us the same answer? And the answer to that question is that they will, IF (that's a very big "if", you see) the data and the errors have the characteristics that statisticians consider good statistical properties: random, independent (uncorrelated), constant variance, and in this case a Normal distribution, and for errors, a mean (μ) of zero as well. For a set of data that meets all these criteria, we can expect the two computations to produce the same answer (within the limits of what is sometimes loosely called "statistical variability"). [Pg.427]

Now, if (m2 > g), the solution of Eq. (10.24), under the assumption of an independent and normal error distribution with constant variance, can be obtained as the maximum likelihood estimator of d and is given by... [Pg.206]

Taking into account deviations from constant variance is a particularly straightforward matter. If, for example, the variable y_i of Eq. (19) possesses a variance σ_i² which varies from data point to data point, then Eq. (19) divided by σ_i² is the appropriate sum of squares to minimize. If, in general, we define... [Pg.114]
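A minimal sketch of this weighting (all numbers hypothetical): each squared residual is divided by its own variance σ_i², which for a linear model amounts to weighted least squares with weights w_i = 1/σ_i².

```python
import numpy as np

# Hypothetical calibration data with a known variance for each point
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([1.2, 2.1, 4.3, 5.8, 8.4, 9.7])
sigma2 = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.40])   # variance of each y_i

w = 1.0 / sigma2                                  # weights = 1 / variance
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
# Weighted least squares: minimize sum of w_i * (y_i - X beta)^2
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("WLS estimates:", beta_wls, " OLS estimates:", beta_ols)
```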

In transforming the independent variables alone, it is assumed that the dependent variable already has all the properties desired of it. For example, if the y's are normally and independently distributed with constant variance, at least approximately, then any transformations such as described in Section VI,B,1 would be unnecessary. Under such assumptions, Box and Tidwell (B17) have shown how to transform the independent variables to reduce a fitted linear function to its simplest form. For example, a function that has been empirically fitted by... [Pg.161]

Residuals are uncorrelated and normally distributed with mean 0 and constant variance σ². [Pg.135]

Both assumptions are mainly needed for constructing confidence intervals and tests for the regression parameters, as well as for prediction intervals for new observations in x. The assumption of normal distribution additionally helps to avoid skewness and outliers; mean 0 guarantees a linear relationship. The constant variance, also called homoscedasticity, is also needed for inference (confidence intervals and tests). This assumption would be violated if the variance of y (which is equal to the residual variance σ², see below) were dependent on the value of x, a situation called heteroscedasticity (see Figure 4.8). [Pg.135]
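One simple way to screen for such a violation is to fit the model, split the residuals into groups (here by low versus high x), and test whether the group variances differ. The sketch below uses scipy's Levene test on simulated heteroscedastic data; the data and the two-group split are assumptions made only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 40)
# Hypothetical heteroscedastic data: the noise grows with x
y = 2.0 + 0.5 * x + rng.normal(0, 0.05 * x)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

low, high = resid[x <= np.median(x)], resid[x > np.median(x)]
stat, p = stats.levene(low, high)                 # H0: equal group variances
print(f"Levene statistic = {stat:.2f}, p = {p:.4f}")
# A small p-value suggests the constant-variance assumption is violated
```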

Two procedures for improving precision in calibration curve-based analysis are described. A multiple curve procedure is used to compensate for poor mathematical models. A weighted least squares procedure is used to compensate for non-constant variance. Confidence band statistics are used to choose between alternative calibration strategies and to measure precision and dynamic range. [Pg.115]

Calibration curves yield the best precision at the mean concentration of the standards. For example, a curve based on standards with concentrations of 1, 4 and 10 yields best precision at 5 (assuming constant variance). To achieve maximum precision the standards should be selected so that their mean concentration is equal to the most important sample concentration, such as an action level. The curve will yield increasingly poor precision with increasing distance from this mean. [Pg.116]
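This behavior follows from the usual confidence-band expression for a regression line, whose half-width grows with the squared distance of x0 from the mean of the standards. A small sketch using the standards 1, 4, and 10 from the example (the residual standard deviation and the t value are hypothetical placeholders):

```python
import numpy as np

x = np.array([1.0, 4.0, 10.0])          # standard concentrations from the example
x_bar = x.mean()                         # 5.0, the point of best precision
Sxx = np.sum((x - x_bar) ** 2)
s = 0.1                                  # hypothetical residual standard deviation
t_crit = 12.71                           # t(0.975) for 1 degree of freedom (3 points, 2 parameters)

def band_halfwidth(x0):
    """Half-width of the confidence band for the mean response at x0."""
    return t_crit * s * np.sqrt(1.0 / len(x) + (x0 - x_bar) ** 2 / Sxx)

for x0 in [1.0, 5.0, 10.0, 15.0]:
    print(f"x0 = {x0:5.1f}: half-width = {band_halfwidth(x0):.3f}")
```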

The least-squares curve-of-best-fit procedure implicitly assumes the same variance (standard deviation) at all concentrations. This assumption is rarely correct. Figure 3a shows hypothetical replicate standard analysis data with constant variance. This pattern is almost never seen in routine chemical analyses. Figure 3b shows a much more realistic pattern in which the variance increases with concentration. [Pg.116]

These factors are used in the equations given in Table I. The computation requires only that the variance ratios be accurately known. The absolute precision of the method may change from day to day without affecting the validity of either the least-squares curve-of-best-fit procedure or the confidence band calculations. (It is not practical to regularly monitor local variances, and errors may develop in variance ratios. However, the error due to incorrect ratios will almost always be much less than the error due to assuming constant variance. Even guessed values of, say, S at a concentration are likely to yield more precise data.)... [Pg.122]

Ordinary least squares regression requires constant variance across the range of data. This has typically not been satisfied with chromatographic data (4, 9, 10). Some have adjusted data to constant variance by a weighted least squares method ( ). The other general adjustment method has been by transformation of data. The log-log transformation is commonly used (9, 10). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]
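A sketch of the log-log adjustment mentioned here (the response and amount arrays are hypothetical): both variables are log-transformed before an ordinary least-squares fit, and an unknown amount is back-calculated from its response.

```python
import numpy as np

amount = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 20.0])       # hypothetical standards
response = np.array([21.0, 98.0, 205.0, 990.0, 2050.0, 3900.0])

log_x, log_y = np.log10(amount), np.log10(response)
slope, intercept = np.polyfit(log_x, log_y, 1)             # fit log(y) = a + b*log(x)
print(f"log-log fit: log10(response) = {intercept:.3f} + {slope:.3f} * log10(amount)")

# Back-calculate an unknown amount from its measured response
resp_unknown = 500.0
amount_est = 10 ** ((np.log10(resp_unknown) - intercept) / slope)
print(f"estimated amount: {amount_est:.2f}")
```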

We will describe an accurate statistical method that includes a full assessment of error in the overall calibration process, that is, (1) the confidence interval around the graph, (2) an error band around unknown responses, and finally (3) the estimated amount intervals. To properly use the method, data will be adjusted by using general data transformations to achieve constant variance and linearity. It utilizes a six-step process to calculate amounts or concentration values of unknown samples and their estimated intervals from chromatographic response values, using calibration graphs that are constructed by regression. [Pg.135]

Figure 1. Plots showing the calibration process. A. Response transformation to constant variance: examples showing a. too little, b. appropriate, and c. too much transformation power. B. Amount transformation in conforming to a (linear) model. C. Construction of p. confidence bands about the regressed line, q. response error bounds, and intersection of these to determine r. the estimated amount interval.
Since data from chromatography standards typically do not satisfy the assumptions of constant variance or linearity, a procedure described above for fitting a family of transformations on the y and x will be used. We assume for the rest of... [Pg.138]

Mitchell, and Hills ( ). They use weighted least squares to resolve the non-constant variance of the response signal for different concentrations, whereas we transform the response to achieve constant variance. [Pg.142]

Response Transformation. Step 1. We found that the calibration graph response data obtained from gas chromatography seldom have constant variance along the length of the graph. The data in Tables I-III clearly show that the larger the response value, the larger the variance of the response at that level. Fenvalerate in Table I, chlordecone (kepone) in Table II and chlorothalonil in Table III have the information for untreated data (at a... [Pg.142]

This situation shows two problems: first, the application of ordinary least squares estimation, which requires constant variance, is not appropriate with untreated data; second, the large variance of the largest numbers in such data excessively controls the direction or slope of the graph. [Pg.144]
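The second problem can be illustrated with a short simulation (all numbers hypothetical): when the noise is proportional to the response, the scatter of the unweighted slope is dominated by the large standards, whereas a fit weighted by 1/variance is noticeably more precise.

```python
import numpy as np

rng = np.random.default_rng(2)
amount = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 20.0])
true_resp = 200.0 * amount
sigma = 0.05 * true_resp                     # noise proportional to the response
X = np.column_stack([np.ones_like(amount), amount])
W = np.diag(1.0 / sigma**2)

ols_slopes, wls_slopes = [], []
for _ in range(2000):
    y = true_resp + rng.normal(0, sigma)
    ols_slopes.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    wls_slopes.append(np.linalg.solve(X.T @ W @ X, X.T @ W @ y)[1])

print(f"unweighted (OLS) slope standard deviation: {np.std(ols_slopes):.3f}")
print(f"weighted (WLS) slope standard deviation:   {np.std(wls_slopes):.3f}")
```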

Several solutions have been suggested for the problem of non-constant variance (heteroscedasticity). The simplest is to limit the range of the graph (1). The resulting range, however, would be so small that it would be impractical to use. [Pg.144]

Another solution to the problem of non-constant variance is to transform the response data. A common way of transforming data has been by taking the logarithms of both the response and amount variables (8-10). However, for all the data we looked at, the log transformation has been too strong. See Tables I and V. [Pg.144]

Constant Variances. Response values from the electron capture chromatographic analysis of the insecticide fenvalerate were transformed by the process described above. The six response values at each of six different amount levels were transformed by a series of powers, and the variances calculated at each level (Table I). For a transformation power of 0.5 the value of the variances increased from 0.001 to 0.338 as the response increased. When the logarithm of the response was used, the value of the variances decreased from 0.00085 to 0.00008 as the response increased. Raising the responses to the 0.15 power gave calculated variances that remained roughly constant across the range of amounts. [Pg.145]
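A sketch of this kind of transformation screening (the replicate responses below are simulated stand-ins, not the Table I fenvalerate data): each candidate power is applied to the replicates at every amount level, the within-level variances are computed, and the max/min variance ratio (a Hartley-type criterion, referred to in the excerpts that follow) indicates how close to constant the variances have become.

```python
import numpy as np

rng = np.random.default_rng(3)
amounts = [0.05, 0.1, 0.5, 1.0, 5.0, 10.0]
# Simulated replicates (6 per level); the noise model is an assumption for illustration
replicates = {a: 200.0 * a + rng.normal(0, 0.2 * (200.0 * a) ** 0.85, 6) for a in amounts}

def variance_ratio(power):
    """Max/min ratio of within-level variances after raising responses to `power`."""
    variances = [np.var(r ** power, ddof=1) for r in replicates.values()]
    return max(variances) / min(variances)

for power in [1.0, 0.5, 0.3, 0.15, 0.1]:
    print(f"power {power:4.2f}: max/min variance ratio = {variance_ratio(power):10.1f}")
# The power giving a ratio closest to 1 comes nearest to constant variance
```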

From the authors' experience, not all real data sets can be transformed to constant variance using power transformations. Instrumentation imperfections in our laboratory resulted in data that had variable variances despite our attempts at transformation. The transformed chlorothalonil data set, as shown in Table III, illustrates a set where the transformations attempted nearly failed to give constant variance across the response range; in this case the Hartley criterion was barely satisfied. The replications at the 0.1 and 20 ng levels had excessively high variance relative to the other levels. An example where constant variance was easily achieved utilized data of the insecticide chlordecone (kepone), also on the electron capture detector. Table II shows that using a transformation power of 0.3 resulted in nearly constant variance. [Pg.146]

Transformation Power of Selected Data Sets. Hartley statistic values are shown in Tables I-III for fenvalerate, chlordecone, and chlorothalonil. In each case a power transformation of sufficient size was found, at a 93% probability, which satisfied the H criterion. For fenvalerate the power of 0.15 was satisfactory for constant variance. For chlordecone the whole range of powers from 0.30 to 0.10 satisfied the critical H value (listed in order of increasing transformation power). Despite the apparent non-constancy of the chlorothalonil data shown in Table III, the critical H was satisfied for the range of power transformations from 0.23 to 0.10. [Pg.146]

Examination of Data. At this point, examination of the plot of regression residuals versus transformed amount showed two conditions: first, the condition of constant variance across the... [Pg.150]

However, with improper transformation the calculation of confidence bands and amount interval estimates is erroneous because of the non-constant variance ... [Pg.164]

Differences in calibration graph results were found in amount and amount interval estimations in the use of three common data sets of the chemical pesticide fenvalerate by the individual methods of three researchers. Differences in the methods included constant variance treatments by weighting or transforming response values. Linear single and multiple curve functions and cubic spline functions were used to fit the data. Amount differences were found between three hand plotted methods and between the hand plotted and three different statistical regression line methods. Significant differences in the calculated amount interval estimates were found with the cubic spline function due to its limited scope of inference. Smaller differences were produced by the use of local versus global variance estimators and a simple Bonferroni adjustment. [Pg.183]

The methods used were those of Mitchell (1), Kurtz, Rosenberger, and Tamayo (2), and Wegscheider (3). Mitchell accounted for heteroscedastic error variance by using weighted least squares regression. Mitchell fitted a curve either to all or part of the calibration range, using either a linear or a quadratic model. Kurtz et al. achieved constant variance by a... [Pg.183]

