Big Chemical Encyclopedia


Regression mean square

The data lie on a straight line only for Plot (1), the graph of [HI] vs. t. Therefore, the reaction is zero order with respect to HI. The slope of the line is −0.00546 mM s⁻¹, obtained with a least-squares regression fitting program. However, the slope can also be estimated from any two points on the line. If we use the first and last points ... [Pg.266]
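The least-squares slope mentioned above can be sketched in a few lines. This is a minimal illustration, not the book's data: the time points and concentrations below are invented to decay roughly linearly, and the function name is my own.

```python
# Least-squares slope for a zero-order reaction: [HI] falls linearly with t,
# so the rate constant is minus the slope of [HI] vs. t.
# Illustrative data only, not the values from the original text.

def least_squares_slope(t, y):
    """Ordinary least-squares slope of y vs. t."""
    n = len(t)
    t_mean = sum(t) / n
    y_mean = sum(y) / n
    num = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den

t = [0, 100, 200, 300, 400]            # s
hi = [5.00, 4.45, 3.91, 3.36, 2.82]    # [HI] in mM, roughly linear decay
slope = least_squares_slope(t, hi)
rate_constant = -slope                  # zero order: rate = k, units mM/s
```

Using only the first and last points instead gives (2.82 − 5.00)/400 = −0.00545 mM s⁻¹ here, which illustrates the text's point that two well-chosen points already estimate the slope closely.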

ANOVA table entries: sum of squares, degrees of freedom, and mean square for the regression, residual, lack-of-fit, and pure-error terms. [Pg.104]

The ordered F ratio for a single predictor is defined by the ratio of mean squares, regression/residual ... [Pg.2278]
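For a single predictor, this ratio can be computed directly from a straight-line fit: the regression sum of squares carries 1 degree of freedom and the residual sum of squares n − 2. A minimal sketch with invented data (the function name is mine):

```python
# Single-predictor ANOVA: F = (regression mean square) / (residual mean square).
# Regression MS has 1 df; residual MS has n - 2 df. Illustrative data only.

def anova_f(x, y):
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxy = sum((a - xm) * (b - ym) for a, b in zip(x, y))
    sxx = sum((a - xm) ** 2 for a in x)
    slope = sxy / sxx
    intercept = ym - slope * xm
    fitted = [intercept + slope * a for a in x]
    ss_reg = sum((f - ym) ** 2 for f in fitted)              # 1 df
    ss_res = sum((b - f) ** 2 for b, f in zip(y, fitted))    # n - 2 df
    ms_reg = ss_reg / 1
    ms_res = ss_res / (n - 2)
    return ms_reg / ms_res

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
F = anova_f(x, y)
```

A large F (compared with the tabulated F for 1 and n − 2 degrees of freedom) indicates that the predictor explains far more variance than the residual noise.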

The results obtained by linear regression (LR) and by partial least squares regression (PLS) methods have been compared to quantify the O-H signal in anhydrite samples. The PLS quality is characterized by a correlation coefficient of 0.9942 (cross-validation) using four factors and a root mean square error of calibration (RMSEC) of 0.058. The correlation coefficient of the LR method obtained was 0.9753. [Pg.200]

A reading of Section 2.2 shows that all of the methods for determining reaction order can also lead to estimates of the rate constant, and very commonly the order and rate constant are determined concurrently. However, the integrated rate equations are the most widely used means of rate constant determination. These equations can be solved analytically, graphically, or by least-squares regression analysis. [Pg.31]

One asterisk indicates significance at the 95% level, two asterisks at the 99% level. NS, not significant at the 95% level. Calculated by dividing the mean square of the line by the mean square for error; in this case, deviations from double regression are used as an estimate of error. Significance determined from tables; cf., e.g., G. W. Snedecor, Statistical Methods, 4th Edn., Iowa State College Press, Ames, 1946. [Pg.260]

A difficulty with Hansch analysis is to decide which parameters and functions of parameters to include in the regression equation. This problem of selection of predictor variables has been discussed in Section 10.3.3. Another problem is due to the high correlations between groups of physicochemical parameters. This is the multicollinearity problem which leads to large variances in the coefficients of the regression equations and, hence, to unreliable predictions (see Section 10.5). It can be remedied by means of multivariate techniques such as principal components regression and partial least squares regression, applications of which are discussed below. [Pg.393]

The quantities AUMC and AUSC can be regarded as the first and second statistical moments of the plasma concentration curve. These two moments have an equivalent in descriptive statistics, where they define the mean and variance, respectively, in the case of a stochastic distribution of frequencies (Section 3.2). From the above considerations it appears that the statistical moment method strongly depends on numerical integration of the plasma concentration curve Cp(t) and its product with t and (t − MRT). Multiplication by t and (t − MRT) tends to amplify the errors in the plasma concentration Cp(t) at larger values of t. As a consequence, the estimation of the statistical moments critically depends on the precision of the measurement process that is used in the determination of the plasma concentration values. This contrasts with compartmental analysis, where the parameters of the model are estimated by means of least squares regression. [Pg.498]
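The numerical-integration step can be sketched with the trapezoidal rule. The concentrations below are invented, and a real analysis would also extrapolate the tail to infinite time, which is omitted here:

```python
# Noncompartmental moments by trapezoidal integration (illustrative sketch):
# AUC = integral of Cp dt, AUMC = integral of t*Cp dt, MRT = AUMC / AUC.
# No extrapolation beyond the last sample is attempted.

def trapz(t, y):
    """Trapezoidal-rule integral of y over t."""
    return sum((t[i + 1] - t[i]) * (y[i] + y[i + 1]) / 2
               for i in range(len(t) - 1))

t  = [0, 1, 2, 4, 8, 12]                    # h
cp = [10.0, 7.4, 5.5, 3.0, 0.9, 0.3]        # plasma concentration (arbitrary units)

auc  = trapz(t, cp)
aumc = trapz(t, [ti * ci for ti, ci in zip(t, cp)])
mrt  = aumc / auc
```

The weighting of late points by t in the AUMC integrand is exactly why measurement errors at large t are amplified, as the passage notes.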

The MSWD and probability of fit. All EWLS algorithms calculate a statistical parameter from which the observed scatter of the data points about the regression line can be quantitatively compared with the average amount of scatter to be expected from the assigned analytical errors. Arguably the most convenient and intuitively accessible of these is the so-called MSWD parameter (Mean Square of Weighted Deviates; McIntyre et al. 1966; Wendt and Carl 1991), defined as ... [Pg.645]
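The definition can be sketched as a reduced chi-square: each deviation from the fitted line is weighted by its assigned analytical error, squared, summed, and divided by the degrees of freedom. The data below are invented and the function name is mine; the fitted values are taken as given rather than computed by an EWLS fit:

```python
# MSWD sketch: squared deviations of data from a fitted line, each weighted
# by its assigned analytical error, averaged over the degrees of freedom
# (n minus the 2 fitted line parameters). MSWD near 1 means the scatter is
# consistent with the assigned errors; MSWD >> 1 means excess scatter.

def mswd(observed, fitted, sigma, n_params=2):
    n = len(observed)
    chi2 = sum(((o - f) / s) ** 2 for o, f, s in zip(observed, fitted, sigma))
    return chi2 / (n - n_params)

obs = [1.02, 2.05, 2.95, 4.10, 5.00]   # measured values
fit = [1.00, 2.00, 3.00, 4.00, 5.00]   # values on the regression line
sig = [0.05, 0.05, 0.05, 0.05, 0.05]   # assigned 1-sigma analytical errors

value = mswd(obs, fit, sig)
```

Here the scatter is about twice what the assigned errors predict, so the MSWD comes out near 2.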

Optimization of the PPR model is based on minimizing the mean-square error of the approximation, as in backpropagation networks and as shown in Table I. The projection directions α, basis functions θ, and regression coefficients β are optimized, one at a time for each node, while keeping all other parameters constant. New nodes are added to approximate the residual output error. The parameters of previously added nodes are optimized further by backfitting, and the previously fitted parameters are adjusted by cyclically minimizing the overall mean-square error of the residuals, so that the overall error is further minimized. [Pg.39]

A variety of statistical parameters have been reported in the QSAR literature to reflect the quality of the model. These measures indicate how well the model fits existing data, i.e., they measure the explained variance of the target parameter y in the biological data. Some of the most common measures of regression are root mean square error (rmse), standard error of estimate (s), and coefficient of determination (R²). [Pg.200]
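The three measures can be computed from the observed and fitted values alone. A minimal sketch with invented numbers (note that s differs from rmse only in dividing by the residual degrees of freedom, n − p, rather than n):

```python
# Common regression quality measures, on illustrative data:
# rmse, standard error of estimate s, and coefficient of determination R².
import math

y     = [3.1, 4.0, 5.2, 6.1, 6.9]   # observed
y_hat = [3.0, 4.1, 5.0, 6.2, 7.0]   # fitted by some regression model
n, p  = len(y), 2                    # p = number of fitted parameters

ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
y_mean = sum(y) / n
ss_tot = sum((a - y_mean) ** 2 for a in y)

rmse = math.sqrt(ss_res / n)        # root mean square error
s    = math.sqrt(ss_res / (n - p))  # standard error of estimate (df-corrected)
r2   = 1 - ss_res / ss_tot          # fraction of variance explained
```

Because of the degrees-of-freedom correction, s is always at least as large as rmse, and R² close to 1 indicates most of the variance in y is explained.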

This is, then, the regression sum of squares due to the first-order terms of Eq. (69). Then, we calculate the regression sum of squares using the complete second-order model of Eq. (69). The difference between these two sums of squares is the extra regression sum of squares due to the second-order terms. The residual sum of squares is calculated as before using the second-order model of Eq. (69); the lack-of-fit and pure-error sums of squares are thus the same as in Table IV. The ratio contained in Eq. (68) still tests the adequacy of Eq. (69). Since the ratio of lack-of-fit to pure-error mean squares in Table VII is smaller than the F statistic, there is no evidence of lack of fit; hence, the residual mean square can be considered to be an estimate of the experimental error variance. The ratio... [Pg.135]

F = MSLF/MSPE, based on the ratio of the mean square for lack of fit (MSLF) over the mean square for pure error (MSPE) (31). F follows the F distribution with (r-2) and (N-r) degrees of freedom. A value of F ... regression equation. Since the data were manipulated by transforming the amount values to obtain linearity, i.e., to achieve the smallest lack-of-fit F statistic, the significance level of this test is not reliable. [Pg.147]
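The lack-of-fit decomposition requires replicate measurements at each x level: pure error is the scatter within replicates, and lack of fit is whatever residual scatter remains beyond it. A sketch with invented data, r = 3 levels and N = 6 observations:

```python
# Lack-of-fit F test sketch (illustrative data): with replicates at each
# x level, split the residual SS of a straight-line fit into pure error
# (within-replicate scatter, N - r df) and lack of fit (r - 2 df).

levels = {1.0: [2.1, 1.9], 2.0: [4.6, 4.4], 3.0: [6.1, 5.9]}

# Fit y = a + b*x on all N points.
xs = [x for x, ys_i in levels.items() for _ in ys_i]
ys = [y for ys_i in levels.values() for y in ys_i]
n = len(xs)
xm, ym = sum(xs) / n, sum(ys) / n
b = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
     / sum((x - xm) ** 2 for x in xs))
a = ym - b * xm

ss_pe = sum(sum((y - sum(ys_i) / len(ys_i)) ** 2 for y in ys_i)
            for ys_i in levels.values())                       # pure error
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))   # residual
ss_lf = ss_res - ss_pe                                         # lack of fit

r = len(levels)
ms_lf = ss_lf / (r - 2)
ms_pe = ss_pe / (n - r)
F = ms_lf / ms_pe
```

Here the level means bow away from the fitted line, so F comes out well above 1; comparing it with the tabulated F for (r − 2, N − r) degrees of freedom decides whether the straight-line model is adequate.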

Another figure of merit for the fit of a linear regression model is the root mean square error of estimate (RMSEE), defined as ... [Pg.361]

Different baseline correction methods vary with respect to both the properties of the baseline component d and the means of determining the constant k. One of the simpler options, baseline offset correction, uses a flat-line baseline component (d = vector of 1s), where k can be simply assigned to a single intensity of the spectrum x at a specific variable, or the mean of several intensities in the spectrum. More elaborate baseline correction schemes allow for more complex baseline components, such as linear, quadratic or user-defined functions. These schemes can also utilize different methods for determining k, such as least-squares regression. [Pg.370]
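For the flat-line case, the least-squares estimate of k over a signal-free window reduces to the mean intensity in that window. A sketch with an invented eight-point spectrum, where the first four channels are assumed signal-free:

```python
# Baseline offset correction sketch: subtract k*d from spectrum x, where
# d is a vector of ones and k is estimated by least squares over a window
# assumed to be free of signal. Illustrative intensities only.

x = [0.50, 0.51, 0.49, 0.50, 2.10, 3.40, 2.05, 0.50]   # spectrum
baseline_window = x[:4]                                 # assumed signal-free

d = [1.0] * len(x)
# Least-squares k for x ~ k*d over the window:
# k = sum(x_i * d_i) / sum(d_i^2), which for d = 1s is just the mean.
k = sum(baseline_window) / len(baseline_window)
corrected = [xi - k * di for xi, di in zip(x, d)]
```

A linear or quadratic baseline would replace d with [1, 2, 3, ...] or its square and fit the corresponding coefficients by the same least-squares machinery.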

Model statistics include R², adjusted R² and root mean squared error. Parameter statistics are the estimated regression coefficients and associated statistics. [Pg.315]

Acceptable models have R² and adjusted R² values near 1 and a root mean squared error that is comparable to the known errors. Only statistically significant regression coefficients are included in the model. [Pg.315]

The three saturated long-chain tert-butyl peresters are members of a homologous series, and as such, the weighted least-squares regression analysis of the enthalpies of formation vs. number of carbons yields a methylene increment of −26.7 kJ mol⁻¹, a typical value for liquids. The methylene increment for the tert-butyl esters of the C8, C10, C12 and C14 acids is −28.0 kJ mol⁻¹. The closeness of these two values ensures that the enthalpies of formal reaction 16 will be nearly constant. For the three pairs from Table 3, the value is −70.3 ± 8.1 kJ mol⁻¹. The standard deviation from the mean is quite large because the arithmetic difference for the C12 ester and perester, −79.5 kJ mol⁻¹, is quite a bit more negative than the differences for the C10 and C14 pairs, −64.4 and −66.9 kJ mol⁻¹, respectively. Unfortunately, the acids and esters are in different phases and so we are reluctant to attempt any comparison between them, such as a formal hydrolysis reaction or disproportionation with hydrogen peroxide. [Pg.160]

This is similar to a least-squares regression, where the mean error is zero but the sum of squared errors is not. We will first deal with the x-component of our convective transport terms ... [Pg.100]

Since the stability data from individual lots are pooled, these data are examined for validity by the F test. The mean square of the regression coefficient (slope) is divided by the mean square of the deviation within lots, and similarly, the adjusted mean (y intercept) is divided by the common mean square to give the respective F ratios. The latter values then are compared with the critical 5% F values. When the calculated F values are smaller than the critical F values, the data may be combined and the pooled data analyzed. [Pg.691]
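One part of that poolability check, testing whether the slopes of the individual lots differ, can be sketched as follows. This is a simplified illustration with invented assay data, not the full regulatory procedure: F compares the extra residual sum of squares incurred by forcing a common slope (L − 1 df) against the within-lot residual mean square from separate fits:

```python
# Slope-poolability sketch (illustrative data, simplified procedure):
# fit each lot separately, then refit with one common slope and separate
# intercepts; the increase in residual SS, scaled by df, gives F.

lots = [
    ([0, 3, 6, 9, 12], [100.0, 99.2, 98.1, 97.3, 96.0]),  # lot 1: month, % assay
    ([0, 3, 6, 9, 12], [100.3, 99.4, 98.5, 97.4, 96.6]),  # lot 2
]

def moments(x, y):
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxx = sum((a - xm) ** 2 for a in x)
    sxy = sum((a - xm) * (b - ym) for a, b in zip(x, y))
    syy = sum((b - ym) ** 2 for b in y)
    return sxx, sxy, syy

L = len(lots)
ss_sep, df_sep = 0.0, 0          # residual SS and df, separate slope per lot
sxx_t, sxy_t = 0.0, 0.0
for x, y in lots:
    sxx, sxy, syy = moments(x, y)
    ss_sep += syy - sxy ** 2 / sxx
    df_sep += len(x) - 2
    sxx_t += sxx
    sxy_t += sxy

b_common = sxy_t / sxx_t         # pooled slope estimate
ss_common = 0.0                  # residual SS with one shared slope
for x, y in lots:
    sxx, sxy, syy = moments(x, y)
    ss_common += syy - 2 * b_common * sxy + b_common ** 2 * sxx

F = ((ss_common - ss_sep) / (L - 1)) / (ss_sep / df_sep)
```

When this F is smaller than the critical 5% value for (L − 1, Σ(nᵢ − 2)) degrees of freedom, the slopes are consistent and the lots may be pooled, as the passage describes.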





