Big Chemical Encyclopedia



Significance of the Regression Coefficients

A regression coefficient is statistically significant if its absolute value exceeds the half-width of its confidence interval, i.e. if the confidence interval does not include zero. [Pg.374]

The confidence interval is defined in accordance with Sects. 1.3.2 and 2.1.4. [Pg.374]

With 1−α confidence, the calculated value of a regression coefficient bj differs from its true value βj by no more than the error in estimating the coefficient, Δbj. Thus the confidence interval of a regression coefficient has the limits bj ± Δbj. [Pg.374]
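The rule stated above can be sketched in code, assuming the half-width is Δbj = t·S_bj, where t is the critical value for the chosen confidence level and S_bj is the coefficient's standard error. The function names and the numbers used below are illustrative, not from the source.

```python
def coefficient_interval(b_j, s_bj, t_crit):
    """Confidence interval b_j ± Δb_j, with Δb_j = t_crit * s_bj.

    b_j:    estimated regression coefficient
    s_bj:   standard error of the coefficient
    t_crit: critical t value for the chosen confidence level (assumed given)
    """
    delta = t_crit * s_bj              # half-width of the interval, Δb_j
    return b_j - delta, b_j + delta

def is_significant(b_j, s_bj, t_crit):
    """Significant when |b_j| exceeds the half-width, i.e. 0 lies outside the CI."""
    return abs(b_j) > t_crit * s_bj

# Illustrative numbers only:
lo, hi = coefficient_interval(0.77, 0.27, 1.98)   # CI ≈ (0.235, 1.305)
```

Because the interval is symmetric about bj, "|bj| larger than the half-width" and "zero outside the interval" are the same condition.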

N is the number of trials taken into account when calculating the regression coefficients. [Pg.374]

The value S_y is determined in accordance with expressions (2.131), (2.133) and (2.135). For linear models, the variance (error) in determining the regression coefficients, S²_bj, is [Pg.374]


The problem of the significance of the regression coefficients can be examined only if the statistical data satisfy the following conditions [5.19]: [Pg.355]

In statistics, the reproducibility variance is a random variable having a number of degrees of freedom equal to ν = N(m − 1). Without the reproducibility variance or an equivalent variance, we cannot estimate the significance of the regression coefficients. It is important to remember that, for the calculation of this variance, we need new statistical data or, more precisely, statistical data not used in the procedures of identifying the coefficients. This requirement explains the division of the statistical data of Fig. 5.3 into two parts: one substantial part for the identification of the coefficients and one small part for the reproducibility variance calculation. [Pg.356]
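A minimal sketch of the pooled reproducibility variance described above, assuming N trial points each replicated m times; the helper name is mine, not from the source.

```python
def reproducibility_variance(replicates):
    """Pooled within-trial variance with nu = N*(m - 1) degrees of freedom.

    replicates: list of N lists, each holding the m replicate responses
    measured at one trial point (data NOT used to fit the coefficients).
    """
    N = len(replicates)
    m = len(replicates[0])

    def sample_var(ys):                      # within-trial sample variance
        mean = sum(ys) / len(ys)
        return sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)

    s2_rep = sum(sample_var(ys) for ys in replicates) / N   # pooled estimate
    nu = N * (m - 1)                         # degrees of freedom
    return s2_rep, nu
```

Each trial contributes m − 1 degrees of freedom to the pooled estimate, which is why ν = N(m − 1).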

Table 5.18 contains the calculations concerning the significance of the regression coefficients from relation (5.110). However, with respect to Table 5.6, the rejection condition of the hypothesis has been changed so that we can compare the computed t value (tj) with the t value corresponding to the accepted significance level (tα/2). [Pg.378]

An alternative method is backward elimination. This technique starts with a full equation containing every measured variate and deletes one variable at each step. Variables are dropped from the equation on the basis of testing the significance of the regression coefficients, i.e., asking for each variable: is the coefficient zero? The corresponding F-statistic is referred to as the computed F-to-remove. The procedure terminates when all variables remaining in the model are considered significant. [Pg.186]
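The backward-elimination loop can be sketched as follows. For brevity the refit after each deletion is omitted and the F-to-remove values are assumed precomputed; in a real run they must be recomputed from a refitted model after every deletion. All names here are illustrative.

```python
def backward_eliminate(variables, f_to_remove, f_crit):
    """Drop the weakest variable while its F-to-remove is below f_crit.

    variables:   names of the variates in the full starting equation
    f_to_remove: dict variable -> computed F-to-remove (assumed given;
                 in practice recomputed after each deletion)
    f_crit:      critical F value at the chosen significance level
    """
    model = list(variables)
    while model:
        # find the least significant remaining variable
        weakest = min(model, key=lambda v: f_to_remove[v])
        if f_to_remove[weakest] >= f_crit:
            break                      # every remaining term is significant
        model.remove(weakest)          # delete one variable per step
    return model
```

The loop ends exactly when all remaining terms pass the F-to-remove test, matching the termination rule in the excerpt.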

The results of the analysis of deviance and the likelihood ratio test of the significance of the regression coefficients were also positive for this model. Thus we can say that our model, created by multidimensional logistic regression, is suitable for predicting the morbidity of patients who undergo open colon surgery in the Faculty Hospital Ostrava. [Pg.1866]

Brüggemann et al. presented a more elaborate test, developed by Mark and Workman, and exemplified it with an ICP (inductively coupled plasma spectrometry) procedure. In essence, it develops a polynomial model and then tests the statistical significance of the regression coefficients. Its main advantage was claimed to be the avoidance of correlations between the various powers of the concentrations. [Pg.94]

Finally, the model can be validated in a next step. Usually an ANOVA is applied to evaluate the data set and determine the significance of the terms included in the model. The model can then be recalculated with the nonsignificant terms deleted. Tests for the significance of the regression coefficients, a lack-of-fit test, and a residual analysis are also often performed. [Pg.193]

If possible, one should evaluate the significance of the regression coefficients, as explained in Section 2.1.2, and eliminate from the model those considered nonsignificant. A new multiple regression is then performed with the simplified model. It is also preferable to validate the fit of the model and its prediction accuracy. The former is usually done by examining the residuals between the experimental and predicted responses, because this does not require the replicate determinations needed for the ANOVA procedure. Validating the prediction accuracy requires carrying out additional experiments, which are then predicted with the model. The selection of the optimal conditions is often, but not necessarily, done with the aid of a visual representation of the response surface, describing y as a function of pairs of variables. The final decision is often a multicriterion problem where multicriterion decision-making techniques are applied (Section 5). [Pg.978]
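The two validation checks mentioned above (fit via residuals, prediction accuracy via additional experiments) can be sketched like this; the helper names and the use of RMSE as the accuracy measure are my assumptions, not prescribed by the source.

```python
def fit_residuals(y_exp, y_pred):
    """Residuals between experimental and model-predicted responses."""
    return [ye - yp for ye, yp in zip(y_exp, y_pred)]

def prediction_rmse(y_new_exp, y_new_pred):
    """Root-mean-square error on additional validation experiments
    (experiments not used in fitting, predicted with the model)."""
    n = len(y_new_exp)
    return (sum((a - b) ** 2 for a, b in zip(y_new_exp, y_new_pred)) / n) ** 0.5
```

Residuals showing no trend suggest an adequate fit; a small RMSE on the new experiments supports the model's prediction accuracy.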

A number of points should be noted concerning the statistics displayed in the table. First, if the researcher wishes to rank the variables in order of their importance within the equation, absolute values of the beta values are the appropriate indicators of rank (7, p. 284). Second, the t-values of the regression coefficients give us estimates of the statistical significance of the independent variables used. Third, the R-square, or coefficient of determination, is an estimate of the percent of variation in the dependent variable (the functional property) explained by the corresponding regression equation. [Pg.309]

The final statistical values reported for Equation 7.3 are the standard errors of the regression coefficients. These allow us to assess the significance of the individual terms by computing the t statistic: the regression coefficient divided by its standard error ... [Pg.173]

The correlation coefficient r is a measure of the quality of fit of the model; its square, r², represents the fraction of the variance in the data explained by the model. In an ideal situation one would want the correlation coefficient to equal or approach 1, but in reality, because of the complexity of biological data, any value above 0.90 is adequate. The standard deviation s is an absolute measure of the quality of fit. Ideally s should approach zero, but in experimental situations this is not so. It should be small, but it cannot be lower than the standard deviation of the experimental data. The magnitude of s may be attributed to experimental error in the data as well as imperfections in the biological model. A larger data set and a smaller number of variables generally lead to lower values of s. The F value is often used as a measure of the level of statistical significance of the regression model. It is defined as denoted in Equation 1.27. [Pg.10]
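Equation 1.27 is not reproduced in this excerpt. Assuming the standard form of the overall regression F statistic for k variables and n data points, F = (r²/k) / ((1 − r²)/(n − k − 1)), a sketch:

```python
def overall_f(r, n, k):
    """F value for overall regression significance.

    Assumed standard form; the source's Equation 1.27 is not shown here.
    r: correlation coefficient, n: number of data points, k: number of variables.
    """
    r2 = r * r
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))
```

This makes the qualitative behavior in the text concrete: F grows as r approaches 1 and as the data set grows relative to the number of variables.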

To test the significance of each coefficient, we obtain the value of t as the ratio of the regression coefficient to its standard error, and look up this value of t with degrees of freedom equal to those of the residual variance. For one of the coefficients, for example, t = 0.7685143/0.2709 = 2.837 with 135 degrees of freedom. Reference to Table I in the Appendix shows that this corresponds to a significance level between 1% and 0.1%. [Pg.76]
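The worked calculation above, restated in code using the values quoted in the text:

```python
b, se, df = 0.7685143, 0.2709, 135     # values quoted in the excerpt
t = b / se                             # t statistic = coefficient / standard error
# t ≈ 2.837; for 135 degrees of freedom this lies between the two-sided
# critical t values for the 1% and 0.1% significance levels, as the
# text's Table I lookup indicates.
```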

The environmental rain data acquired at the site are compiled in Table 2 for each exposure period. Simple linear regression analysis was performed using each of the factors as the independent variable and the difference in corrosion (Δloss in Table 1) as the dependent variable. All of the regression coefficients were significant. The coefficients for the amount of rainfall were considered to be the most important, since delivery of SO, ... [Pg.197]

Note. In spite of its very common use in regression analysis, the F-ratio test is a weak test for multivariate models. In effect, the null hypothesis of the F test states that all of the regression coefficients are zero, while the alternative hypothesis states that at least one regression coefficient differs from zero. However, in modern QSAR, multivariate models are usually produced in which all the variables should be relevant. Even very high F values mean only that at least one variable is significant, not that the whole model is significant. Validation tools are more suitable for giving information about the relevance of all the variables in a model. [Pg.642]

The T test value is a measure of the regression coefficient's significance, i.e., does the coefficient have a real meaning or should it be zero? The larger the absolute value of T, the greater the probability that the coefficient is real and should be used for predictions. A T test value of 1.7 or higher indicates a high probability that the coefficient is real and that the variable has an important effect on the response. [Pg.177]

Thus, there is little redundant or irrelevant information in X with respect to Y. In other words, the FTIR measurement has low selectivity, and any attempt to reduce the number of X-variables could be problematic. The regression coefficient spectrum (top right) is often useful in understanding the chemistry of the application. Large values imply wavelengths at which there is significant absorption related to the constituent of interest. The overall shape of the regression coefficient spectrum is rather complicated. [Pg.62]




