Big Chemical Encyclopedia


Regression residual analysis

Regression residual analysis and examination of families with the greatest percentage of genera collected clearly demonstrate that woody plant families have generally been sampled more thoroughly than herbaceous groups. However, examination of poorly sampled families reveals a number of patterns: 103 families had not been sampled at... [Pg.42]

The residuals (log 1/C observed - log 1/C predicted) for the entire dataset were examined in detail for each key regression equation. Analysis of major residuals allowed not only the recognition of true "outliers" but also provided insights for improving the model, i.e., sharpening hypotheses and concepts. [Pg.326]

Note that the outlier test assumes the chosen form of the regression function is correct. First we should look at the plot from the residual analysis, since potential outliers can be recognised there. We then calculate the regression both with and without the potential outlier, and apply either the F-test or the t-test... [Pg.191]
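The with/without comparison described above can be sketched as follows. This is a minimal stdlib-only illustration, not from the source: the calibration data are invented, and comparing the two residual variances as an F ratio is one common variant of the test (the critical F value would still be taken from tables for the chosen significance level).

```python
def fit_line(x, y):
    """Least-squares straight line; returns slope, intercept, residual sum of squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss = sum((yi - slope * xi - intercept) ** 2 for xi, yi in zip(x, y))
    return slope, intercept, ss

# Hypothetical calibration data; the point at index 4 looks suspicious.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 4.0, 6.2, 7.9, 13.5, 12.1]

_, _, ss_all = fit_line(x, y)
x_red = x[:4] + x[5:]          # regression repeated without the suspect point
y_red = y[:4] + y[5:]
_, _, ss_red = fit_line(x_red, y_red)

# Ratio of residual variances with and without the suspect point;
# a large value (versus the tabulated F) flags the point as an outlier.
F = (ss_all / (len(x) - 2)) / (ss_red / (len(x_red) - 2))
print(round(F, 2))
```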

Figure 12. Graphs of observed versus diatom-inferred total phosphorus concentrations (TP) and of observed minus diatom-inferred TP (i.e., a residual analysis), based on weighted-averaging regression and calibration models with classical deshrinking. The large circles indicate two coincident values. This analysis is discussed in detail in reference 46.
However, the error introduced is in most cases negligible when compared to the analytical errors in residue analysis. To deal with this problem, inverse regression and regression with errors on both axes have been proposed in the literature [9]. [Pg.138]

Residue Analysis Methods. Among the methods routinely used for residue analysis are determination of weight loss; extraction of the residue and quantitation of the extract by either gas chromatography or liquid scintillation counting; and measurement of the increase in void space within a fiber over time (the meniscus regression method). [Pg.145]

The issue of goodness-of-fit with nonlinear regression is not straightforward. Numerous methods can be used to explore the goodness-of-fit of the model to the data (e.g., residual analysis, variance analysis, and chi-squared analysis). It is always a good idea to inspect the plot of the predicted [y(xi)] versus observed yi values to watch for systematic deviations. Additionally, some analytical measure of goodness-of-fit should also be employed. [Pg.348]
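The analytical measures mentioned above can be computed directly from the residuals. A small sketch, with observed values, predictions from some fitted nonlinear model, and the measurement standard deviation all invented for illustration:

```python
# Hypothetical observed data and model predictions (values invented).
y_obs = [1.0, 1.9, 3.2, 3.8, 5.1]
y_pred = [1.1, 2.0, 3.0, 4.0, 5.0]
sigma = 0.2  # assumed measurement standard deviation

resid = [o - p for o, p in zip(y_obs, y_pred)]
ss_res = sum(e ** 2 for e in resid)
ybar = sum(y_obs) / len(y_obs)
ss_tot = sum((o - ybar) ** 2 for o in y_obs)

r_squared = 1.0 - ss_res / ss_tot            # fraction of variance explained
chi2 = sum((e / sigma) ** 2 for e in resid)  # chi-squared statistic
print(round(r_squared, 4), round(chi2, 2))
```

A chi-squared value far above the residual degrees of freedom, or a systematic sign pattern in `resid`, would both point to model inadequacy even when R-squared looks acceptable.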

Regression analysis includes not only the estimation of model regression parameters, but also the calculation of goodness-of-fit and goodness-of-prediction statistics, regression diagnostics, residual analysis, and influence analysis [Atkinson, 1985]. [Pg.62]

Regression coefficients: partial, 172; standardized, 168. Regression: linear, 156; multivariate, 171; polynomial, 163; through origin, 162. Residuals, 13. Residuals analysis, 159. Ridge regression, 203. RMS noise, 31. Roots, characteristic, 73... [Pg.216]

Residual analysis is of vital importance in any regression analysis. A residual analysis entails the careful evaluation of the differences between the observed values and the predicted values of the dependent variable after fitting a regression model to the data. Residual plots are used, inter alia, to identify any undetected tendencies in the data, as well as outliers and fluctuations in the variance of the dependent variable (21). However, interpretation of such residual plots requires great care on account of the degree of subjectivity involved. [Pg.389]

One way to identify important predictor variables in a multiple regression setting is to do all possible regressions and choose the model based on some criterion, usually the coefficient of determination, the adjusted coefficient of determination, or Mallows Cp. With this approach, a few candidate models are identified and then further explored through residual analysis, collinearity diagnostics, leverage analysis, etc. While useful, this method is rarely seen in the literature and cannot be advocated, because it dumbs down the modeling process: it relies too much on blind use of the computer to solve a problem that should be left to the modeler. [Pg.64]
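The all-possible-regressions step can be sketched with stdlib Python only. This is an illustration, not an endorsement of the approach: the data are synthetic (y depends on the first two of three candidate predictors), the fits use the normal equations solved by a small Gaussian-elimination routine, and the criterion shown is the adjusted coefficient of determination.

```python
from itertools import combinations
import random

random.seed(0)

# Synthetic data: y depends on predictors 0 and 1 but not 2.
n = 60
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
y = [2.0 * r[0] - 1.0 * r[1] + random.gauss(0, 0.5) for r in X]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    m = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, m):
            f = M[r][i] / M[i][i]
            for c in range(i, m + 1):
                M[r][c] -= f * M[i][c]
    beta = [0.0] * m
    for i in range(m - 1, -1, -1):
        beta[i] = (M[i][m] - sum(M[i][c] * beta[c] for c in range(i + 1, m))) / M[i][i]
    return beta

def adjusted_r2(cols):
    """Fit y on the chosen predictors plus intercept; return adjusted R^2."""
    rows = [[1.0] + [r[c] for c in cols] for r in X]
    p = len(cols)
    AtA = [[sum(rows[k][i] * rows[k][j] for k in range(n)) for j in range(p + 1)]
           for i in range(p + 1)]
    Aty = [sum(rows[k][i] * y[k] for k in range(n)) for i in range(p + 1)]
    beta = solve(AtA, Aty)
    fitted = [sum(b * v for b, v in zip(beta, row)) for row in rows]
    ybar = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - (ss_res / (n - p - 1)) / (ss_tot / (n - 1))

# All possible regressions over the three candidate predictors.
scores = {cols: adjusted_r2(cols)
          for k in (1, 2, 3) for cols in combinations(range(3), k)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The winning subset contains both true predictors; whether the irrelevant third predictor sneaks in depends on how strongly the criterion penalizes model size, which is exactly the judgment the text argues should not be left to the machine.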

A comparison of the two-compartment first-order absorption model fit to measured plasma concentration data by the traditional method of residuals and by nonlinear regression analysis is provided in Figure 10.99. The figure illustrates that both methods offer a very reasonable fit to the measured data, and that there is not a large difference between the fits provided by the two techniques. Close examination does reveal, however, that nonlinear regression provides a more uniform fit across all the data points. This is likely because nonlinear regression fits all the points simultaneously, whereas the method of residuals fits the data piecewise, with different data points used for different regions of the curve. [Pg.271]

Mean square error. A term with two meanings (1) The residual sum of squares from a regression or analysis of variance divided by the degrees of freedom which belong to it. (These degrees of freedom are generally equal to the number of observations minus the number of parameters fitted.) (2) The sum of the square of the bias of an estimator and its variance. This is a quantity that can be used to compare estimators. The smaller the mean square error is, the more useful the estimator. [Pg.467]
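Meaning (2) above can be checked numerically. A small stdlib-only sketch (the normal population, sample size, and the 0.9 shrinkage factor are all invented for illustration) comparing two estimators of a mean by bias squared plus variance:

```python
import random
random.seed(2)

# Estimate a normal mean (true value 5.0, sd 2.0) from samples of size 20.
true_mean, n, reps = 5.0, 20, 5000

def sample_mean():
    return sum(random.gauss(true_mean, 2.0) for _ in range(n)) / n

est_a = [sample_mean() for _ in range(reps)]   # sample mean: unbiased
est_b = [0.9 * m for m in est_a]               # shrunken mean: biased, lower variance

def mse(est, truth):
    """Mean square error of an estimator = bias^2 + variance (meaning 2)."""
    m = sum(est) / len(est)
    bias = m - truth
    var = sum((e - m) ** 2 for e in est) / len(est)
    return bias ** 2 + var

mse_a, mse_b = mse(est_a, true_mean), mse(est_b, true_mean)
print(round(mse_a, 3), round(mse_b, 3))
```

Here the unbiased sample mean wins; with a smaller true mean or a larger variance, a shrunken estimator can have the smaller mean square error, which is why the quantity is useful for comparing estimators.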

For example (Figure 1.3), the predicted regression values, yi, are linear, but the actual yi values are curvilinear. A residual analysis would quickly show this. The ei values are initially negative, then positive in the middle range of xi values, and then negative again for the upper xi values (Figure 1.4). If the model fit the data, there would be no discernible pattern about 0, just random ei values. [Pg.12]
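The negative/positive/negative residual pattern described above is easy to reproduce. A minimal sketch with invented data: a straight line fitted to a concave-down quadratic.

```python
# Straight-line fit to curvilinear data (data invented for illustration).
x = [i * 0.5 for i in range(21)]             # 0.0 .. 10.0
y = [25.0 - (xi - 5.0) ** 2 for xi in x]     # concave-down quadratic

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx
e = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]

# Sign pattern matches the text: negative at the low end of the x range,
# positive in the middle, negative again at the high end.
print(e[0] < 0, e[10] > 0, e[-1] < 0)
```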

Up to this point, we have looked mainly at residual plots, such as ei vs. xi and ei vs. the fitted yi values, to help evaluate how well the regression model fits the data. Much can be done with this type of eyeball approach; in fact, the present author uses this procedure in at least 90% of his work. But there are times when this approach is not adequate and a more quantitative procedure of residual analysis is required. [Pg.147]

In Chapter 3, it was noted that residual analysis is very useful for exploring the effects of outliers and nonnormal data distributions, for relating these to the adequacy of the regression model, and for identifying and correcting serially correlated data. At the end of the chapter, formulas for the process of... [Pg.307]

By means of residual analysis, both the assumptions of the linear regression and deviations from the model can be checked. [Pg.227]

Figure 6.4 Residual analysis in linear regression: (a) time-dependent observations; (b) heteroscedasticity; (c) linear effects; (d) quadratic effect.
Estimates of linear regression coefficients, standard errors of coefficients, significance of coefficients, blocking of variables, residual calculation and residual analysis, standard ANOVA, weighted least-squares analysis... [Pg.61]

As with univariate regression, an analysis of the residuals is important in evaluating the model. The residuals should be randomly and normally distributed. Figure 8.11 shows a plot of the residuals against the fitted values for Cl-; the residuals do not show any particular pattern. Figure 8.12 plots the predicted values against the measured values. The points are reasonably close to a straight line with no obvious outliers. [Pg.230]

In order to conduct ordinary least squares regression, some assumptions have to be met, which address linearity, normality of the data distribution, constant variance of the error terms, independence of the error terms, and normality of the error-term distribution (Cohen 2003; Hair et al. 1998). Whereas the former two can be assessed before performing the actual regression analysis, the latter three can only be evaluated ex post. I will thus anticipate some of the regression results to check whether the assumptions with respect to the regression residuals are met. [Pg.137]
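Two of the ex-post residual checks mentioned above can be sketched numerically. This is an illustration with simulated, well-behaved data (the regression coefficients and error scale are invented); the independence check uses the Durbin-Watson statistic, which is near 2 for uncorrelated residuals.

```python
import random
random.seed(1)

# Simulated regression with independent, homoscedastic errors.
x = [random.uniform(0, 10) for _ in range(40)]
y = [3.0 + 1.5 * xi + random.gauss(0, 1.0) for xi in x]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx
e = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]

# Ex-post checks on the regression residuals:
mean_resid = sum(e) / n   # ~0 by construction when an intercept is fitted
dw = sum((e[i] - e[i - 1]) ** 2 for i in range(1, n)) / sum(ei ** 2 for ei in e)
print(round(mean_resid, 10), round(dw, 2))   # Durbin-Watson near 2 => independence
```

In a real analysis the residuals would also be plotted against the fitted values (for constant variance) and on a normal probability plot (for normality); the numeric checks complement, not replace, those plots.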

As an alternative to the approximation test according to Mandel, residual analysis [29] can be used for linearity testing. The residuals di are the vertical distances of the measured values from the regression curve. [Pg.952]


