Big Chemical Encyclopedia


Linear regression with errors

Weighted Linear Regression with Errors in Both x and y... [Pg.127]

Algorithms for performing a linear regression with errors in both x and y are discussed in... [Pg.134]
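One widely used algorithm for the errors-in-both-variables case is Deming regression, which assumes the ratio of the two error variances is known. The sketch below is illustrative, not taken from the referenced text; the function name, the `delta` parameter, and the noise-free data are mine.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Sketch of Deming regression: straight-line fit with errors in both x and y.

    delta is the (assumed known) ratio var(error in y) / var(error in x);
    delta = 1 gives orthogonal regression."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    # Closed-form slope minimizing the weighted perpendicular residuals
    b = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    a = y.mean() - b * x.mean()          # intercept through the centroid
    return a, b

# Noise-free check: points on y = 1 + 2x should be recovered exactly
a, b = deming_fit([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

When delta is set to the true variance ratio, the estimated slope is consistent, whereas ordinary least squares is biased toward zero when x carries error.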

Jacquez, J. A.; Mather, F. J.; Crawford, C. R. "Linear Regression with Non-Constant, Unknown Error Variances," Biometrics 1968, 24, 607. [Pg.80]

Repeat the input identification experiment with the model order MD = 2. Compare the linear regression residual errors for the two cases. Select the "best" model order on the basis of the Akaike Information Criterion (see Section 3.10.3 and ref. 27). ... [Pg.310]

Fig. 1 shows such a plot and illustrates that the error in the logarithmic value increases with time, i.e. as the concentration approaches its limiting value at t = ∞, and is asymmetrically distributed. As the logarithmic values at long times are less reliable than those determined at short times, it is not formally correct to use a simple linear regression analysis to fit such a logarithmic plot and to determine kobs. Rather, a non-linear regression with the data weighted to account for the increase in error with time and its asymmetric distribution should be employed. Nevertheless, it is often adequate to use a simple linear regression provided that only data collected over, say, the first 90% of the reaction (i.e. 3 half-lives) are used (see Fig. [Pg.116]
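The pragmatic approach described above can be sketched numerically. The rate constant, amplitudes, and time grid below are invented for illustration; the point is simply that the semi-logarithmic fit is restricted to the first three half-lives.

```python
import numpy as np

# Hypothetical first-order kinetics: A(t) = A_inf + (A0 - A_inf) * exp(-k*t)
k_true, A0, A_inf = 0.05, 1.0, 0.1      # assumed values, for illustration only
t_half = np.log(2) / k_true
t = np.linspace(0.0, 5 * t_half, 60)
A = A_inf + (A0 - A_inf) * np.exp(-k_true * t)

# Simple linear regression of ln(A - A_inf) vs t, restricted to the data
# from the first ~3 half-lives (about 90% of the reaction)
mask = t <= 3 * t_half
slope, intercept = np.polyfit(t[mask], np.log(A[mask] - A_inf), 1)
k_obs = -slope
```

With noisy data the late points would dominate the error in the log transform, which is exactly why they are excluded here rather than down-weighted.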

Analysis by linear regression is the fit of given measurements by a straight line. The programs LISREL and SEPATH (see [4]-[7]) provide linear regression with some causal input. The linear or linearized response of a system to a small perturbation depends on the location of the system in the space of its variables. To obtain a linear response in a possibly nonlinear system requires small perturbations, and hence to be useful the errors in the measurements must be small—smaller than the perturbations. The issues of importance here are ... [Pg.5]

To show an application of the method of linear least squares to data collected in a laboratory, a procedure was developed in which a beaker containing water was heated in a domestic microwave oven and the water temperature was measured as a function of time and power. Students obtained a regression line for each power level screened and determined the intercept and slope of the line. They then compared the values of initial temperature and power input obtained from the linear regression with those set experimentally, outlining sources of error. [Pg.169]

Statistical testing of model adequacy and significance of parameter estimates is a very important part of kinetic modelling. Only those models with a positive evaluation in statistical analysis should be applied in reactor scale-up. The statistical analysis presented below is restricted to linear regression and normal or Gaussian distribution of experimental errors. If the experimental error has a zero mean, constant variance and is independently distributed, its variance can be evaluated by dividing SSres by the number of degrees of freedom, i.e. [Pg.545]

Two models of practical interest using quantum chemical parameters were developed by Clark et al. [26, 27]. Both studies were based on 1085 molecules and 36 descriptors calculated with the AM1 method following structure optimization and electron density calculation. An initial set of descriptors was selected with a multiple linear regression model and further optimized by trial-and-error variation. The second study reported a standard error of 0.56 for the 1085 compounds and also estimated the reliability of neural network prediction by analyzing the standard deviation of the error for an ensemble of 11 networks trained on different randomly selected subsets of the initial training set [27]. [Pg.385]

The data are also represented in Fig. 39.5a and have been replotted semi-logarithmically in Fig. 39.5b. Least squares linear regression of log Cp with respect to time t has been performed on the first nine data points. The last three points have been discarded as the corresponding concentration values are assumed to be close to the quantitation limit of the detection system and, hence, are endowed with a large relative error. We obtained the values of 1.701 and 0.005117 for the intercept log B and slope Sp, respectively. From these we derive the following pharmacokinetic quantities ... [Pg.460]
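The back-transformation of the quoted intercept and slope into pharmacokinetic quantities can be sketched as follows. This assumes the logarithms are base 10 and that the fitted slope is −Sp (a decaying concentration); the variable names are mine, not from the text.

```python
import numpy as np

# Values quoted in the text: intercept log10(B) and slope magnitude S_p
log_B, S_p = 1.701, 0.005117

B = 10.0 ** log_B            # extrapolated plasma concentration at t = 0
k_el = S_p * np.log(10.0)    # elimination rate constant on the natural-log scale
t_half = np.log10(2.0) / S_p # elimination half-life from the log10 slope
```

The same three quantities follow from any semi-logarithmic fit of first-order elimination data, whatever the logarithm base, provided the base is carried through consistently.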

Figure 19. Variation in dimension of the X-fold cation site in alkali-feldspar for 2+ cations, obtained by fitting the experimental data of Icenhower and London (1996) for DCa, DSr and DBa at 0.2 GPa and 650-750°C. In performing the fits the elastic modulus was set at 91 GPa for all runs. Error bars are 1 s.d. The positive slope is consistent with measured changes in metal-oxygen bond length from albite to orthoclase (cf. Fig. 6). The solid line shows the best-fit linear regression given in Equation (35).
Y = Xβa. If the number of input variables is greater than the number of observations, there is an infinite number of exact solutions for the least-squares or linear regression coefficients βa. If the numbers of variables and observations are equal, there is a unique solution for βa, provided that X has full rank. If the number of variables is less than the number of measurements, which is usually the case with process data, there is no exact solution for βa (Geladi and Kowalski, 1986), but βa can be estimated by minimizing the least-squares error between the actual and predicted outputs. The solution to the least-squares approximation problem is given by the pseudoinverse as... [Pg.35]
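For the usual overdetermined case (more measurements than variables), the pseudoinverse solution can be sketched with synthetic data; NumPy's `pinv` computes the Moore-Penrose pseudoinverse directly.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # 50 observations, 3 input variables
beta_true = np.array([1.5, -2.0, 0.5])       # invented coefficients
Y = X @ beta_true + 0.01 * rng.normal(size=50)   # outputs with small noise

# Least-squares estimate via the pseudoinverse: beta_hat = pinv(X) @ Y
beta_hat = np.linalg.pinv(X) @ Y
```

In practice `np.linalg.lstsq` is preferred numerically, but the pseudoinverse form matches the closed-form expression the passage refers to.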

When experimental data is to be fit with a mathematical model, it is necessary to allow for the fact that the data has errors. The engineer is interested in finding the parameters in the model as well as the uncertainty in their determination. In the simplest case, the model is a linear equation with only two parameters, and they are found by a least-squares minimization of the errors in fitting the data. Multiple regression is just linear least squares applied with more terms. Nonlinear regression allows the parameters of the model to enter in a nonlinear fashion. See Press et al. (1986) for a description of maximum likelihood as it applies to both linear and nonlinear least squares. [Pg.84]
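The simplest two-parameter case described above can be sketched as follows; the data are invented, and the covariance-based uncertainties assume independent, constant-variance errors.

```python
import numpy as np

# Hypothetical scattered measurements following roughly y = 1 + 2x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.9, 3.1, 4.9, 7.2, 8.8, 11.1, 12.9])

# Least-squares fit of a straight line; cov=True also returns the
# parameter covariance matrix scaled by the residual variance
coeffs, cov = np.polyfit(x, y, 1, cov=True)
slope, intercept = coeffs
slope_err, intercept_err = np.sqrt(np.diag(cov))   # 1-sigma uncertainties
```

The square roots of the covariance diagonal give the standard errors of the slope and intercept, which is the "uncertainty in their determination" the passage mentions.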







© 2024 chempedia.info