
Linear regression and

Repeat Problem 7.1 using the entire set. First do a preliminary analysis using linear regression and then make a final determination of the model parameters using nonlinear regression. [Pg.252]

Calculate linear regression and display graph points, regression line, and the upper and lower 95% confidence limits (CL) for the regression line... [Pg.352]
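
The excerpt above describes a typical calibration task. The short sketch below is not from the cited source; the x and y values and the use of NumPy/SciPy are illustrative assumptions. It shows one common way to compute the regression line together with upper and lower 95% confidence limits for it.

```python
# Minimal sketch: straight-line fit plus 95% confidence limits for the line.
# All data values are made up for illustration.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])        # hypothetical x values
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])       # hypothetical y values

n = len(x)
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

# Residual standard deviation with n - 2 degrees of freedom
s = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))

# 95% confidence band for the regression line (mean response)
t = stats.t.ppf(0.975, n - 2)
x_grid = np.linspace(x.min(), x.max(), 50)
se_line = s * np.sqrt(1.0 / n + (x_grid - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
y_line = intercept + slope * x_grid
upper = y_line + t * se_line
lower = y_line - t * se_line

print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, s = {s:.3f}")
```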

Relaxation data may be analyzed by two general methods: a two-parameter linear regression and a three-parameter nonlinear fitting procedure. The first method requires an accurate experimental determination of M0, which is both difficult and time-consuming. Furthermore, the... [Pg.142]
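
As a rough illustration of the two approaches contrasted above, the sketch below assumes a simple exponential recovery model M(t) = M0 - A*exp(-t/T1) and made-up data; the actual relaxation model and data of the cited work are not reproduced here.

```python
# Sketch (assumed model and made-up data): a two-parameter linear fit that
# needs M0 in advance, versus a three-parameter nonlinear fit that estimates
# M0 together with A and T1.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.05, 1.2, 12)                       # hypothetical delay times / s
M0_true, A_true, T1_true = 10.0, 18.0, 0.45
rng = np.random.default_rng(0)
M = M0_true - A_true * np.exp(-t / T1_true) + rng.normal(0, 0.05, t.size)

# (1) Two-parameter linear regression: requires an experimentally known M0
M0_exp = 10.0                                        # assumed measured separately
y = np.log(M0_exp - M)                               # ln(M0 - M) = ln(A) - t/T1
slope, intercept = np.polyfit(t, y, 1)
print("linear fit:    T1 =", -1.0 / slope, " A =", np.exp(intercept))

# (2) Three-parameter nonlinear fit: M0, A and T1 are all adjustable
def model(t, M0, A, T1):
    return M0 - A * np.exp(-t / T1)

popt, pcov = curve_fit(model, t, M, p0=[9.0, 15.0, 0.5])
print("nonlinear fit: M0, A, T1 =", popt)
```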

Model parameters estimated by linear regression, weighted linear regression, and unweighted non-linear regression are shown in Table B-1. [Pg.544]

Statistical testing of model adequacy and significance of parameter estimates is a very important part of kinetic modelling. Only those models with a positive evaluation in statistical analysis should be applied in reactor scale-up. The statistical analysis presented below is restricted to linear regression and normal or Gaussian distribution of experimental errors. If the experimental error has a zero mean, constant variance and is independently distributed, its variance can be evaluated by dividing SSres by the number of degrees of freedom, i.e. [Pg.545]
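
A minimal sketch of the variance estimate and significance testing described above, using made-up data rather than the kinetic models of the cited chapter: the error variance is estimated as SSres divided by the degrees of freedom, and t-tests are applied to the parameter estimates.

```python
# Sketch (made-up data): residual variance s^2 = SS_res / (n - p) and t-tests
# on the parameter estimates of a linear model y = X b + e.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x1, x2])            # intercept + two regressors
y = 1.0 + 2.0 * x1 + 0.0 * x2 + rng.normal(0, 0.1, n)

b, SS_res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
p = X.shape[1]
dof = n - p
s2 = SS_res[0] / dof                                 # estimate of the error variance

cov_b = s2 * np.linalg.inv(X.T @ X)                  # covariance of the estimates
se_b = np.sqrt(np.diag(cov_b))
t_values = b / se_b
t_crit = stats.t.ppf(0.975, dof)                     # 95% two-sided test

for name, bi, ti in zip(["b0", "b1", "b2"], b, t_values):
    print(f"{name} = {bi:+.3f}  t = {ti:+.2f}  significant: {abs(ti) > t_crit}")
```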

In a general way, we can state that the projection of a pattern of points on an axis produces a point which is imaged in the dual space. The matrix-to-vector product can thus be seen as a device for passing from one space to another. This property of swapping between spaces provides a geometrical interpretation of many procedures in data analysis such as multiple linear regression and principal components analysis, among many others [12] (see Chapters 10 and 17). [Pg.53]

As explained before, the FT can be calculated by fitting the signal with all allowed sine and cosine functions. This is a laborious operation, as it requires the calculation of two parameters (the amplitudes of the sine and cosine functions) for each frequency considered. For a discrete signal of 1024 data points, this requires the calculation of 1024 parameters by linear regression and the inversion of a 1024 by 1024 matrix. [Pg.530]
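
The sketch below illustrates this point on a small scale (N = 32 points instead of 1024, with a made-up signal): fitting all allowed cosine and sine functions by linear regression requires exactly N parameters and reproduces the signal exactly.

```python
# Sketch: the discrete Fourier amplitudes obtained by fitting all allowed
# sine and cosine functions with linear regression.  N = 32 instead of 1024
# to keep the example small; the signal itself is made up.
import numpy as np

N = 32
t = np.arange(N)
x = 1.5 * np.cos(2 * np.pi * 3 * t / N) + 0.8 * np.sin(2 * np.pi * 5 * t / N)

# Design matrix: cosines for k = 0..N/2 and sines for k = 1..N/2-1,
# i.e. exactly N columns -> N parameters for N data points.
cols = [np.cos(2 * np.pi * k * t / N) for k in range(N // 2 + 1)]
cols += [np.sin(2 * np.pi * k * t / N) for k in range(1, N // 2)]
F = np.column_stack(cols)
print("number of parameters:", F.shape[1])           # N

coeffs, *_ = np.linalg.lstsq(F, x, rcond=None)

# The basis spans all N points, so the fit reproduces the signal exactly
print("max reconstruction error:", np.max(np.abs(F @ coeffs - x)))
# The cosine amplitude at k = 3 and the sine amplitude at k = 5 recover 1.5 and 0.8
print("a_3 =", coeffs[3], " b_5 =", coeffs[N // 2 + 1 + 4])
```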

This chapter has two objectives. The first is to briefly review the essentials of linear regression and to present them in a form consistent with the notation and approach followed in subsequent chapters on nonlinear regression problems. The second is to show that a large number of linear regression problems can now be handled with readily available software such as Microsoft Excel and SigmaPlot. [Pg.23]

Fig. 4 Rescaled data from Fig. 3b to show the linear relationship predicted by Eq. 16. The bulk equilibrium melting temperature Ec/kT is chosen to be approximately 0.2. The lines are the results of linear regression, and the symbols are for the variable values of B/Ec [14]...
Nonlinearity is a subject whose specifics are not extensively discussed as a topic in its own right in the multivariate calibration literature, to say the least. Textbooks routinely cover the issues of multiple linear regression and nonlinearity, but do not cover the issue for full-spectrum methods such as PCR and PLS. Some discussion does exist for multiple linear regression, for example in Chemometrics: A Textbook by D.L. Massart et al. [6]; see Section 2.1, Linear Regression (pp. 167-175), and Section 2.2, Non-linear Regression (pp. 175-181). The authors state,... [Pg.165]

Wajima and coauthors offer an alternative approach that uses animal VD data to predict human VD [13]. Several compound descriptors, including both chemical structural elements and animal VDss values, were subjected to multiple linear regression and partial least squares statistical analyses, with human VDss as the dependent variable to be predicted, using a dataset of 64 drugs. Methods derived in this manner were compared with simple allometry for overall accuracy. Their analyses yielded the following regressions... [Pg.478]
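
The 64-drug dataset and the regressions of Wajima et al. are not reproduced here; the sketch below only illustrates, with placeholder descriptors and placeholder VDss values, how multiple linear regression and partial least squares models of this kind can be set up and compared.

```python
# Sketch (placeholder X and y, NOT the Wajima dataset): cross-validated
# comparison of multiple linear regression and partial least squares.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_drugs, n_descriptors = 64, 10
X = rng.normal(size=(n_drugs, n_descriptors))              # hypothetical descriptors
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, n_drugs)  # hypothetical log VDss

mlr = LinearRegression()
pls = PLSRegression(n_components=3)

y_mlr = cross_val_predict(mlr, X, y, cv=5)
y_pls = cross_val_predict(pls, X, y, cv=5).ravel()

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

print("cross-validated RMSE, MLR:", rmse(y, y_mlr))
print("cross-validated RMSE, PLS:", rmse(y, y_pls))
```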

Once it has been established that a significant relationship exists between a pair of variables, the next step is to quantify this relationship. The usual approach is to fit a curve to the data points. The simplest curve is a straight line, which implies that a linear relationship exists between the two variables. Unless there is some theoretical reason for expecting a nonlinear relationship, the safest approach is always to use a linear regression and not to attempt to fit a higher-order curve just because it looks better. [Pg.316]
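
A small sketch of this advice (made-up, genuinely linear data): higher-order polynomials reduce the raw residuals, so the fit "looks better", but the residual standard deviation, which accounts for the number of fitted parameters, is the fairer quantity to compare.

```python
# Sketch (made-up data): compare straight-line and higher-order polynomial fits.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 15)
y = 2.0 + 0.5 * x + rng.normal(0, 0.4, x.size)       # truly linear + noise

for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    # residual standard deviation, accounting for degree + 1 fitted parameters
    s = np.sqrt(np.sum(resid**2) / (x.size - degree - 1))
    print(f"degree {degree}: residual standard deviation = {s:.3f}")
```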

Fig. 4.3.5. Calibration graphs of LAS and SPC. For each compound the slope of the fitted linear regression and the correlation coefficient are given after the substance name. The concentrations of the LAS homologues represent the sum of the concentrations of the four components, i.e. for comparison of the ionisation efficiencies the value of the slope has to be multiplied by the relative...
As stated earlier, Matlab's philosophy is to read everything as a matrix. Consequently, the basic operators for multiplication, right division, left division and power (*, /, \, ^) automatically perform the corresponding matrix operations (^ will be introduced shortly in the context of square matrices; / and \ will be discussed later, in the context of linear regression and the calculation of a pseudo-inverse, see The Pseudo-Inverse, p. 117). [Pg.19]
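
As a cross-language aside (the book itself uses Matlab, which is not shown here), the NumPy sketch below illustrates what left division does in a regression context: b = X\y solves an overdetermined system in the least-squares sense, equivalently obtained with the pseudo-inverse.

```python
# Sketch (made-up data): the NumPy equivalent of Matlab's  b = X \ y
# for a least-squares straight-line fit, and the same result via pinv.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 10)
X = np.column_stack([np.ones_like(x), x])            # design matrix: intercept + slope
y = 1.0 + 3.0 * x + rng.normal(0, 0.05, x.size)

b_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]       # Matlab: b = X \ y
b_pinv = np.linalg.pinv(X) @ y                       # explicit pseudo-inverse

print(b_lstsq, b_pinv)                               # both give the same estimates
```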

The following few equations recapitulate the relationship between our original least-squares formulas, introduced in Chapter 4.3, Non-Linear Regression, and those developed here. The first derivatives are ... [Pg.203]

Figures 2 to 9 show the solubility values used in the linear regressions and three curves. The curves were calculated from Battino's recommended equation for data between 273-350 K (solid line), the tentative three-constant equa-...
Fig. 6.1 Plot of log10 β1 + 4D vs. Im for Eq. (6.13). The straight line shows the result of the weighted linear regression, and the area between the dashed lines represents the uncertainty range of log10 β1° and Δε.
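
As an illustration of the weighted linear regression summarized in such a figure, the sketch below uses placeholder x, y and sigma values (not the quantities of Eq. (6.13)), weights each point by 1/sigma^2, and takes the parameter uncertainties from the weighted normal equations.

```python
# Sketch (placeholder data): weighted straight-line fit with 1/sigma^2 weights
# and uncertainties of the intercept and slope.
import numpy as np

x = np.array([0.1, 0.5, 1.0, 2.0, 3.0])              # hypothetical x values
y = np.array([4.05, 3.90, 3.72, 3.41, 3.12])         # hypothetical y values
sigma = np.array([0.05, 0.04, 0.04, 0.06, 0.08])     # hypothetical 1-sigma errors

w = 1.0 / sigma**2
X = np.column_stack([np.ones_like(x), x])             # intercept and slope
W = np.diag(w)

cov = np.linalg.inv(X.T @ W @ X)                      # covariance of the estimates
b = cov @ X.T @ W @ y                                 # weighted least-squares fit

intercept, slope = b
u_intercept, u_slope = np.sqrt(np.diag(cov))
print(f"intercept = {intercept:.3f} +/- {u_intercept:.3f}")
print(f"slope     = {slope:.3f} +/- {u_slope:.3f}")
```
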
ISO 8466 describes how to perform calibration. Part 1 covers linear regression and Part 2 the second-order calibration strategy. [Pg.186]

Equations containing a number of solvent parameters in linear or multiple linear regression, expressing the effect of the solvent on the rate of a reaction or on a thermodynamic equilibrium constant. See ET Values; Kamlet-Taft Solvent Parameter; Koppel-Palm Solvent Parameter; Z Value. [Pg.426]
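
As an illustration of such a multiparameter equation, the sketch below fits a Kamlet-Taft-type expression, log k = c0 + a*alpha + b*beta + s*pi*, by multiple linear regression; all solvent parameter values and rate constants are placeholders, not literature data.

```python
# Sketch (all numbers are placeholders): fitting a Kamlet-Taft-type
# multiparameter solvent equation by multiple linear regression.
import numpy as np

# hypothetical solvent parameters (alpha, beta, pi*) and log k for six solvents
alpha   = np.array([1.0, 0.8, 0.0, 0.0, 0.2, 0.5])
beta    = np.array([0.5, 0.6, 0.4, 0.7, 0.3, 0.4])
pi_star = np.array([0.6, 0.5, 0.7, 0.3, 0.9, 0.6])
log_k   = np.array([-2.1, -2.4, -3.3, -3.8, -2.9, -2.7])

X = np.column_stack([np.ones_like(alpha), alpha, beta, pi_star])
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
c0, a, b, s = coef
print(f"log k = {c0:.2f} + {a:.2f}*alpha + {b:.2f}*beta + {s:.2f}*pi*")
```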

Fe-Mg order-disorder in orthopyroxene (opx) has been investigated by Wang et al. (2005) for an orthopyroxene sample with Fe/(Fe + Mg) = 0.011. Some data (with 2σ errors) are shown in the data table below. Use the data to find the activation energy of the reaction and the Arrhenius expression of k. Use (i) simple linear regression and (ii) the best linear regression method that you can find (the best is the York program). [Pg.91]
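
The data table referred to above is not reproduced here. The sketch below therefore uses placeholder T and k values and only illustrates part (i), the simple Arrhenius regression of ln k against 1/T; the York method of part (ii), which also treats the errors in both coordinates, is not implemented.

```python
# Sketch (placeholder data, NOT the data of Wang et al. 2005):
# Arrhenius fit  ln k = ln A - Ea/(R*T)  by linear regression of ln k vs 1/T.
import numpy as np

R = 8.314  # J mol^-1 K^-1

T = np.array([873.0, 923.0, 973.0, 1023.0, 1073.0])     # hypothetical temperatures / K
k = np.array([2.0e-6, 8.5e-6, 3.1e-5, 9.8e-5, 2.9e-4])  # hypothetical rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R           # activation energy in J/mol
A = np.exp(intercept)     # pre-exponential factor

print(f"Ea = {Ea / 1000:.1f} kJ/mol")
print(f"Arrhenius expression: k = {A:.3g} * exp(-{Ea:.0f} / (R*T))")
```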

Ordinary Multiple Linear Regression and Principal Components Regression... [Pg.160]


Algebra and Multiple Linear Regression Part

Algebra and Multiple Linear Regression Part 4 - Concluding Remarks

Linear Regression and Calibration

Linear and Nonlinear Regression Functions

Linear regression

Multiple linear regression and partial least squares

Multiple linear regression and principal

Parametric Statistics in Linear and Multiple Regression

The Method of Least Squares and Simple Linear Regression
