
Regression parameters

A researcher is performing a steam-heat thermal-death curve calculation on a 10 microbial population of Bacillus stearothermophilus, where the steam [Pg.27]

FIGURE 2.5 Steam-heat thermal-death curve calculation for B. stearothermophilus. [Pg.28]


Table 8-14. Regression Parameters for Tosylate Solvolysis Reactions...
The quintessential statistical operation in analytical chemistry consists in estimating, from a calibration curve, the concentration of an analyte in an unknown sample. If the regression parameters a and b, and the unknown's analytical response y are known, the most likely concentration is given by Eq. (2.19), y being the average of all repeat determinations on the unknown. [Pg.108]
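By way of illustration, the sketch below performs this kind of inverse prediction with numpy, assuming a straight-line calibration y = a + b·x. The data values and variable names are invented for the example and are not taken from the cited text or from Eq. (2.19).

```python
import numpy as np

# Calibration data: known concentrations x and measured responses y (illustrative values).
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.05, 2.10, 4.02, 6.11, 7.95, 10.03])

# Ordinary least-squares estimates of slope b and intercept a.
b, a = np.polyfit(x, y, 1)          # np.polyfit returns the highest power first

# Repeat determinations on the unknown sample; y_bar is their average.
y_unknown = np.array([5.48, 5.52, 5.50])
y_bar = y_unknown.mean()

# Most likely concentration of the unknown: invert the calibration line.
x_hat = (y_bar - a) / b
print(f"a = {a:.3f}, b = {b:.3f}, estimated concentration = {x_hat:.3f}")
```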

For the weighted regression the standard deviation was modeled as s(x) = 100 + 5·x; this information stems from experience with the analytical technique. Intermediate results and regression parameters are given in Tables 2.13 and 2.14. Table 2.15 details the contributions the individual residuals make. [Pg.124]
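A sketch of how such a weighted straight-line fit can be set up is shown below, taking the weights as w_i = 1/s(x_i)^2. The data are invented for illustration and do not reproduce Tables 2.13-2.15.

```python
import numpy as np

# Illustrative calibration data (not the values from the cited tables).
x = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
y = np.array([260.0, 510.0, 1005.0, 2020.0, 3980.0])

# Standard-deviation model from experience with the technique: s(x) = 100 + 5*x.
s = 100.0 + 5.0 * x
w = 1.0 / s**2                     # weights inversely proportional to the variance

# Weighted least-squares estimates of slope b and intercept a.
W = np.sum(w)
x_w = np.sum(w * x) / W            # weighted means
y_w = np.sum(w * y) / W
b = np.sum(w * (x - x_w) * (y - y_w)) / np.sum(w * (x - x_w)**2)
a = y_w - b * x_w

residuals = y - (a + b * x)
print(f"slope = {b:.4f}, intercept = {a:.2f}")
print("weighted residual contributions:", w * residuals**2)
```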

The linear regression parameters are calculated independently for each set. [Pg.374]

The process must be iterated until convergence, and the final estimates are denoted β_LB, b_i,LB, and ω_LB. The individual regression parameters can therefore be estimated by replacing the final fixed-effects and random-effects estimates in the function g so that ... [Pg.99]

The expression x^T(j)P(j-1)x(j) in eq. (41.4) represents the variance of the predictions, y(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters P(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j) and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix only depends on the design of the experiments given by x and on the variance of the experimental error given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]

By way of illustration, the regression parameters of a straight line with slope = 1 and intercept = 0 are recursively estimated. The results are presented in Table 41.1. For each step of the estimation cycle, we included the values of the innovation, variance-covariance matrix, gain vector and estimated parameters. The variance of the experimental error of all observations y is 25 10 absorbance units, which corresponds to r = 25 10 au for all j. The recursive estimation is started with a high value (10 ) on the diagonal elements of P and a low value (1) on its off-diagonal elements. [Pg.580]
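The recursion described above can be written in a few lines of code. The sketch below implements a generic recursive least-squares update (innovation, scalar denominator, gain vector, updated parameters and variance-covariance matrix) for a straight line with slope 1 and intercept 0; the noise level and the starting values of P are placeholders rather than the exact figures of the worked example.

```python
import numpy as np

# True line: slope 1, intercept 0, observed with small measurement noise.
rng = np.random.default_rng(0)
x_design = np.arange(1.0, 11.0)
y_obs = 1.0 * x_design + 0.0 + rng.normal(0.0, 0.005, size=x_design.size)

r = 0.005**2                       # variance of the experimental error (placeholder)
b = np.zeros(2)                    # parameter vector [intercept, slope]
P = np.diag([1e4, 1e4])            # large diagonal: little prior knowledge of the parameters

for xj, yj in zip(x_design, y_obs):
    h = np.array([1.0, xj])        # design row for observation j
    innovation = yj - h @ b        # difference between observation and current prediction
    denom = r + h @ P @ h          # scalar, so no matrix inversion is required
    gain = P @ h / denom           # gain vector
    b = b + gain * innovation      # updated parameter estimates
    P = P - np.outer(gain, h) @ P  # updated variance-covariance matrix

print("estimated intercept, slope:", b)
```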

Weighted regression of 238U-234U-230Th-232Th isotope data on three or more coeval samples provides robust estimates of the isotopic information required for age calculation. Ludwig (2003) details the use of maximum likelihood estimation of the regression parameters in either coupled XY-XZ isochrons or a single three-dimensional XYZ isochron, where X, Y and Z correspond to either (1) 238U/232Th, 230Th/232Th and... [Pg.414]

The general pre-conditions for the estimation of regression parameters (regression coefficients a and b as well as the uncertainties of the model and the relevant estimates) are the following ... [Pg.155]

Nonlinear calibration is carried out by nonlinear regression, where two types have to be distinguished: (1) real (intrinsically) nonlinear regression and (2) quasilinear (intrinsically linear) regression. The latter is characterized by the fact that the model is nonlinear in the data but not in the regression parameters. Typical examples are polynomials... [Pg.177]
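The quasilinear case can be illustrated with a second-degree polynomial: the model is nonlinear in x but linear in the coefficients, so ordinary linear least squares still applies. The calibration data in the sketch below are invented.

```python
import numpy as np

# Illustrative calibration data with slight curvature.
x = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.02, 1.05, 2.15, 4.55, 7.15, 9.95, 13.05])

# Design matrix for y = b0 + b1*x + b2*x^2: nonlinear in x, linear in b0, b1, b2.
X = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("b0, b1, b2 =", coef)
```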

Methanol and water mixtures were employed as mobile phases, the methanol concentration varying between 20 and 95 per cent in steps of 5 per cent. Linear correlations were calculated separately for each synthetic dye between the methanol concentration in the mobile phase and the RM values. The regression parameters are compiled in Table 3.8. It was found that the regression parameters of the synthetic dyes show considerable differences,... [Pg.381]

Both assumptions are mainly needed for constructing confidence intervals and tests for the regression parameters, as well as prediction intervals for new observations in x. The assumption of a normal distribution additionally helps to avoid skewness and outliers; a mean of 0 guarantees a linear relationship. The constant variance, also called homoscedasticity, is also needed for inference (confidence intervals and tests). This assumption would be violated if the variance of y (which is equal to the residual variance σ², see below) depended on the value of x, a situation called heteroscedasticity, see Figure 4.8. [Pg.135]
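As a sketch of how these assumptions are used, the snippet below computes t-based 95% confidence intervals for the intercept and slope of a straight-line fit; it presupposes normally distributed residuals with constant variance, and the data are invented.

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1])
n = x.size

b, a = np.polyfit(x, y, 1)                    # slope and intercept by least squares
residuals = y - (a + b * x)
s2 = np.sum(residuals**2) / (n - 2)           # residual variance estimate
sxx = np.sum((x - x.mean())**2)

se_b = np.sqrt(s2 / sxx)                      # standard error of the slope
se_a = np.sqrt(s2 * (1.0 / n + x.mean()**2 / sxx))  # standard error of the intercept

t = stats.t.ppf(0.975, df=n - 2)              # two-sided 95% quantile of the t-distribution
print(f"slope:     {b:.3f} +/- {t * se_b:.3f}")
print(f"intercept: {a:.3f} +/- {t * se_a:.3f}")
```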

There are various procedures to estimate the unknown regression parameters and the parameters of the kernel functions. One approach is to estimate the prototypes m_j and scale parameters s_j separately by clustering methods, and then to estimate the regression parameters; however, this approach does not incorporate information about the y-variable. Another approach is to use optimization techniques to minimize the RSS of the residuals y_i - f(x_i) obtained via Equation 4.95, for i = 1, ..., n. [Pg.184]
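A minimal sketch of the first approach is given below: prototypes m_j from k-means clustering (no y-information used at this step), one common scale parameter, Gaussian kernel features, and the regression parameters from least squares. The Gaussian kernel form, the number of kernels, the scale choice, and the data are illustrative assumptions, since Equation 4.95 itself is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative one-dimensional data with a nonlinear x-y relationship.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 60)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

# Step 1: prototypes m_j from clustering (the y-variable is not used here).
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(x.reshape(-1, 1))
m = np.sort(km.cluster_centers_.ravel())
s = np.full(k, (x.max() - x.min()) / k)       # one common scale parameter per kernel (assumption)

# Step 2: Gaussian kernel features, then regression parameters by least squares.
Phi = np.exp(-((x[:, None] - m[None, :])**2) / (2.0 * s[None, :]**2))
Phi = np.column_stack([np.ones_like(x), Phi]) # include an intercept term
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

rss = np.sum((y - Phi @ coef)**2)             # residual sum of squares
print("regression parameters:", np.round(coef, 3), " RSS =", round(rss, 3))
```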

The numerator in Equation 4.100 is the (absolute) size of all Lasso regression parameters for a particular choice of λ_L (compare Equation 4.89), and the denominator describes the maximal possible (absolute) size of the Lasso regression parameters (in case there is no singularity problem this would correspond to the OLS solution). The optimal choice is at a fraction of 0.3, which corresponds to an MSEP_CV of 63.4 and to an SEP_CV of 7.7. [Pg.196]
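The fraction can be computed directly as the ratio of the absolute size of the Lasso coefficients to that of the OLS (maximal) solution. The sketch below uses scikit-learn's Lasso on simulated data; the penalty grid and the data are illustrative, and the MSEP_CV and SEP_CV values quoted above belong to the original example, not to this sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative regression data (well conditioned, so OLS gives the maximal coefficient size).
rng = np.random.default_rng(2)
X = rng.normal(size=(80, 6))
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.5, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=80)

# Maximal possible (absolute) size: the OLS solution.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
size_ols = np.sum(np.abs(beta_ols))

# Fraction sum|beta_lasso| / sum|beta_ols| for a few choices of the penalty.
for lam in [0.01, 0.1, 0.5, 1.0]:
    beta_lasso = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    fraction = np.sum(np.abs(beta_lasso)) / size_ols
    print(f"penalty = {lam:4.2f}  fraction = {fraction:.2f}")
```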

An advantage of LR in comparison to LDA is the fact that statistical inference in the form of tests and confidence intervals for the regression parameters can be derived (compare Section 4.3). It is thus possible to test whether the jth regression coefficient bj = 0. If the hypothesis can be rejected, the jth regressor variable xj... [Pg.222]
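As an illustration, the sketch below fits a logistic regression with statsmodels and reports Wald tests of the hypothesis bj = 0 for each coefficient. The use of statsmodels, the simulated data, and the variable names are assumptions made for the example, not part of the cited text.

```python
import numpy as np
import statsmodels.api as sm

# Simulated two-class data: x1 is informative, x2 is pure noise.
rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x1)))   # true model ignores x2
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.Logit(y, X).fit(disp=0)

# Wald test of H0: b_j = 0 for each regression parameter.
print(fit.params)        # estimated coefficients
print(fit.bse)           # their standard errors
print(fit.pvalues)       # p-values; only x1 should be clearly significant
```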

In Mitchell's work, unequal variance of the response data was compensated for by weighting the data by the variance at each level. The regression parameters and the confidence band around the regression line were estimated by least squares. The overall level of uncertainty, α, was divided between the variation in response values and the variance in the regression estimation. His overall α was 0.05. The prediction interval was estimated around a single response determination. [Pg.184]
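For reference, the unweighted version of such a prediction interval around a single new response determination at x0 can be computed as below; a weighted treatment of the kind described above would replace the constant residual variance with the level-dependent one. The data are invented.

```python
import numpy as np
from scipy import stats

# Illustrative calibration data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.12, 1.05, 2.18, 2.95, 4.10, 5.02])
n = x.size

b, a = np.polyfit(x, y, 1)                    # slope and intercept by least squares
s2 = np.sum((y - (a + b * x))**2) / (n - 2)   # residual variance
sxx = np.sum((x - x.mean())**2)

# 95% prediction interval for a single new response determination at x0.
x0 = 2.5
y0_hat = a + b * x0
se_pred = np.sqrt(s2 * (1.0 + 1.0 / n + (x0 - x.mean())**2 / sxx))
t = stats.t.ppf(0.975, df=n - 2)
print(f"predicted response at x0 = {x0}: {y0_hat:.3f} +/- {t * se_pred:.3f}")
```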

Particulate-Reflectance Functions. Regression parameters for particulate-reflectance relationships are shown in Table III. [Pg.73]


See other pages where Regression parameters is mentioned: [Pg.110]    [Pg.118]    [Pg.198]    [Pg.97]    [Pg.545]    [Pg.324]    [Pg.337]    [Pg.367]    [Pg.576]    [Pg.577]    [Pg.579]    [Pg.581]    [Pg.582]    [Pg.600]    [Pg.40]    [Pg.140]    [Pg.319]    [Pg.162]    [Pg.62]    [Pg.374]    [Pg.376]    [Pg.378]    [Pg.133]    [Pg.134]    [Pg.138]    [Pg.145]    [Pg.169]    [Pg.177]    [Pg.181]    [Pg.181]    [Pg.184]    [Pg.194]    [Pg.236]    [Pg.75]   



