
The Regression Model

The ability to measure simultaneously the isotopes of two different elements with high precision has never been a particular challenge in mass spectrometry. The challenge has always been to ensure the accuracy of the mass bias transfer. Hence, a fundamental limitation of the traditional mass bias correction models is their... [Pg.124]

Note that the above expressions are not merely definitions, as they also imply linearity in the isotope ratio response. In other words, the correction factors (K) do not depend on the magnitude of the isotope amount ratio (r). On the logarithmic scale, these two expressions become [Pg.125]

From this, variable rearrangement leads to the following  [Pg.125]

This expression forms a log-linear regression between the measured isotope amount ratios of the measurand and the calibrant. The intercept (a) and the slope (b) of this regression are [Pg.125]

Since the values of the intercept (a) and slope (b) are known from fitting a straight line by the least-squares method, and the isotope amount ratio of the calibrant is known before the experiment, the isotope amount ratio of the measurand can be calculated [32]: [Pg.125]
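A minimal numerical sketch of this log-linear calibration is given below. The data, the drift model, and the variable names (r_cal, r_meas, R_cal) are invented for illustration and are not taken from the source; the only assumption carried over is that ln(r_measurand) is regressed on ln(r_calibrant) and that the calibrant's certified ratio is known.

```python
# Minimal sketch of the log-linear regression calibration described above.
# Assumptions (not from the source): arrays r_cal and r_meas hold the isotope
# amount ratios of calibrant and measurand measured over one analytical
# session, and R_cal is the certified isotope amount ratio of the calibrant.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical drifting measured ratios (the mass bias drifts in time, but in
# a correlated way for measurand and calibrant).
drift = 1.00 + 0.02 * rng.random(20)
r_cal = 0.5000 * drift          # measured calibrant ratios
r_meas = 1.2000 * drift**1.1    # measured measurand ratios
R_cal = 0.5000                  # certified calibrant ratio (known beforehand)

# Fit ln(r_meas) = a + b * ln(r_cal) by ordinary least squares.
b, a = np.polyfit(np.log(r_cal), np.log(r_meas), 1)

# From a = ln(R_meas) - b * ln(R_cal), the calibrated measurand ratio is:
R_meas = np.exp(a) * R_cal**b
print(f"intercept a = {a:.4f}, slope b = {b:.4f}, R_meas = {R_meas:.4f}")
```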


Figure caption: Plot of the residual error in y as a function of x. The distribution of the residuals in (a) indicates that the regression model was appropriate for the data; the distributions in (b) and (c) indicate that the model does not provide a good fit for the data. [Pg.124]
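The diagnostic described in the caption can be reproduced with a few lines of code. The data below are synthetic (deliberately mildly curved) and serve only to show the kind of residual pattern that argues against a straight-line model.

```python
# Residual-plot sketch: residuals scattered randomly about zero support the
# fitted model; a clear pattern (curvature, widening spread) argues against it.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.8 * x + 0.05 * x**2 + rng.normal(0, 0.3, x.size)  # mildly curved data

slope, intercept = np.polyfit(x, y, 1)        # straight-line fit
residuals = y - (intercept + slope * x)

plt.scatter(x, residuals)
plt.axhline(0.0, linestyle="--")
plt.xlabel("x")
plt.ylabel("residual in y")
plt.title("Curvature in the residuals suggests the linear model is inadequate")
plt.show()
```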

The regression models considered earlier apply only to functions containing a single independent variable. Analytical methods, however, are frequently subject to determinate sources of error due to interferents that contribute to the measured signal. In the presence of a single interferent, equations 5.1 and 5.2 become... [Pg.127]

The second task discussed is the validation of the regression models with the aid of cross-validation (CV) procedures. The leave-one-out (LOO) as well as the leave-many-out CV methods are used to evaluate the predictive ability of QSAR models. In the case of noisy and/or heterogeneous data, the LM method is shown to outperform the LS method substantially with respect to the suitability of the regression models built. Especially noticeable distinctions between the LS and LM methods are demonstrated with the use of the LOO CV criterion. [Pg.22]
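The sketch below shows how an LOO comparison of this kind can be set up. scikit-learn has no least-median-of-squares estimator, so an ordinary least-squares model and a robust Huber regressor stand in for the LS/LM pair; the data, outliers, and error metric are all assumptions made for illustration.

```python
# Leave-one-out cross-validation of an ordinary and a robust regression model
# on synthetic data containing a few gross outliers ("noisy data").
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(30, 1))
y = 1.5 * X.ravel() + rng.normal(0, 0.5, 30)
y[:3] += 8.0                                  # a few gross outliers

loo = LeaveOneOut()
for name, model in [("LS", LinearRegression()), ("robust", HuberRegressor())]:
    scores = cross_val_score(model, X, y, cv=loo,
                             scoring="neg_mean_absolute_error")
    print(f"{name}: LOO mean absolute error = {-scores.mean():.3f}")
```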

Figure 1-6 shows plots of the regression model and the experimental results. Equation 1-218 can now be expressed as ... [Pg.49]

The figures obtained are shown in the table below which, as well as giving the values obtained, includes the estimates from the regression model with their 95% confidence intervals ... [Pg.70]

In the case of a fast gradient (below 15 min), S can be considered constant for all the investigated molecules and will have only a small influence on the retention times of the compounds. Thus, the gradient retention times of a calibration set of compounds are linearly related to their φ0 values [39]. Moreover, Valko et al. also demonstrated that the faster the gradient, the better the correlation between the gradient retention time and φ0 [40]. Once the regression model has been established for the calibration standards, Eq. 8 allows the conversion of gradient retention times to CHI values for any compound run in the same gradient system, as illustrated in the sketch below. The results are then suitable for interlaboratory comparison and database construction. The CHI scale (between 0 and 100) can be used as an independent measure of lipophilicity or easily converted to a log P scale. [Pg.342]
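A minimal sketch of this calibration-and-conversion step follows. The retention times and φ0 values of the standards are invented placeholders; in practice they come from the measured calibration set, and the fitted line plays the role of Eq. 8.

```python
# Calibrate CHI against gradient retention time with a set of standards, then
# convert the retention time of any test compound run in the same gradient.
import numpy as np

t_cal = np.array([2.1, 3.0, 3.8, 4.6, 5.5, 6.3])           # retention times (min), hypothetical
phi0_cal = np.array([20.0, 32.0, 45.0, 58.0, 71.0, 83.0])  # phi0 of the standards, hypothetical

# Linear calibration: CHI (phi0 scale) versus gradient retention time.
slope, intercept = np.polyfit(t_cal, phi0_cal, 1)

def chi(t_r):
    """Convert a gradient retention time to a CHI value (0-100 scale)."""
    return slope * t_r + intercept

print(f"CHI of a compound eluting at 4.2 min: {chi(4.2):.1f}")
```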

One might suspect that fitting all Y-variables simultaneously, i.e. in one overall multivariate regression, might make a difference for the regression model. This is not the case, however. To see this, let us state the multivariate (i.e. two or more dependent variables) regression model as ... [Pg.323]
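The point can also be checked numerically, as in the sketch below: with ordinary least squares, regressing all response columns of Y on X at once gives exactly the same coefficients as regressing each column separately. The data are random and purely illustrative.

```python
# Joint multivariate OLS versus one univariate OLS per response column.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
Y = rng.normal(size=(50, 3))          # three dependent variables

# One overall multivariate regression: B = (X'X)^-1 X'Y.
B_joint, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Separate univariate regressions, one response column at a time.
B_separate = np.column_stack(
    [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(Y.shape[1])]
)

print(np.allclose(B_joint, B_separate))   # True: the fits coincide column-wise
```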

In recent years there has been much activity to devise methods for multivariate calibration that take non-linearities into account. Artificial neural networks (Chapter 44) are well suited for modelling non-linear behaviour and they have been applied with success in the field of multivariate calibration [47,48]. A drawback of neural net models is that interpretation and visualization of the model is difficult. Several non-linear variants of PCR and PLS regression have been proposed. Conceptually, the simplest approach towards introducing non-linearity in the regression model is to augment the set of predictor variables (x1, x2, ...) with their respective squared terms (x1², x2², ...) and, optionally, their possible cross-product... [Pg.378]
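A short sketch of this simplest non-linear extension is given below: the predictor block is augmented with squared and cross-product terms before a linear (here PLS) regression. The data, the degree, and the number of components are arbitrary choices for illustration, not values from the source.

```python
# Augment predictors with squared and cross-product terms, then fit PLS.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(60, 3))
y = 1.0 + X[:, 0] - 2.0 * X[:, 1] ** 2 + X[:, 0] * X[:, 2] + rng.normal(0, 0.05, 60)

# x1, x2, x3  ->  x1, x2, x3, x1^2, x1*x2, ..., x3^2 (no constant column).
X_aug = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

model = PLSRegression(n_components=4).fit(X_aug, y)
print("R^2 on the augmented predictors:", round(model.score(X_aug, y), 3))
```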

The relation between the calculated amount of IPP excreted in urine resulting from dermal exposure and actual exposure of the hands was studied for both work clothing and protective clothing trials using the regression model ... [Pg.75]

In the error-in-variable method (EVM), measurement errors in all variables are treated in the parameter estimation problem. EVM provides both parameter estimates and reconciled data estimates that are consistent with respect to the model. The regression models are often implicit and undetermined (Tjoa and Biegler, 1992), that is,... [Pg.185]
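The sketch below illustrates the general EVM idea with a deliberately simple, explicit model y = a·x: both the parameter and the reconciled measurements are adjusted together. The data, error standard deviations, and optimizer choice are assumptions for illustration; this is not the Tjoa-Biegler formulation itself.

```python
# Error-in-variables sketch: adjust parameter and measured values jointly so
# that the reconciled data satisfy the model while staying close to the data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
x_true = np.linspace(1, 10, 15)
a_true = 2.5
x_meas = x_true + rng.normal(0, 0.2, x_true.size)            # errors in x ...
y_meas = a_true * x_true + rng.normal(0, 0.5, x_true.size)   # ... and in y
sx, sy = 0.2, 0.5                                            # known error std devs

def objective(z):
    a, x_hat = z[0], z[1:]
    y_hat = a * x_hat                    # reconciled data obey the model y = a*x
    return np.sum(((x_hat - x_meas) / sx) ** 2 + ((y_hat - y_meas) / sy) ** 2)

z0 = np.concatenate([[1.0], x_meas])     # initial guess: a = 1, x_hat = x_meas
res = minimize(objective, z0, method="BFGS")
print("estimated a =", round(res.x[0], 3))
```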

The y-loadings qj for components 1 to a are computed through the regression model (Equation 4.63). [Pg.171]

Steps 11-13 are the OLS estimates using the regression models (Equations 4.62 through 4.64). Step 14 performs a deflation of the X and of the Y matrix. The residual matrices X1 and Y1 are then used to derive the next PLS components, following the scheme of steps 1-10. Finally, the regression coefficients B from Equation 4.61, linking the y-data with the x-data, are obtained as B = W(PᵀW)⁻¹Cᵀ. [Pg.173]
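The compact NIPALS-style sketch below follows the same general scheme (extract a component, deflate X and Y, repeat, then form B from the collected weights and loadings). It is a generic textbook algorithm written from the common W/P/C/T notation, not a line-by-line transcription of the source's steps 1-14, and the data are random.

```python
# NIPALS-style PLS2 with deflation and B = W (P'W)^-1 C'.
import numpy as np

def pls2_nipals(X, Y, n_comp, tol=1e-10, max_iter=500):
    X = X - X.mean(axis=0)              # work on column-centered copies
    Y = Y - Y.mean(axis=0)
    n, p = X.shape
    q = Y.shape[1]
    W = np.zeros((p, n_comp)); P = np.zeros((p, n_comp)); C = np.zeros((q, n_comp))
    for a in range(n_comp):
        u = Y[:, [0]]                                # start with one y-column
        for _ in range(max_iter):
            w = X.T @ u / (u.T @ u); w /= np.linalg.norm(w)   # x-weights
            t = X @ w                                # x-scores
            c = Y.T @ t / (t.T @ t)                  # y-loadings (inner regression)
            u_new = Y @ c / (c.T @ c)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new; break
            u = u_new
        p_a = X.T @ t / (t.T @ t)                    # x-loadings
        X = X - t @ p_a.T                            # deflate X ...
        Y = Y - t @ c.T                              # ... and Y
        W[:, [a]], P[:, [a]], C[:, [a]] = w, p_a, c
    B = W @ np.linalg.inv(P.T @ W) @ C.T             # regression coefficients
    return B

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 6))
Y = X @ rng.normal(size=(6, 2)) + 0.01 * rng.normal(size=(40, 2))
B = pls2_nipals(X, Y, n_comp=3)
print("B has shape", B.shape)            # (6, 2): x-variables by y-variables
```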

FIGURE 4.28 Visualization of kernel regression with two Gaussian kernels. The point sizes reflect the influence on the regression model. The point sizes in the left plot are for the solid kernel function, those in the right plot are for the dashed kernel function. [Pg.184]
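In the spirit of that figure, the sketch below implements kernel regression with a Gaussian kernel: each training point influences the local estimate with a weight that decays with its distance from the query point (the influence the point sizes depict). The data, bandwidth, and the Nadaraya-Watson form are illustrative assumptions.

```python
# Nadaraya-Watson kernel regression with a Gaussian kernel.
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.5):
    """Locally weighted mean with Gaussian weights."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 6, 60))
y = np.sin(x) + rng.normal(0, 0.15, x.size)

x_grid = np.linspace(0, 6, 200)
y_hat = nadaraya_watson(x_grid, x, y, bandwidth=0.4)
print("fitted curve evaluated at", x_grid.size, "grid points")
```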

When this method is used, Table II shows the results obtained when the regression model is the normal first-order linear model. Since the maximum absolute studentized residual (Max ASR) found, 2.29, was less than the critical value relative to this model, 2.78, the conclusion is that there are no inconsistent values. [Pg.46]
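The outlier screen can be sketched as follows: fit the first-order model, compute internally studentized residuals from the hat matrix, and compare the maximum absolute value against a tabulated critical value. The data here are synthetic, and the critical value 2.78 quoted above belongs to the source's own data set; it is reused only as a placeholder.

```python
# Maximum absolute studentized residual check for a first-order linear model.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(1, 10, 12)
y = 3.0 + 1.2 * x + rng.normal(0, 0.4, x.size)

X = np.column_stack([np.ones_like(x), x])         # design matrix (intercept, x)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta                                   # raw residuals
H = X @ np.linalg.inv(X.T @ X) @ X.T               # hat matrix
s = np.sqrt(e @ e / (len(y) - X.shape[1]))         # residual standard deviation
r_stud = e / (s * np.sqrt(1.0 - np.diag(H)))       # studentized residuals

max_asr = np.abs(r_stud).max()
critical = 2.78                                    # placeholder tabulated critical value
print("max |studentized residual| =", round(max_asr, 2),
      "-> no inconsistent values" if max_asr < critical else "-> possible outlier")
```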

They were also typical when the regression model chosen was first order. Mean-level bandwidths greater than 20-30% probably indicate errors in the analysis process that should not be tolerated. In this case the techniques would be carefully scrutinized to find errors, outliers, or changing chromatographic conditions. These should be remedied and the analysis repeated whenever possible. Certain manipulations can be done to reduce the bandwidth values. For example, they would be... [Pg.158]

Cases (2) and (3) reflect an inherent advantage of qualitative models over quantitative ones: it is not necessary to assume a quantitative relationship between the x variables and the y variable. This results in less burden on the regression modeling procedure, and a greater likelihood of generating an effective model. The disadvantages of qualitative models are that they generate less-specific results, and that they tend to be somewhat more complicated to implement in a real-time environment. [Pg.389]

Each of the regression models is evaluated for prediction ability, typically using cross-validation. [Pg.424]

Predictions of log P with regression. As would be expected, the largest values of the explained variation (r squared) and the smallest standard errors of estimate found with the regression models were those that included all 90 variables. These models... [Pg.154]

The PLS technique gives a stepwise solution for the regression model, which converges to the least squares solution. The final model is the sum of a series of submodels. It can handle multiple response variables, highly correlated predictor variables grouped into several blocks and underdetermined systems, where the number of samples is less than the number of predictor variables. Our model (not including the error terms) is ... [Pg.272]

