Big Chemical Encyclopedia


Ordinary Least-Squares Regression

We could write the regression as y_i = (α + μ) + βx_i + (ε_i − μ) = α* + βx_i + ε_i*. Then, we know that E[ε_i*] = 0, and that it is independent of x_i. Therefore, the second form of the model satisfies all of our assumptions for the classical regression. Ordinary least squares will give unbiased estimators of α* and β. As long as μ is not zero, the constant term will differ from α. [Pg.8]
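A quick numerical check of this point (a sketch with invented values α = 2, β = 1.5, and error mean μ = 0.7, not taken from the text): the OLS slope estimate still targets β, while the intercept estimate targets α + μ.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, mu = 2.0, 1.5, 0.7   # true intercept, slope, and error mean

# Simulate y = alpha + beta*x + e, where E[e] = mu is nonzero.
x = rng.uniform(0, 10, size=5000)
e = mu + rng.normal(0, 1, size=5000)
y = alpha + beta * x + e

# OLS fit: the slope estimate targets beta; the intercept absorbs mu
# and targets alpha + mu, the constant term of the second form.
b, a = np.polyfit(x, y, 1)
print(b, a)   # slope near 1.5, intercept near 2.7
```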

One of the earliest interpretations of latent vectors is that of lines of closest fit [9]. Indeed, if the inertia along v1 is maximal, then the inertia along all other directions perpendicular to v1 must be minimal. This is similar to the regression criterion in orthogonal least squares regression, which minimizes the sum of squared deviations perpendicular to the regression line (Section 8.2.11). In ordinary least squares regression one minimizes the sum of squared deviations from the regression line in the direction of the dependent measurement, which assumes that the independent measurement is without error. Similarly, the plane formed by v1 and v2 is a plane of closest fit, in the sense that the sum of squared deviations perpendicular to the plane is minimal. Since latent vectors v_i contribute... [Pg.106]
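The two criteria can be contrasted numerically. In this sketch (simulated data, illustrative only) the line of closest fit is obtained as the first latent vector of the centred data via the SVD, and its slope is compared with the OLS slope, which minimizes vertical deviations only.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 2, size=400)
y = 0.8 * x + rng.normal(0, 0.5, size=400)
Xc = np.column_stack([x, y])
Xc = Xc - Xc.mean(axis=0)

# OLS: minimizes squared deviations in the direction of y only.
slope_ols = np.polyfit(x, y, 1)[0]

# Line of closest fit: the first latent vector of the centred data
# minimizes the sum of squared *perpendicular* deviations.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
v1 = Vt[0]                     # direction of maximal inertia
slope_tls = v1[1] / v1[0]

print(slope_ols, slope_tls)    # both close to the true slope 0.8
```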

We have seen that PLS regression (covariance criterion) forms a compromise between ordinary least squares regression (OLS, correlation criterion) and principal components regression (variance criterion). This has inspired Stone and Brooks [15] to devise a method in such a way that a continuum of models can be generated embracing OLS, PLS and PCR. To this end the PLS covariance criterion, cov(t, y) = s_t s_y r, is modified into a criterion T = r... (For... [Pg.342]

Ordinary least squares regression of MV upon MX produces a slope of 9.32 and an intercept of 2.36. From these we derive the parameters of the simple Michaelis-Menten reaction (eq. (39.116)) ... [Pg.504]
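If eq. (39.116) is read as the double-reciprocal (Lineweaver-Burk) form, 1/v = (Km/Vmax)(1/x) + 1/Vmax — an assumption, since the equation itself is not reproduced in this excerpt — the quoted slope and intercept would translate into the Michaelis-Menten parameters as follows:

```python
# Hypothetical double-reciprocal reading of the quoted OLS fit:
# slope = Km/Vmax and intercept = 1/Vmax.
slope, intercept = 9.32, 2.36

Vmax = 1.0 / intercept        # maximal rate
Km = slope / intercept        # Michaelis constant
print(Vmax, Km)               # about 0.424 and 3.949
```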

The expression x^T(j)P(j−1)x(j) in eq. (41.4) represents the variance of the predictions, ŷ(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters b(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j) and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix only depends on the design of the experiments given by x and on the variance of the experimental error given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]
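A minimal sketch of the recursion described here. Since eqs. (41.3) and (41.4) are not reproduced in this excerpt, the standard recursive least-squares form is assumed; the gain vector k, the initial P, and the simulated data are illustrative. Note that the denominator x^T P x + r is a scalar, so no matrix inversion is needed, and the final estimate agrees with batch OLS.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 5, n)])  # intercept + x
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.3, n)

b = np.zeros(2)
P = np.eye(2) * 1e6          # large initial parameter uncertainty
r = 0.3 ** 2                 # variance of the experimental error
for xj, yj in zip(X, y):
    Px = P @ xj
    k = Px / (xj @ Px + r)           # gain; denominator is a scalar
    b = b + k * (yj - xj @ b)        # update parameter estimates
    P = P - np.outer(k, Px)          # update variance-covariance matrix

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b, b_ols)              # the two estimates agree closely
```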

The linearity of a method is defined as its ability to provide measurement results that are directly proportional to the concentration of the analyte, or are directly proportional after some type of mathematical transformation. Linearity is usually documented as the ordinary least squares (OLS) curve, or simply the linear regression curve, of the measured instrumental responses (either peak area or height) as a function of increasing analyte concentration [22, 23]. The use of peak areas is preferred over peak heights for constructing the calibration curve [24]. ... [Pg.249]
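In outline, such a calibration curve reduces to a first-degree OLS fit of peak area on concentration, with the coefficient of determination as a first linearity check. The data values below are invented for this sketch.

```python
import numpy as np

# Hypothetical calibration data: peak areas at increasing analyte
# concentrations (arbitrary units, invented for illustration).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
area = np.array([10.2, 20.5, 40.1, 81.0, 160.3, 322.1])

# OLS calibration curve: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Coefficient of determination as a simple linearity check
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(slope, intercept, r2)
```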

Regression can be performed directly with the values of the variables (ordinary least-squares regression, OLS) but in the most powerful methods, such as principal component regression (PCR) and partial least-squares regression (PLS), it is done via a small set of intermediate linear latent variables (the components). This approach has important advantages ... [Pg.118]
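A compact sketch of the latent-variable idea behind PCR (simulated collinear data; keeping two components is an assumption of the example): y is regressed on a small set of principal component scores rather than on the collinear x-variables directly, and the coefficients are then mapped back to the original variables.

```python
import numpy as np

rng = np.random.default_rng(3)
# 30 samples, 6 collinear x-variables driven by 2 underlying factors
T = rng.normal(size=(30, 2))
X = T @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(30, 6))
y = T @ np.array([1.0, -0.5]) + 0.05 * rng.normal(size=30)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# PCR: regress y on the first two latent variables (PC scores)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T
q, *_ = np.linalg.lstsq(scores, yc, rcond=None)
b_pcr = Vt[:2].T @ q               # coefficients in the original variables

resid = yc - Xc @ b_pcr
print(float(resid @ resid))        # small residual sum of squares
```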

For only one x-variable and one y-variable—simple x/y-regression—the basic equations are summarized in Section 4.3.1. Ordinary least-squares (OLS) regression is the classical method, but also a number of robust... [Pg.119]

Stone, M., Brooks, R. J., J. R. Statist. Soc. B, 52, 1990, 237-269. Continuum regression: cross-validated sequentially constructed prediction embracing ordinary least squares, partial least squares and principal component regression. [Pg.207]

Ordinary least squares regression requires constant variance across the range of data. This has typically not been satisfied with chromatographic data (4, 9, 10). Some have adjusted data to constant variance by a weighted least squares method ( ). The other general adjustment method has been transformation of the data; the log-log transformation is commonly used (9, 10). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]
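A minimal illustration of the weighted adjustment, assuming the standard deviation grows proportionally with the signal (the classical 1/x² variance weighting); the data are simulated for the sketch. Both fits recover the slope, but the weighted fit does not let the noisy high-concentration points dominate.

```python
import numpy as np

rng = np.random.default_rng(4)
conc = np.repeat(np.array([1.0, 5.0, 10.0, 50.0, 100.0]), 5)
# Response whose standard deviation is proportional to concentration
resp = 2.0 * conc + rng.normal(0, 0.05 * conc)

# Weighted least squares: np.polyfit takes weights as 1/sigma, so with
# sigma proportional to conc this is the classical 1/x^2 variance weighting.
b_w, a_w = np.polyfit(conc, resp, 1, w=1.0 / conc)

b_u, a_u = np.polyfit(conc, resp, 1)   # unweighted OLS for comparison
print(b_w, b_u)                        # both slopes near the true 2.0
```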

The transformed response values were regressed on the transformed amount values using the simple linear regression model and ordinary least squares estimation. The standard deviation of the response values (about the regression line) was calculated, and plots were formed of the transformed response values and of the residuals versus transformed amounts. [Pg.136]

To model the relationship between PLA and PLR, we used each of these in ordinary least squares (OLS) multiple regression to explore the relationship between the dependent variables Mean PLR or Mean PLA and the independent variables (Berry and Feldman, 1985). OLS regression was used because the data satisfied the OLS assumptions for the model as the best linear unbiased estimator (BLUE): the distribution of errors (residuals) is normal, and the residuals are uncorrelated with each other and homoscedastic (constant variance), with a mean of 0. We also analyzed predicted values plotted against residuals, as they are a better indicator of non-normality in aggregated data, and found them also to be homoscedastic and independent of one another. [Pg.152]
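The residual checks described here can be reproduced in outline (simulated data, illustrative only). With an intercept in the model, OLS residuals have mean zero and are orthogonal to the fitted values by construction, which is what the correlation check below reflects; any visible trend in a residuals-versus-fitted plot therefore signals misspecification or heteroscedasticity.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(size=n), rng.uniform(size=n)])
y = X @ np.array([0.5, 1.0, -2.0]) + rng.normal(0, 0.1, n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ b
resid = y - fitted

# Basic diagnostics: residual mean ~ 0; no linear trend of the
# residuals against the fitted values.
print(resid.mean())
print(np.corrcoef(fitted, resid)[0, 1])
```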

In the past few years PLS, a multiblock, multivariate regression model solved by partial least squares, has found application in various fields of chemistry (1-7). This method can be viewed as an extension and generalization of other commonly used multivariate statistical techniques, such as regression solved by least squares and principal component analysis. PLS has several advantages over the ordinary least squares solution; therefore, it has become more and more popular for solving regression models in chemical problems. [Pg.271]
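A one-component sketch of the PLS idea (data and dimensions invented for illustration): the weight vector is chosen so that the score t = X w has maximal covariance with y, after which y is regressed on t by an inner least-squares step.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -0.3]) + 0.1 * rng.normal(size=40)
Xc = X - X.mean(axis=0)
yc = y - y.mean()

# One PLS component: w maximizes the covariance between t = Xc w
# and y (the PLS covariance criterion), and is proportional to Xc' y.
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                       # latent variable (score)
q = (t @ yc) / (t @ t)           # inner regression of y on t
print(q)                         # one-component prediction: yhat = q * t
```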

Traditionally, the determination of a difference in costs between groups has been made using the Student's t-test or analysis of variance (ANOVA) (univariate analysis) and ordinary least-squares regression (multivariable analysis). The recent proposal of the generalized linear model promises to improve the predictive power of multivariable analyses. [Pg.49]

Regression analysis often is used to assess differences in costs, in part because the sample size needed to detect economic differences may be larger than the sample needed to detect clinical differences (i.e., to overcome power problems). Traditionally, ordinary least-squares regression has been used to predict costs (or their log) as a function of the treatment group while controlling for covariables such as... [Pg.50]

Table 4 shows selected results of an ordinary least-squares regression predicting hospital costs... [Pg.50]

Regression techniques provide models for quantitative predictions. The ordinary least squares (OLS) method is probably the most widely used and studied historically. Nevertheless, it presents a number of restrictions that often limit its applicability to artificial tongue data. [Pg.93]

Multiple regression, also called ordinary least squares, can frequently provide reasonably precise, reliable estimates of coefficients even if the data analyzed are unbalanced,... [Pg.301]

Ordinary least squares regression: Weighting variable = none ...


See other pages where Ordinary Least-Squares Regression is mentioned: [Pg.8]    [Pg.87]    [Pg.714]    [Pg.89]    [Pg.342]    [Pg.371]    [Pg.582]    [Pg.33]    [Pg.26]    [Pg.133]    [Pg.134]    [Pg.164]    [Pg.203]    [Pg.219]    [Pg.300]    [Pg.308]    [Pg.275]    [Pg.255]    [Pg.50]    [Pg.79]    [Pg.183]    [Pg.400]    [Pg.674]    [Pg.309]    [Pg.234]    [Pg.10]





© 2024 chempedia.info