Ordinary least-squares

OLS is the method that most novice modelers are familiar with. The model is as given in Eq. [13]. From the N objects measured on p variables one finds estimates of the regression coefficients, i.e., $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p$, where the hats denote values calculated from the data. The technique that obtains these coefficients is generally the method of least squares (see, e.g., p. 218 of Dillon and Goldstein), and the solution can be written succinctly in matrix form as $\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{y}$. [Pg.311]
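As an illustration of this closed-form solution, here is a minimal NumPy sketch (the data are invented, not from the source); it computes $\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{y}$ directly and, more stably, with a least-squares solver:

```python
import numpy as np

# Invented example: N = 20 objects measured on p = 2 descriptor variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=20)

# Prepend a column of ones so that the intercept beta_0 is estimated as well.
X1 = np.column_stack([np.ones(len(X)), X])

# Closed-form OLS solution: beta_hat = (X^T X)^{-1} X^T y.
beta_hat = np.linalg.solve(X1.T @ X1, X1.T @ y)

# In practice a QR/SVD-based solver is numerically preferable to the explicit inverse.
beta_hat_lstsq, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(beta_hat)        # approximately [1.0, 2.0, -0.5]
print(beta_hat_lstsq)  # same coefficients
```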

The error terms in Eq. [13] are independently and identically distributed with a normal distribution having a mean of zero and a variance of 1. Statisticians write this simply as: the $\varepsilon_i$ are i.i.d. N(0,1). This assumption allows, for example, the usual t-tests to be performed. [Pg.311]

The p descriptor variables were chosen without recourse to the information in the response variable, i.e., they were chosen in an unsupervised manner. This point is crucial because it allows us to use the usual F-test and corresponding p-values to assess the significance of the regression. There will be much more on this later in the chapter. [Pg.311]
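A short sketch of the overall F-test mentioned above, assuming SciPy is available and using invented data:

```python
import numpy as np
from scipy import stats

# Invented data: n observations, p descriptors chosen without looking at y.
rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.normal(size=(n, p))
y = 1.5 + X @ np.array([2.0, 0.0, -1.0]) + rng.normal(scale=0.5, size=n)

X1 = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
y_hat = X1 @ beta_hat

# Overall F-test: does the regression explain more than the mean alone?
ss_reg = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
ss_err = np.sum((y - y_hat) ** 2)          # residual sum of squares
F = (ss_reg / p) / (ss_err / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)      # upper-tail probability

print(f"F = {F:.2f}, p = {p_value:.3g}")
```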

Measurement errors in the descriptor variables are much smaller than in the response variables. [Pg.311]


The ordinary least-squares estimate (OLSE) $\hat{\theta}$ of $\theta$ minimizes (globally over $\theta \in \Theta$)... [Pg.79]

One of the earliest interpretations of latent vectors is that of lines of closest fit [9]. Indeed, if the inertia along $v_i$ is maximal, then the inertia along all other directions perpendicular to $v_i$ must be minimal. This is similar to the regression criterion in orthogonal least squares regression, which minimizes the sum of squared deviations perpendicular to the regression line (Section 8.2.11). In ordinary least squares regression one minimizes the sum of squared deviations from the regression line in the direction of the dependent measurement, which assumes that the independent measurement is without error. Similarly, the plane formed by $v_i$ and $v_j$ is a plane of closest fit, in the sense that the sum of squared deviations perpendicular to the plane is minimal. Since latent vectors $v_i$ contribute... [Pg.106]
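The contrast between the two criteria can be sketched as follows; the data are invented, and the line of closest fit is obtained from the first latent vector of the centered data (via SVD), which is one standard way to compute an orthogonal fit:

```python
import numpy as np

# Invented bivariate data with noise in both coordinates.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50) + rng.normal(scale=0.3, size=50)
y = 2.0 * np.linspace(0, 10, 50) + 1.0 + rng.normal(scale=0.3, size=50)

# Ordinary least squares: minimizes vertical deviations (y-direction only).
slope_ols, intercept_ols = np.polyfit(x, y, 1)

# Line of closest fit via the first latent vector (orthogonal regression):
# minimizes deviations perpendicular to the line.
data = np.column_stack([x, y])
center = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - center)
v1 = vt[0]                                # direction of maximal inertia
slope_orth = v1[1] / v1[0]
intercept_orth = center[1] - slope_orth * center[0]

print(f"OLS:        y = {slope_ols:.3f} x + {intercept_ols:.3f}")
print(f"Orthogonal: y = {slope_orth:.3f} x + {intercept_orth:.3f}")
```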

We have seen that PLS regression (covariance criterion) forms a compromise between ordinary least squares regression (OLS, correlation criterion) and principal components regression (PCR, variance criterion). This inspired Stone and Brooks [15] to devise a method in which a continuum of models can be generated, embracing OLS, PLS and PCR. To this end the PLS covariance criterion, $\mathrm{cov}(t, y) = s_t s_y r$, is modified into a criterion T = r... (For... [Pg.342]
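As a rough illustration of the three regression families the continuum spans, the following sketch uses scikit-learn (an assumed dependency) on invented data; it is not an implementation of continuum regression itself:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline

# Invented data: 40 samples, 6 correlated x-variables, one response.
rng = np.random.default_rng(2)
T = rng.normal(size=(40, 2))                    # two underlying factors
X = T @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(40, 6))
y = T @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=40)

ols = LinearRegression().fit(X, y)                       # correlation criterion
pcr = make_pipeline(PCA(n_components=2),
                    LinearRegression()).fit(X, y)        # variance criterion
pls = PLSRegression(n_components=2).fit(X, y)            # covariance criterion

for name, model in [("OLS", ols), ("PCR", pcr), ("PLS", pls)]:
    print(name, "R^2 =", round(model.score(X, y), 4))
```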

Ordinary least squares regression of MV upon MX produces a slope of 9.32 and an intercept of 2.36. From these we derive the parameters of the simple Michaelis-Menten reaction (eq. (39.116)) ... [Pg.504]

The expression $\mathbf{x}^{\mathrm{T}}(j)\mathbf{P}(j-1)\mathbf{x}(j)$ in eq. (41.4) represents the variance of the predictions, $\hat{y}(j)$, at the value $\mathbf{x}(j)$ of the independent variable, given the uncertainty in the regression parameters $\mathbf{P}(j)$. This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term $r(j)$ is the variance of the experimental error in the response $y(j)$. How to select the value of $r(j)$ and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix only depends on the design of the experiments given by $\mathbf{x}$ and on the variance of the experimental error given by $r$, which is in accordance with the ordinary least-squares procedure. [Pg.579]
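A minimal sketch of a recursive least-squares update of the kind described, with invented straight-line data; the symbols b, P, x, and r loosely follow the text (coefficients, their variance-covariance matrix, the design row, and the error variance), and the exact forms of eqs. (41.3) and (41.4) are not reproduced here:

```python
import numpy as np

def rls_update(b, P, x, y, r):
    """One recursive least-squares step for a new observation (x, y)."""
    Px = P @ x                       # vector P x
    denom = x @ Px + r               # scalar x^T P x + r: no matrix inversion needed
    k = Px / denom                   # gain vector
    innovation = y - x @ b           # prediction error for the new observation
    b_new = b + k * innovation       # updated coefficient estimates
    P_new = P - np.outer(k, x) @ P   # updated variance-covariance matrix
    return b_new, P_new

# Invented straight-line example: y = 2 + 3 t with a little noise.
rng = np.random.default_rng(3)
b = np.zeros(2)
P = 1e6 * np.eye(2)                  # large initial uncertainty in the coefficients
for t in np.linspace(0, 1, 20):
    x = np.array([1.0, t])           # design row: intercept and slope terms
    y = 2.0 + 3.0 * t + rng.normal(scale=0.05)
    b, P = rls_update(b, P, x, y, r=0.05 ** 2)

print(b)   # approaches [2, 3]
```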

Ordinary least squares: linear projection; fixed shape, linear; a: maximum squared correlation between projected inputs and output; θ: minimum output prediction error. [Pg.34]

In analytical practice, linear calibration by ordinary least squares is mostly used. Therefore, the estimates are summarized before the uncertainties of the estimates are given ... [Pg.159]

The linearity of a method is defined as its ability to provide measurement results that are directly proportional to the concentration of the analyte, or are directly proportional after some type of mathematical transformation. Linearity is usually documented as the ordinary least squares (OLS) curve, or simply as the linear regression curve, of the measured instrumental responses (either peak area or height) as a function of increasing analyte concentration [22, 23]. The use of peak areas is preferred over the use of peak heights for making the calibration curve [24]. [Pg.249]
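A minimal sketch of documenting such an OLS calibration line, assuming a reasonably recent SciPy and invented peak-area data:

```python
import numpy as np
from scipy import stats

# Invented calibration data: analyte concentration (mg/L) vs. peak area.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([1020, 2050, 4100, 10300, 20400, 41100])

fit = stats.linregress(conc, area)

print(f"slope     = {fit.slope:.1f} +/- {fit.stderr:.1f}")
print(f"intercept = {fit.intercept:.1f} +/- {fit.intercept_stderr:.1f}")
print(f"r         = {fit.rvalue:.5f}")

# Residuals about the regression line are often inspected as well.
residuals = area - (fit.intercept + fit.slope * conc)
```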

Regression can be performed directly with the values of the variables (ordinary least-squares regression, OLS) but in the most powerful methods, such as principal component regression (PCR) and partial least-squares regression (PLS), it is done via a small set of intermediate linear latent variables (the components). This approach has important advantages ... [Pg.118]

For only one x-variable and one y-variable—simple x/y-regression—the basic equations are summarized in Section 4.3.1. Ordinary least-squares (OLS) regression is the classical method, but also a number of robust... [Pg.119]

Stone, M., Brooks, R. J., J. R. Statist. Soc. B, 52, 1990, 237-269. Continuum regression: cross-validated sequentially constructed prediction embracing ordinary least squares, partial least squares and principal component regression. [Pg.207]

Ordinary least squares regression requires constant variance across the range of data. This has typically not been satisfied with chromatographic data (4, 9, 10). Some have adjusted data to constant variance by a weighted least squares method ( ). The other general adjustment method has been transformation of the data; the log-log transformation is commonly used (9, 10). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]

The transformed response values were regressed on the transformed amount values using the simple linear regression model and ordinary least squares estimation. The standard deviation of the response values (about the regression line) was calculated, and plots were formed of the transformed response values and of the residuals versus transformed amounts. [Pg.136]

This situation shows two problems. First, the application of ordinary least squares estimation, which requires constant variance, is not appropriate with untreated data. Second, the large variance of the largest values in such data excessively controls the direction or slope of the graph. [Pg.144]
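The two common remedies mentioned earlier, weighting and the log-log transformation, can be sketched as follows on invented data with relative (non-constant) error; the 1/x-style weights and the specific numbers are illustrative assumptions:

```python
import numpy as np

# Invented chromatographic-style data: relative error, so the variance
# grows with the response.
rng = np.random.default_rng(4)
amount = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
response = 100.0 * amount * (1 + rng.normal(scale=0.03, size=amount.size))

# 1) Unweighted OLS: the largest responses dominate the fit.
slope_ols, intercept_ols = np.polyfit(amount, response, 1)

# 2) Weighted least squares (np.polyfit expects weights proportional to
#    1/sigma, so 1/x here corresponds to relative error in the response).
slope_wls, intercept_wls = np.polyfit(amount, response, 1, w=1.0 / amount)

# 3) Log-log transformation, then ordinary least squares on the transformed data.
b_log, a_log = np.polyfit(np.log10(amount), np.log10(response), 1)

print("OLS:    ", slope_ols, intercept_ols)
print("WLS:    ", slope_wls, intercept_wls)
print("log-log: slope (power) =", b_log, " intercept =", a_log)
```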

To model the relationship between PLA and PLR, we used each of these in ordinary least squares (OLS) multiple regression to explore the relationship between the dependent variables (mean PLR or mean PLA) and the independent variables (Berry and Feldman, 1985). OLS regression was used because the data satisfied the OLS assumptions for the model as the best linear unbiased estimator (BLUE): the distribution of errors (residuals) is normal, the errors are uncorrelated with each other, and they are homoscedastic (constant variance among residuals) with a mean of 0. We also analyzed predicted values plotted against residuals, as they are a better indicator of non-normality in aggregated data, and found them also to be homoscedastic and independent of one another. [Pg.152]
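A minimal sketch of residual checks of this kind (normality and roughly constant variance of residuals, inspected against predicted values), using invented data rather than the PLA/PLR data of the study:

```python
import numpy as np
from scipy import stats

# Invented data standing in for the regression described above.
rng = np.random.default_rng(5)
X = rng.normal(size=(60, 3))
y = 0.5 + X @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.3, size=60)

X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
predicted = X1 @ beta
residuals = y - predicted

# Normality of residuals (Shapiro-Wilk test).
w_stat, p_norm = stats.shapiro(residuals)

# A crude homoscedasticity check: compare residual spread in the lower
# and upper halves of the predicted values.
order = np.argsort(predicted)
lower, upper = residuals[order[:30]], residuals[order[30:]]
print(f"Shapiro-Wilk p = {p_norm:.3f}")
print(f"residual SD (low/high predicted) = {lower.std():.3f} / {upper.std():.3f}")
```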

In the past few years, PLS, a multiblock, multivariate regression model solved by partial least squares, has found application in various fields of chemistry (1-7). This method can be viewed as an extension and generalization of other commonly used multivariate statistical techniques, such as regression solved by least squares and principal component analysis. PLS has several advantages over the ordinary least squares solution; it has therefore become more and more popular for solving regression models in chemical problems. [Pg.271]

Using data shown in Figure 13.4, we used ordinary least squares to estimate the effect of the probability of survival to age 65 on life expectancy at birth and found a significantly positive association between these two measures: a 10% increase in the probability of survival to age 65 was associated with a 1.3% increase in life expectancy. This result, combined with the estimates in Tables 13.2 and 13.3, implies that a 10% increase in the stock of pharmaceutical innovation would lead to an increase in life expectancy at birth of 0.10% (i.e., 0.8% × 1.3%) to 0.18% (1.4% × 1.3%). [Pg.255]

Traditionally, the determination of a difference in costs between groups has been made using Student's t-test or analysis of variance (ANOVA) (univariate analysis) and ordinary least-squares regression (multivariable analysis). The recent proposal of the generalized linear model promises to improve the predictive power of multivariable analyses. [Pg.49]


Ordinary least squares, input-output

Ordinary least squares regression independent variables

Ordinary least squares regression values, responses

Ordinary least-squares estimated using

Ordinary least-squares linear regression coefficients

Ordinary least-squares method

Ordinary least-squares regression analysis

Ordinary least-squares weights

Regression ordinary least squares

Trend Evaluation with Ordinary Least Squares Regression

Useful Formulae for Ordinary, Least-Squares Regression

Weighting ordinary least-squares weights
