
Ordinary least-squares method

The goodness of fit of PLS models is expressed as a prediction error, much as it is for ordinary least-squares methods. Using the so-called cross-validation test, one can determine both the number of significant vectors in U and T and the error of prediction. [Pg.200]
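
A minimal sketch of the cross-validation idea, using scikit-learn and a simulated data set (both are assumptions for illustration, not taken from the cited source): the number of PLS components is scanned and the cross-validated prediction error (PRESS) is reported for each.

```python
# Sketch: choosing the number of significant PLS components by cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))                              # 50 samples, 20 predictors
y = X[:, :3] @ np.array([1.0, 0.5, -0.8]) + rng.normal(scale=0.1, size=50)

for n_comp in range(1, 8):
    pls = PLSRegression(n_components=n_comp)
    y_cv = cross_val_predict(pls, X, y, cv=5)              # left-out predictions
    press = np.sum((y - np.ravel(y_cv)) ** 2)              # prediction error sum of squares
    print(f"{n_comp} components: PRESS = {press:.3f}")
# The number of significant components is where PRESS stops decreasing appreciably.
```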

Ridge regression analysis is used when the independent variables are highly interrelated and stable estimates of the regression coefficients cannot be obtained by ordinary least-squares methods (Rozeboom, 1979; Pfaffenberger and Dielman, 1990). It is a biased estimator that trades a small bias for a much smaller variance, giving estimates with better precision and accuracy. [Pg.169]
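
A minimal sketch of the ridge estimator on deliberately collinear data; the simulated variables and the penalty value lam are illustrative assumptions, not part of the cited works.

```python
# Sketch: OLS versus ridge regression when two predictors are nearly collinear.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)       # nearly identical to x1
X = np.column_stack([np.ones(100), x1, x2])
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(scale=0.5, size=100)

# OLS: (X'X)^-1 X'y -- unstable because X'X is nearly singular
b_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: (X'X + lam*I)^-1 X'y -- biased, but with much smaller variance
lam = 1.0
I = np.eye(X.shape[1])
I[0, 0] = 0.0                                    # usually the intercept is not penalized
b_ridge = np.linalg.solve(X.T @ X + lam * I, X.T @ y)

print("OLS coefficients:  ", b_ols)
print("Ridge coefficients:", b_ridge)
```

Repeating the simulation shows the OLS coefficients of x1 and x2 swinging wildly from run to run while their ridge counterparts remain stable, which is the behaviour referred to above.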

With this transformation, the linear regression model estimated by the ordinary least-squares method is valid. However, to employ it we need to know the population serial correlation coefficient, ρ, which we estimate by r. Equations 3.9 through 3.11 for the population will then be replaced by their sample estimates ... [Pg.125]

Least-squares methods used to derive the current interbank curve are very similar to those used to derive the current non-default Treasury curve. After converting market data into equivalent zero-coupon rates, the zero-coupon yield curve is derived in a two-stage process: the zero-coupon rates are first written as a B-spline function and then fitted by an ordinary least-squares method. [Pg.756]
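
The two-stage process can be sketched as follows, assuming cubic B-splines and using invented maturities, zero-coupon rates, and knot positions (none of which come from the source); the basis functions are evaluated with SciPy and the coefficients are fitted by ordinary least squares.

```python
# Sketch: zero-coupon rates written as a B-spline and fitted by OLS.
import numpy as np
from scipy.interpolate import BSpline

maturities = np.array([0.5, 1, 2, 3, 5, 7, 10, 15, 20, 30.0])               # years
zc_rates = np.array([3.1, 3.3, 3.6, 3.8, 4.1, 4.3, 4.5, 4.6, 4.65, 4.7])    # percent

k = 3                                                    # cubic B-splines
interior = np.array([2.0, 5.0, 10.0])                    # illustrative interior knots
t = np.concatenate([[maturities[0]] * (k + 1), interior, [maturities[-1]] * (k + 1)])
n_basis = len(t) - k - 1

# Design matrix: each column is one B-spline basis function evaluated at the maturities
B = np.column_stack([
    BSpline(t, np.eye(n_basis)[j], k)(maturities) for j in range(n_basis)
])

coef, *_ = np.linalg.lstsq(B, zc_rates, rcond=None)      # ordinary least-squares fit
fitted_curve = BSpline(t, coef, k)                       # smooth zero-coupon curve
print(fitted_curve(np.array([1.5, 4.0, 12.0])))          # rates at intermediate maturities
```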

The ordinary least squares method is the simplest and most commonly used way to fit theoretical equations to experimental data, and it rests on three basic premises ... [Pg.76]

We have seen that PLS regression (covariance criterion) forms a compromise between ordinary least squares regression (OLS, correlation criterion) and principal components regression (variance criterion). This inspired Stone and Brooks [15] to devise a method that generates a continuum of models embracing OLS, PLS and PCR. To this end the PLS covariance criterion, cov(t, y) = s_t s_y r_ty, is modified into a criterion T = ... (For... [Pg.342]

The expression x^T(j)P(j - 1)x(j) in eq. (41.4) represents the variance of the prediction, y(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters described by P(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j), and its influence on the final result, are discussed later. The expression between parentheses is a scalar; therefore, the recursive least squares method does not require the inversion of a matrix. Inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix depends only on the design of the experiments, given by x, and on the variance of the experimental error, given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]
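
A compact sketch of one recursive least-squares step in the notation of the excerpt (P is the variance-covariance matrix of the parameters, r the variance of the measurement error); the straight-line data and starting values are invented for illustration.

```python
# Sketch of the recursive least-squares update for one new observation (x, y).
import numpy as np

def rls_update(b, P, x, y, r):
    x = x.reshape(-1, 1)
    s = float(x.T @ P @ x) + r          # x'Px + r: a scalar, so no matrix inversion
    k = (P @ x) / s                     # gain vector
    b = b + k.ravel() * (y - float(x.T @ b.reshape(-1, 1)))   # parameter update
    P = P - k @ x.T @ P                 # variance-covariance update
    return b, P

# Illustrative use on simulated straight-line data y = 1 + 2x + noise
rng = np.random.default_rng(2)
b = np.zeros(2)
P = np.eye(2) * 1e3                     # large initial parameter uncertainty
for _ in range(200):
    xi = np.array([1.0, rng.uniform(0, 10)])
    yi = 1.0 + 2.0 * xi[1] + rng.normal(scale=0.5)
    b, P = rls_update(b, P, xi, yi, r=0.25)
print(b)                                # approaches [1, 2]
```

The only division is by the scalar s, which is the point made above: the recursion never inverts a matrix.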

The linearity of a method is defined as its ability to provide measurement results that are directly proportional to the concentration of the analyte, either as such or after some mathematical transformation. Linearity is usually documented as the ordinary least squares (OLS) curve, or simply the linear regression curve, of the measured instrumental responses (either peak area or height) as a function of increasing analyte concentration [22, 23]. The use of peak areas is preferred over peak heights for constructing the calibration curve [24]. [Pg.249]
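
A minimal calibration sketch, with invented concentrations and peak areas: the OLS line is fitted, the coefficient of determination is computed as a linearity check, and an unknown is back-calculated.

```python
# Sketch of an OLS calibration line: peak area versus analyte concentration.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])         # analyte concentration
area = np.array([0.02, 0.26, 0.51, 1.03, 2.49, 5.05])    # measured peak areas (invented)

slope, intercept = np.polyfit(conc, area, 1)              # ordinary least-squares line
predicted = np.polyval([slope, intercept], conc)
ss_res = np.sum((area - predicted) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot                         # linearity check

unknown_area = 1.80
unknown_conc = (unknown_area - intercept) / slope         # back-calculate concentration
print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r_squared:.5f}")
print(f"estimated concentration: {unknown_conc:.3f}")
```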

Regression can be performed directly with the values of the variables (ordinary least-squares regression, OLS), but in the most powerful methods, such as principal component regression (PCR) and partial least-squares regression (PLS), it is done via a small set of intermediate linear latent variables (the components). This approach has important advantages ... [Pg.118]
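
The latent-variable route can be sketched as principal component regression: compress the x-variables to a few PCA scores and regress y on those scores by OLS. The simulated data and the choice of five components are assumptions made only for illustration.

```python
# Sketch of principal component regression (PCR): OLS on a small set of latent variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X0 = rng.normal(size=(60, 10))
X = np.hstack([X0, X0 @ rng.normal(size=(10, 20)) + 0.01 * rng.normal(size=(60, 20))])
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.2, size=60)

pca = PCA(n_components=5)                    # illustrative number of components
T = pca.fit_transform(X)                     # scores = latent variables
pcr = LinearRegression().fit(T, y)           # ordinary least squares on the scores

y_hat = pcr.predict(pca.transform(X))
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("PCR R^2 on the training data:", round(r2, 3))
```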

For only one x-variable and one y-variable—simple x/y-regression—the basic equations are summarized in Section 4.3.1. Ordinary least-squares (OLS) regression is the classical method, but also a number of robust... [Pg.119]

Ordinary least squares regression requires constant variance across the range of the data. This requirement has typically not been satisfied by chromatographic data (4, 9, 10). Some workers have adjusted the data to constant variance with a weighted least squares method ( ). The other general adjustment has been transformation of the data; the log-log transformation is commonly used (9, 10). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]
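
The two adjustments mentioned above can be sketched side by side on invented heteroscedastic calibration data: 1/x^2 weighting (one common proportional-error assumption) and the log-log transformation followed by ordinary least squares.

```python
# Sketch: weighted least squares versus a log-log transformation for heteroscedastic data.
import numpy as np

conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0, 100.0])
resp = np.array([0.11, 0.48, 1.05, 4.8, 10.6, 48.0, 104.0])   # invented responses

# Weighted least squares with 1/x^2 weights (proportional-error assumption)
w = 1.0 / conc**2
W = np.diag(w)
X = np.column_stack([np.ones_like(conc), conc])
b_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ resp)

# Log-log transformation followed by ordinary least squares
b_log = np.polyfit(np.log10(conc), np.log10(resp), 1)

print("WLS slope, intercept:         ", b_wls[1], b_wls[0])
print("log-log slope, log-intercept: ", b_log[0], b_log[1])
```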

Therefore, uniaxially oriented samples, which give the so-called fiber pattern in X-ray diffraction, should be prepared for this purpose. The diffraction intensities from the PPX specimen of the β-form, which had been elongated six times at 285°C, were measured by an ordinary photographic method. The reflections were indexed on the basis of the lattice constants a = b = 2.052 nm, c (chain axis) = 0.655 nm, α = β = 90°, and γ = 120°. Reflections that could not be separated were treated as groups in the least-squares computation. [Pg.466]

In the past few years PLS, a multiblock, multivariate regression model solved by partial least squares, has found application in various fields of chemistry (1-7). This method can be viewed as an extension and generalization of other commonly used multivariate statistical techniques, such as regression solved by least squares and principal component analysis. PLS has several advantages over the ordinary least squares solution and has therefore become more and more popular for solving regression models in chemical problems. [Pg.271]

Regression techniques provide models for quantitative predictions. The ordinary least squares (OLS) method is probably the most widely used and the most studied historically. Nevertheless, it has a number of restrictions that often limit its applicability to artificial tongue data. [Pg.93]

In the ordinary weighted least squares method, the most probable values of the source contributions are obtained by minimizing the sum of squared differences between the measured ambient concentrations and those calculated from Equation 1, each difference weighted by the analytical uncertainty of the corresponding ambient measurement. This solution has the added benefit that the measured uncertainty of the ambient concentrations can be propagated through the calculations to give a confidence interval around the calculated source contributions. [Pg.92]
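
A hedged sketch of the weighting and of the uncertainty propagation; the source-profile matrix, ambient concentrations, and analytical uncertainties are invented stand-ins for Equation 1 and the real measurements.

```python
# Sketch: weighted least squares with uncertainty propagation to the source contributions.
import numpy as np

A = np.array([[0.60, 0.05],        # species-by-source profile matrix (stand-in for Equation 1)
              [0.20, 0.30],
              [0.05, 0.50],
              [0.15, 0.15]])
c = np.array([4.1, 2.6, 1.9, 1.3])           # measured ambient concentrations (invented)
sigma = np.array([0.4, 0.3, 0.2, 0.2])       # analytical uncertainties (invented)

W = np.diag(1.0 / sigma**2)                  # weights = inverse measurement variances
cov_s = np.linalg.inv(A.T @ W @ A)           # covariance of the source contributions
s = cov_s @ (A.T @ W @ c)                    # weighted least-squares estimate
stderr = np.sqrt(np.diag(cov_s))

for j, (sj, se) in enumerate(zip(s, stderr)):
    print(f"source {j}: {sj:.2f} +/- {1.96 * se:.2f} (approx. 95% interval)")
```

The matrix inverted here, (A'WA)^-1, is what carries the measurement uncertainty through to the confidence intervals on the calculated source contributions.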

Is there any problem with ordinary least squares? Using the method you have described, fit the model above to the data in Table F5.1. Report your results. [Pg.97]

Because the model has both lagged dependent variables and autocorrelated disturbances, ordinary least squares will be inconsistent. Consistent estimates could be obtained by the method of instrumental variables; we can use x_{t-1} and x_{t-2} as the instruments for y_{t-1} and y_{t-2}. Efficient estimates can be obtained by a two-step procedure. We write the model as y_t - ρy_{t-1} = α(1 - ρ) + β(x_t - ρx_{t-1}) + γ(y_{t-1} - ρy_{t-2}) + δ(y_{t-2} - ρy_{t-3}) + ε_t. With a consistent estimator of ρ we could use FGLS: the residuals from the IV estimator can be used to estimate ρ, and OLS on the transformed data is then asymptotically equivalent to GLS. The method of Hatanaka discussed in the text is another possibility. [Pg.97]
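
The two-step procedure can be sketched on simulated data; the parameter values, sample size, and series are arbitrary, and the just-identified IV estimator and the quasi-differencing are written out directly in NumPy rather than taken from any particular package.

```python
# Sketch: IV estimation followed by FGLS on quasi-differenced data.
import numpy as np

rng = np.random.default_rng(4)
n_total, rho = 520, 0.6
alpha, beta, gamma, delta = 1.0, 0.8, 0.4, 0.2
x = rng.normal(size=n_total)
u = np.zeros(n_total)
y = np.zeros(n_total)
for t in range(2, n_total):                         # simulate the model with AR(1) errors
    u[t] = rho * u[t - 1] + rng.normal(scale=0.5)
    y[t] = alpha + beta * x[t] + gamma * y[t - 1] + delta * y[t - 2] + u[t]

t = np.arange(20, n_total)                          # drop a burn-in period
Y = y[t]
X = np.column_stack([np.ones(t.size), x[t], y[t - 1], y[t - 2]])   # regressors
Z = np.column_stack([np.ones(t.size), x[t], x[t - 1], x[t - 2]])   # instruments

# Step 1: just-identified IV estimator b_IV = (Z'X)^-1 Z'Y
b_iv = np.linalg.solve(Z.T @ X, Z.T @ Y)
resid = Y - X @ b_iv

# Step 2: estimate rho from the IV residuals, quasi-difference, and apply OLS (FGLS)
r = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
Y_star = Y[1:] - r * Y[:-1]
X_star = X[1:] - r * X[:-1]                         # the constant column becomes 1 - r
b_fgls = np.linalg.solve(X_star.T @ X_star, X_star.T @ Y_star)

print("IV estimates:  ", np.round(b_iv, 3))
print("FGLS estimates:", np.round(b_fgls, 3))       # compare with (1.0, 0.8, 0.4, 0.2)
```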

Describe a method of estimating the parameters. Is ordinary least squares consistent? [Pg.98]

Examples are ordinary least squares (OLS) and classical least squares (CLS). Explicit methods provide transparent models with easily interpretable results. However, highly controlled experimental conditions, high-quality spectra, and accurate concentration measurements of all components in the sample matrix may be difficult to obtain, particularly in biomedical applications. [Pg.337]

Explicit Calibration Methods. Ordinary least squares can be employed if the spectra of all components in the sample matrix are known. For a pure-component spectral matrix, P, the regression matrix, B, can be obtained as the pseudoinverse of P from equation (12.2) ... [Pg.337]
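
A minimal sketch of the pseudoinverse calibration, assuming the convention that the rows of P are pure-component spectra; the synthetic Gaussian bands and the mixture are invented, and equation (12.2) itself is not reproduced here.

```python
# Sketch of classical least squares: concentrations from a mixture spectrum via pinv(P).
import numpy as np

wavelengths = np.linspace(400, 700, 151)

def band(center, width):
    return np.exp(-((wavelengths - center) / width) ** 2)

P = np.vstack([band(480, 25), band(550, 30), band(630, 20)])   # 3 pure-component spectra
B = np.linalg.pinv(P)                        # regression matrix (wavelengths x components)

# A mixture spectrum is modelled as a = c @ P; concentrations are recovered as a @ B
c_true = np.array([0.2, 0.5, 0.3])
a = c_true @ P + np.random.default_rng(5).normal(scale=0.002, size=P.shape[1])
c_est = a @ B
print("estimated concentrations:", np.round(c_est, 3))
```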

Multiple Pass Analysis. Pike and coworkers (13) have provided a method to increase somewhat the resolution of the ordinary least squares algorithm. It was noted that any reasonable set of assumed particle sizes constitutes a basis set for the inversion (within experimental error). Thus, the data can be analyzed a number of times, with a different basis set each time, and the results combined. A statistically more probable solution results from an average of the several equally likely solutions. Although this "multiple pass analysis" helps locate the peaks of the distribution with better resolution and provides a smoother presentation of the result, it can still provide only limited resolution without the use of a non-negatively constrained least squares technique. We have shown, however, that the combination of the non-negatively constrained calculation and the multiple pass analysis gives the advantages of both. [Pg.92]
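
A hedged sketch of combining non-negatively constrained least squares with the multiple-pass averaging described above; the kernel, particle sizes, and noise level are all made up, so this illustrates the procedure rather than the cited authors' actual inversion.

```python
# Sketch: NNLS inversion repeated with shifted basis sets, then averaged.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
q = np.linspace(0.002, 0.05, 40)                     # measurement channels (illustrative)

def kernel(sizes):
    return np.exp(-np.outer(q, sizes))               # toy kernel, not a real scattering kernel

true_sizes = np.array([50.0, 200.0])
data = kernel(true_sizes) @ np.array([1.0, 0.5]) + rng.normal(scale=1e-3, size=q.size)

grid = np.linspace(20, 400, 800)                     # fine grid for the averaged result
accum = np.zeros_like(grid)
n_pass = 10
for p in range(n_pass):
    sizes = np.linspace(20, 400, 20) + 2.0 * p       # a shifted basis set for each pass
    A = kernel(sizes)
    xsol, _ = nnls(A, data)                          # non-negatively constrained solution
    accum += np.interp(grid, sizes, xsol)            # combine the equally likely solutions
distribution = accum / n_pass
print("dominant size in the averaged distribution:", grid[np.argmax(distribution)])
```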

Brenneman and Nair (2001) proposed a strategy that combines their modified version of Harvey's method with joint location and dispersion modeling for a log-linear dispersion model. After fitting a location model with ordinary least squares regression, they recommended an initial check to see whether there are sufficient degrees of freedom even to consider looking for dispersion effects. The condition they... [Pg.39]

