Big Chemical Encyclopedia


Least squares methods regression

If the rate law depends on the concentration of more than one component, and it is not possible to use the method of having one component in excess, a linearized least squares method can be used. The purpose of regression analysis is to determine a functional relationship between the dependent variable (e.g., the reaction rate) and the various independent variables (e.g., the concentrations). [Pg.171]
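The linearization described above can be sketched numerically. Assuming a power-law rate r = k·CA^a·CB^b (the rate law, data, and variable names below are purely illustrative), taking logarithms gives ln r = ln k + a·ln CA + b·ln CB, which is linear in the unknown parameters and can be fitted by ordinary least squares:

```python
import numpy as np

# Hypothetical noise-free rate data generated from r = k * CA**a * CB**b
CA = np.array([0.1, 0.2, 0.4, 0.1, 0.2])
CB = np.array([0.1, 0.1, 0.1, 0.3, 0.3])
k_true, a_true, b_true = 2.0, 1.0, 2.0
r = k_true * CA**a_true * CB**b_true

# Linearize: ln r = ln k + a*ln CA + b*ln CB, then solve by least squares
X = np.column_stack([np.ones_like(CA), np.log(CA), np.log(CB)])
coef, *_ = np.linalg.lstsq(X, np.log(r), rcond=None)
ln_k, a, b = coef   # with exact data, these recover ln(k_true), a_true, b_true
```

With real (noisy) rate data the recovered exponents are estimates, not exact values, and the log transform also reweights the errors, a point taken up again in the discussion of weighted regression below.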

The purpose of Partial Least Squares (PLS) regression is to find a small number A of relevant factors that (i) are predictive for Y and (ii) utilize X efficiently. The method effectively achieves a canonical decomposition of X into a set of orthogonal factors which are used for fitting Y. In this respect PLS is comparable with CCA, RRR and PCR, the difference being that the factors are chosen according to yet another criterion. [Pg.331]
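The extraction of a small number A of orthogonal, Y-predictive factors can be sketched with the NIPALS algorithm for a single response (PLS1). The function below is an illustrative reconstruction under standard conventions, not the exact algorithm of the cited text:

```python
import numpy as np

def pls1(X, y, A):
    """Minimal PLS1 (NIPALS) sketch: extract A orthogonal X-factors predictive of y."""
    X = X - X.mean(axis=0)          # center both blocks
    y = y - y.mean()
    Xr, yr = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(A):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)       # weight: direction of maximal covariance with y
        t = Xr @ w                   # score vector (the "factor")
        p = Xr.T @ t / (t @ t)       # X-loading
        qa = yr @ t / (t @ t)        # y-loading
        Xr = Xr - np.outer(t, p)     # deflate X: remove the part explained by t
        yr = yr - qa * t             # deflate y
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # Regression coefficients expressed in the original (centered) X-space
    return W @ np.linalg.solve(P.T @ W, q)
```

When A equals the rank of X, the PLS solution coincides with ordinary least squares; the point of PLS is to stop at a much smaller A chosen by validation.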

Partial Least Squares (PLS) regression (Section 35.7) is one of the more recent advances in QSAR which has led to the now widely accepted method of Comparative Molecular Field Analysis (CoMFA). This method makes use of local physicochemical properties such as charge, potential and steric fields that can be determined on a three-dimensional grid that is laid over the chemical structures. The determination of steric conformation, by means of X-ray crystallography or NMR spectroscopy, and the quantum mechanical calculation of charge and potential fields are now performed routinely on medium-sized molecules [10]. Modern optimization and prediction techniques such as neural networks (Chapter 44) also have found their way into QSAR. [Pg.385]

The method of PCA can be used in QSAR as a preliminary step to Hansch analysis in order to determine the relevant parameters that must be entered into the equation. Principal components are by definition uncorrelated and, hence, do not pose the problem of multicollinearity. Instead of defining a Hansch model in terms of the original physicochemical parameters, it is often more appropriate to use principal components regression (PCR) which has been discussed in Section 35.6. An alternative approach is by means of partial least squares (PLS) regression, which will be more amply covered below (Section 37.4). [Pg.398]
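A compact sketch of principal components regression as described above: compute the principal components of the centered X-matrix (here via SVD), regress y on the first A uncorrelated scores, and map the coefficients back to the original variables. This is a generic illustration, not the specific procedure of Section 35.6:

```python
import numpy as np

def pcr(X, y, A):
    """Principal components regression sketch: regress y on the first A PC scores."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - xm, y - ym
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :A] * s[:A]            # scores on the first A principal components
    g = T.T @ yc / (s[:A] ** 2)     # OLS on the scores is trivial: they are orthogonal
    b = Vt[:A].T @ g                # back-transform to the original variable space
    b0 = ym - xm @ b                # intercept for uncentered data
    return b0, b
```

Because the scores are uncorrelated by construction, the regression on them involves no matrix inversion and no multicollinearity, which is exactly the advantage claimed in the text.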

The expression x^T(j)P(j-1)x(j) in eq. (41.4) represents the variance of the predictions, y(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters P(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j) and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix only depends on the design of the experiments given by x and on the variance of the experimental error given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]
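The inversion-free update can be sketched in code. The names follow the text's notation (b for the coefficients, P for their variance-covariance matrix, r for the error variance), but the function is an illustrative reconstruction of a standard recursive least-squares step, not eqs. (41.3)-(41.4) verbatim:

```python
import numpy as np

def rls_update(b, P, x, y, r):
    """One recursive least-squares step: x = design row, y = response, r = error variance."""
    v = x @ P @ x + r           # scalar: prediction variance plus measurement variance
    k = P @ x / v               # gain vector -- only a scalar division, no matrix inversion
    b = b + k * (y - x @ b)     # correct the coefficients with the prediction residual
    P = P - np.outer(k, x @ P)  # update the variance-covariance matrix of the parameters
    return b, P
```

Starting from b = 0 and a large initial P (expressing near-total ignorance), feeding in the observations one at a time converges to the ordinary least-squares solution, and P indeed depends only on the x values and on r, as the text notes.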

This is the situation we must handle. We cannot simply ignore one or more of these equations arbitrarily; dealing with them properly has become known variously as the Least Squares method, Multiple Least Squares, or Multiple Linear Regression. [Pg.30]

In this least squares method example the object is to calculate the terms β0, β1 and β2 which produce a prediction model yielding the smallest, or least, squared differences (residuals) between the actual analyte value cj and the predicted or expected concentration ŷj. To calculate the multiplier terms, or regression coefficients βj, for the model we can begin with the matrix notation ... [Pg.30]
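The matrix notation referred to above is elided in this excerpt, but it conventionally leads to the normal equations b = (X'X)^-1 X'c. The snippet below is a generic sketch with made-up calibration data; an exact three-parameter model is used so the fit is perfect:

```python
import numpy as np

# Hypothetical calibration data for c = b0 + b1*x1 + b2*x2 (values are illustrative)
X = np.array([[1.0, 0.2, 0.5],
              [1.0, 0.4, 0.1],
              [1.0, 0.6, 0.9],
              [1.0, 0.8, 0.3],
              [1.0, 1.0, 0.7]])    # leading column of ones carries the intercept b0
c = np.array([1.7, 1.5, 2.5, 2.1, 2.7])   # generated from b0 = b1 = b2 = 1

# Normal equations: b = (X'X)^-1 X'c; solve() avoids forming the explicit inverse
b = np.linalg.solve(X.T @ X, X.T @ c)
residuals = c - X @ b    # exact data, so the residuals are numerically ~0
```

With real data the residuals are nonzero, and minimizing their sum of squares is exactly what defines the estimates b.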

Apart from pharmacophore-based approaches, a variety of methods were applied to decipher important ligand features of PXR activation. VolSurf descriptor-based partial least squares (PLS) regression-based models pointed toward amide responsive regions that implicated good acceptor abilities as key variables [33]. [Pg.324]

For only one x-variable and one y-variable—simple x/y-regression—the basic equations are summarized in Section 4.3.1. Ordinary least-squares (OLS) regression is the classical method, but also a number of robust... [Pg.119]

The variable contains many severe outliers. Outliers in data can distort least-squares-based regression methods, and usually it is not possible to replace outliers by meaningful data values. [Pg.153]

Ordinary least squares regression requires constant variance across the range of data. This has typically not been satisfied with chromatographic data (4, 9, 10). Some have adjusted data to constant variance by a weighted least squares method ( ). The other general adjustment method has been transformation of the data; the log-log transformation is commonly used (9, 10). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]
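The weighted-least-squares adjustment mentioned above can be sketched as follows. The data are hypothetical chromatographic calibration points whose scatter grows with concentration, and the weights 1/c^2 correspond to the common assumption that the standard deviation is proportional to concentration:

```python
import numpy as np

# Illustrative calibration data: peak area vs. concentration, noise grows with level
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([10.2, 19.8, 50.5, 99.0, 202.0])
w = 1.0 / conc**2                 # weight = 1/variance, assuming sigma ~ concentration

# Weighted normal equations: b = (X'WX)^-1 X'W y
X = np.column_stack([np.ones_like(conc), conc])
W = np.diag(w)
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ area)   # b[0] = intercept, b[1] = slope
```

Without the weights, the high-concentration points would dominate the fit and degrade accuracy at the low end of the calibration range, which is usually where it matters most.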

Given the matching sets of measured data, x and y, it is now possible to estimate the model regression coefficients b. Assuming that the model errors (values in f) are Gaussian-distributed, it can be proven that the value of b that minimizes the sum of squares of the model errors is determined using the least squares method ... [Pg.360]

The difference here is that X is a matrix that contains responses from M (>1) different x variables, and b contains M regression coefficients for each of the x variables. As for linear regression, the coefficients for MLR (b) are determined using the least-squares method ... [Pg.361]

The PLS (partial least squares) multiple regression technique is used to estimate the contributions of various polluting sources to ambient aerosol composition. The characteristics and performance of the PLS method are compared to those of the chemical mass balance regression model (CMB) and the target transformation factor analysis model (TTFA). Results on the Quail Roost Data, a synthetic data set generated as a basis to compare various receptor models, are reported. PLS proves to be especially useful when the elemental compositions of both the polluting sources and the aerosol samples are measured with noise and there is a high correlation in both blocks. [Pg.271]

Löwdin, P. O. (1992) On linear algebra, the least square method, and the search for linear relations by regression analysis in quantum chemistry and other sciences. Adv. Quantum Chem. 23, 83-126. [Pg.47]

The method which satisfies these conditions is partial least squares (PLS) regression analysis, a relatively recent statistical technique (18, 19). The basis of the PLS method is that given k objects, characterised by i descriptor variables, which form the X-matrix, and j response variables which form the Y-matrix, it is possible to relate the two blocks (or data matrices) by means of the respective latent variables u and t in such a way that the two data sets are linearly dependent ... [Pg.103]

Another approach is to prepare a stock solution of high concentration. Linearity is then demonstrated directly by dilution of the standard stock solution. This is the more popular and recommended approach. Linearity is best evaluated by visual inspection of a plot of the signals as a function of analyte concentration. Subsequently, the variable data are generally used to calculate a regression line by the least-squares method. At least five concentration levels should be used. Under normal circumstances, linearity is acceptable with a coefficient of determination (r2) of >0.997. The slope, residual sum of squares, and intercept should also be reported as required by ICH. [Pg.735]
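The reported quantities (slope, intercept, residual sum of squares, r2) are straightforward to compute from a five-level calibration. The data below are illustrative, not from any guideline:

```python
import numpy as np

# Five-level linearity check with made-up, near-linear signal data
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([5.1, 10.0, 19.9, 40.2, 79.8])

slope, intercept = np.polyfit(conc, signal, 1)           # least-squares line
pred = slope * conc + intercept
rss = np.sum((signal - pred) ** 2)                       # residual sum of squares
r2 = 1 - rss / np.sum((signal - signal.mean()) ** 2)     # coefficient of determination
```

For data this close to a straight line, r2 comfortably exceeds the 0.997 acceptance threshold quoted above; the visual plot remains essential, since a high r2 alone can mask systematic curvature.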

In a strict sense parameter estimation is the procedure of computing the estimates by localizing the extremum point of an objective function. A further advantage of the least squares method is that this step is well supported by efficient numerical techniques. Its use is particularly simple if the response function (3.1) is linear in the parameters, since then the estimates are found by linear regression without the inherent iteration in nonlinear optimization problems. [Pg.143]







Analytical methods partial least squares regression

Classical least-squares regression method

Least squares regression

Least-squared method

Least-squares method

Method of least squares regression

Numerical Curve Fitting The Method of Least Squares (Regression)

Partial least-squares regression method

Regression analysis linear least squares method

Regression analysis nonlinear least squares method

Regression methods

The Method of Least Squares (Regression)

The Method of Least Squares and Simple Linear Regression
