
Regression matrix least squares

What is the equivalent four-parameter linear model expressing y as a function of x1 and x2? Use matrix least squares (regression analysis) to fit this linear model to the data. How are the classical factor effects and the regression factor effects related? Draw the sums of squares and degrees of freedom tree. How many degrees of freedom are there for SS..., SS..., and SS...? [Pg.357]

Dichromate-permanganate determination is an artificial problem because the matrix of coefficients can be obtained as the slopes of A vs. x from four univariate least squares regression treatments, one on solutions containing only at... [Pg.84]

For many applications, quantitative band shape analysis is difficult to apply. Bands may be numerous or may overlap, the optical transmission properties of the film or host matrix may distort features, and features may be indistinct. If one can prepare samples of known properties and collect the FTIR spectra, then it is possible to produce a calibration matrix that can be used to assist in predicting these properties in unknown samples. Statistical, chemometric techniques, such as PLS (partial least squares) and PCR (principal component regression), may be applied to this matrix. Chemometric methods permit much larger segments of the spectra to be incorporated into the analysis model than is usually the case for simple band shape analyses. [Pg.422]
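As an illustration of the calibration idea described above (a sketch added here, not taken from the cited text), a partial least-squares calibration could be set up along the following lines; the arrays spectra and properties, their sizes, and the choice of five latent variables are placeholders:

```python
# Minimal sketch of a chemometric calibration of the kind described above,
# assuming scikit-learn is available; array names and sizes are hypothetical.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.random((40, 600))      # 40 calibration samples x 600 wavenumbers (stand-in for FTIR data)
properties = rng.random((40, 1))     # known property values for the calibration samples

pls = PLSRegression(n_components=5)  # number of latent variables is an assumption
pls.fit(spectra, properties)

unknown = rng.random((3, 600))       # spectra of "unknown" samples
predicted = pls.predict(unknown)     # predicted properties from the calibration model
print(predicted)
```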

Classical least-squares (CLS), sometimes known as K-matrix calibration, is so called because, originally, it involved the application of multiple linear regression (MLR) to the classical expression of the Beer-Lambert Law of spectroscopy ... [Pg.51]
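To make the K-matrix form concrete, the following minimal numpy sketch (added here, not from the cited source) fits the Beer-Lambert relation A = CK by least squares on synthetic calibration mixtures and then inverts it to predict the concentrations of an unknown mixture; all data are synthetic:

```python
# Hedged sketch of classical least-squares (K-matrix) calibration, A = C K,
# using synthetic data; variable names and dimensions are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
K_true = rng.random((2, 50))              # 2 components x 50 wavelengths (pure-component spectra)
C_cal = rng.random((10, 2))               # known concentrations of 10 calibration mixtures
A_cal = C_cal @ K_true + 0.001 * rng.standard_normal((10, 50))  # Beer-Lambert + noise

# Least-squares estimate of K from the calibration set: K = (C'C)^-1 C'A
K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

# Predict concentrations of an unknown mixture from its spectrum: c = A K'(K K')^-1
A_unk = np.array([[0.3, 0.7]]) @ K_true
c_hat = A_unk @ K_hat.T @ np.linalg.inv(K_hat @ K_hat.T)
print(c_hat)   # should be close to [0.3, 0.7]
```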

Multiple Linear Regression (MLR), Classical Least-Squares (CLS, K-matrix), Inverse Least-Squares (ILS, P-matrix)... [Pg.191]

Partial least squares regression (PLS). Partial least squares regression applies to the simultaneous analysis of two sets of variables on the same objects. It allows for the modeling of inter- and intra-block relationships from an X-block and Y-block of variables in terms of a lower-dimensional table of latent variables [4]. The main purpose of regression is to build a predictive model enabling the prediction of wanted characteristics (y) from measured spectra (X). In matrix notation we have the linear model with regression coefficients b ... [Pg.544]
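The linear model itself is truncated in the excerpt; in its standard matrix form it reads as follows (a reconstruction using the usual notation, with e collecting the residuals):

```latex
% Standard matrix form of the linear model referred to above (reconstruction, not the source's own display)
\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{e}
```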

In multiple linear regression (MLR) we are given an n×p matrix X and an n vector y. The problem is to find an unknown p vector b such that the product ŷ = Xb is as close as possible to the original y under a least squares criterion ... [Pg.53]
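A minimal numpy sketch of this least-squares criterion (illustrative only; X, y and their sizes are synthetic stand-ins):

```python
# Sketch of the MLR least-squares problem described above: find b minimizing ||y - Xb||^2.
# X and y are synthetic; np.linalg.lstsq solves the criterion directly.
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 4
X = rng.random((n, p))                       # n x p matrix of predictors
b_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ b_true + 0.01 * rng.standard_normal(n)

b_hat, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b_hat                            # the product of X with b, as close as possible to y
print(b_hat, np.sum((y - y_hat) ** 2))
```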

The P-matrix is chosen to fit best, in a least-squares sense, the concentrations in the calibration data. This is called inverse regression, since usually we fit a random variable prone to error (y) by something we know and control exactly (x). The least-squares estimate P is given by... [Pg.357]
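The estimate itself is truncated in the excerpt; the standard inverse least-squares (P-matrix) form, written here with A for the measured responses and C for the concentrations, is:

```latex
% Standard inverse least-squares (P-matrix) estimate; A = measured responses, C = concentrations.
% Reconstruction under the usual notation -- the excerpt's own equation is truncated.
\hat{\mathbf{P}} = (\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{\mathsf{T}}\mathbf{C},
\qquad \text{so that } \hat{\mathbf{C}} = \mathbf{A}\hat{\mathbf{P}}
```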

The expression xᵀ(j)P(j-1)x(j) in eq. (41.4) represents the variance of the predictions, ŷ(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters P(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j) and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix only depends on the design of the experiments given by x and on the variance of the experimental error given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]
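A compact sketch of the recursive least-squares update described here, in its standard textbook form (not copied from eqs. 41.3-41.4; variable names and data are illustrative):

```python
# Sketch of the recursive least-squares update discussed above (standard textbook form).
# Note that the bracketed term in the gain is a scalar, so no matrix inversion is needed.
import numpy as np

rng = np.random.default_rng(3)
p = 3
b_true = np.array([2.0, -1.0, 0.5])
b = np.zeros(p)                 # current parameter estimate
P = np.eye(p) * 1e3             # variance-covariance matrix of the parameters (large initial uncertainty)
r = 0.01                        # variance of the experimental error in the response

for j in range(200):
    x = rng.random(p)                            # design point x(j)
    y = x @ b_true + np.sqrt(r) * rng.standard_normal()
    denom = x @ P @ x + r                        # scalar: prediction variance + error variance
    k = P @ x / denom                            # gain vector
    b = b + k * (y - x @ b)                      # update the regression parameters
    P = P - np.outer(k, x) @ P                   # update the variance-covariance matrix

print(b)   # should approach b_true
```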

More on Multiple linear least squares regression (MLLSR), also known as Multiple linear regression (MLR) and P-matrix, and its sibling, K-matrix... [Pg.3]

In this least squares method example the object is to calculate the terms β0, β1 and β2 which produce a prediction model yielding the smallest or least squared differences or residuals between the actual analyte value ci and the predicted or expected concentration ŷi. To calculate the multiplier terms or regression coefficients βj for the model we can begin with the matrix notation ... [Pg.30]
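The matrix notation the excerpt breaks off at is, in its standard form (a reconstruction, with c the vector of actual analyte values and each row of X holding 1, x1 and x2 for one sample):

```latex
% Standard matrix form for the three-parameter model above (reconstruction, not the source's equation):
% each row of X is (1, x_{i1}, x_{i2}), so that beta_0, beta_1 and beta_2 are obtained at once.
\hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\beta},
\qquad
\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{c}
```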

Equation 69-10, of course, is the same as equation 69-1, and therefore we see that this procedure gives us the least-squares solution to the problem of determining the regression coefficients, and equation 69-1 is, as we said, the matrix equation for the least-squares solution. [Pg.473]

The second critical fact that comes from equation 70-20 can be seen when you look at the chemometric cross-product matrices used for calibrations (least-squares regression, for example, as we discussed in [1]). What is this cross-product matrix that is often so blithely written in matrix notation as AᵀA, as we saw in our previous chapter? Let us write one out (for a two-variable case like the one we are considering) and see ... [Pg.479]
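For the two-variable case the cross-product matrix, written out element by element, takes the standard form below (a reconstruction; the source's own display is not reproduced in the excerpt):

```latex
% The 2x2 cross-product matrix A^T A for the two-variable case, in the usual element-by-element form.
\mathbf{A}^{\mathsf{T}}\mathbf{A} =
\begin{pmatrix}
  \sum_i A_{i1}^{2}    & \sum_i A_{i1} A_{i2} \\
  \sum_i A_{i1} A_{i2} & \sum_i A_{i2}^{2}
\end{pmatrix}
```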

The difference here is that X is a matrix that contains responses from M (>1) different x variables, and b contains M regression coefficients, one for each of the x variables. As for linear regression, the coefficients for MLR (b) are determined using the least-squares method ... [Pg.361]

The optimal number of components from the prediction point of view can be determined by cross-validation (10). This method compares the predictive power of several models and chooses the optimal one. In our case, the models differ in the number of components. The predictive power is calculated by a leave-one-out technique, so that each sample is predicted once from a model in whose calculation it did not participate. This technique can also be used to determine the number of underlying factors in the predictor matrix, although if the factors are highly correlated, their number will be underestimated. In contrast to the least squares solution, PLS can also estimate the regression coefficients for underdetermined systems. In this case, it introduces some bias in trade for the (infinite) variance of the least squares solution. [Pg.275]
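A sketch of the leave-one-out selection of the number of PLS components described above (scikit-learn is assumed; X and y are synthetic stand-ins, and the prediction error sum of squares, PRESS, is used as the measure of predictive power):

```python
# Sketch of leave-one-out cross-validation for choosing the number of PLS components.
# Each sample is predicted by a model in whose calculation it did not participate.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
X = rng.random((25, 80))                                # synthetic predictor matrix
y = X[:, :3] @ np.array([1.0, 2.0, -1.0]) + 0.05 * rng.standard_normal(25)

press = {}
for a in range(1, 8):                                   # candidate numbers of components
    y_cv = cross_val_predict(PLSRegression(n_components=a), X, y,
                             cv=LeaveOneOut())          # leave-one-out predictions
    press[a] = np.sum((y - y_cv.ravel()) ** 2)          # prediction error sum of squares

best = min(press, key=press.get)
print(press, "optimal number of components:", best)
```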

Differences between PLS and PCR: Principal component regression and partial least squares use different approaches for choosing the linear combinations of variables for the columns of U. Specifically, PCR only uses the R matrix to determine the linear combinations of variables. The concentrations are used when the regression coefficients are estimated (see Equation 5.32), but not to estimate U. A potential disadvantage with this approach is that variation in R that is not correlated with the concentrations of interest is used to construct U. Sometimes the variance that is related to the concentrations is a very... [Pg.146]
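The contrast can be sketched in a few lines (illustrative only; scikit-learn and synthetic data are assumed): PCR derives the columns of U from the response matrix R alone, whereas PLS uses the concentrations as well:

```python
# Sketch contrasting how PCR and PLS build the columns of U (the scores), per the excerpt:
# PCR derives them from R alone; PLS also uses the concentrations c.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
R = rng.random((30, 100))                       # measured responses (e.g. spectra)
c = R[:, :2] @ np.array([1.0, -0.5]) + 0.01 * rng.standard_normal(30)

# PCR: scores U come from R only; the concentrations enter only in the regression step
U_pcr = PCA(n_components=3).fit_transform(R)
b_pcr, *_ = np.linalg.lstsq(U_pcr, c, rcond=None)

# PLS: the scores are chosen using both R and c
pls = PLSRegression(n_components=3).fit(R, c)
U_pls = pls.transform(R)

print(b_pcr.shape, U_pcr.shape, U_pls.shape)
```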

Factor The result of a transformation of a data matrix where the goal is to reduce the dimensionality of the data set. Estimating factors is necessary to construct principal component regression and partial least-squares models, as discussed in Section 5.3.2. (See also Principal Component.)... [Pg.186]





