Least-squares solution

To produce a calibration using classical least-squares, we start with a training set consisting of a concentration matrix, C, and an absorbance matrix, A, for known calibration samples. We then solve for the matrix, K. Each column of K will hold the spectrum of one of the pure components. Since the data in C and A contain noise, there will, in general, be no exact solution for equation [29], so we must find the best least-squares solution for equation [29]. In other words, we want to find K such that the sum of the squares of the errors is minimized. The errors are the difference between the measured spectra, A, and the spectra calculated by multiplying K and C ... [Pg.51]
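
As an illustration only (not taken from the cited text), the NumPy sketch below estimates K from simulated C and A matrices, assuming the model A ≈ KC with the pure-component spectra stored as the columns of K; all variable names and data are made up.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated training set: 2 pure components, 50 wavelengths, 6 calibration samples.
    K_true = np.abs(rng.normal(size=(50, 2)))            # pure-component spectra as columns of K
    C = np.abs(rng.normal(size=(2, 6)))                  # concentration matrix
    A = K_true @ C + 0.01 * rng.normal(size=(50, 6))     # noisy absorbance matrix, A ~= K C

    # Best least-squares K: minimize ||A - K C||.  Solving the transposed system
    # C^T K^T = A^T with lstsq is equivalent to K = A C^T (C C^T)^-1 but more stable.
    K_hat = np.linalg.lstsq(C.T, A.T, rcond=None)[0].T

    print(np.linalg.norm(A - K_hat @ C))                 # small residual left by the fit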

Since the data in C and A contain noise, there will, in general, be no exact solution for equation [46], so we must find the best least-squares solution. In other words, we want to find P such that the sum of the squares of the errors is... [Pg.71]

The least squares solution of MLR can be formally defined in terms of matrix products (Section 10.2) ... [Pg.53]
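
A minimal sketch of that matrix-product form, b = (XᵀX)⁻¹Xᵀy, on made-up data (the symbols follow common chemometric usage, not necessarily the cited section):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 3))                   # 20 samples, 3 predictor variables
    b_true = np.array([2.0, -1.0, 0.5])
    y = X @ b_true + 0.05 * rng.normal(size=20)

    # Least-squares solution written as matrix products: b = (X^T X)^-1 X^T y
    b_normal = np.linalg.solve(X.T @ X, X.T @ y)

    # Numerically safer equivalent based on orthogonal factorization / SVD:
    b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

    print(b_normal)
    print(b_lstsq)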

G. H. Golub and C. Reinsch, Singular value decomposition and least squares solutions. Numer. [Pg.159]

For each studied system, the range of distances at which pair-forces are modeled as cubic splines is given, as well as the mesh sizes and the number of resulting unknowns, the number of configurations included in a set to generate an over-determined system of equations, and the number of sets over which the least-squares solution is averaged. [Pg.207]

In the previous chapter we presented the problem of fitting data when there is more information available (in the form of equations relating the several variables involved) than the minimum amount that will allow for the solution of the equations. We then presented the matrix equations for calculating the least-squares solution to this case of overdetermined variables. How did we get from one to the other? ... [Pg.33]

It is certainly true that, for any arbitrarily chosen equation, we can calculate the point described by that equation that corresponds to any given data point. Having done that for each of the data points, we can easily calculate the error for each data point, square these errors, and add all these squares together. Clearly, the sum of squares of the errors we obtain by this procedure will depend upon the equation we use, and some equations will provide smaller sums of squares than others. It is not necessarily intuitively obvious that there is one and only one equation that will provide the smallest possible sum of squares of these errors under these conditions; however, it has been proven mathematically to be so. This proof is abstruse and difficult; in fact, it is easier to find the equation that provides this least-squares solution than it is to prove that the solution is unique. A reasonably accessible demonstration, expressed in both algebraic and matrix terms, of how to find the least-squares solution is available. [Pg.34]
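
The following sketch (added here for illustration, with made-up data) makes the point numerically: the least-squares line yields a smaller sum of squared errors than arbitrarily chosen alternative lines.

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 10.0, 15)
    y = 3.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

    def sum_of_squared_errors(slope, intercept):
        # Error = measured y minus the y predicted by the candidate straight line.
        return np.sum((y - (slope * x + intercept)) ** 2)

    # The least-squares line is the unique minimizer of this sum.
    slope_ls, intercept_ls = np.polyfit(x, y, 1)

    # Any other candidate line gives a strictly larger sum of squares.
    for slope, intercept in [(2.5, 2.0), (3.5, 0.0), (slope_ls, intercept_ls)]:
        print(slope, intercept, sum_of_squared_errors(slope, intercept))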

Equation 69-10, of course, is the same as equation 69-1, and therefore we see that this procedure gives us the least-squares solution to the problem of determining the regression coefficients; equation 69-1 is, as we said, the matrix equation for the least-squares solution. [Pg.473]

Where, in this whole derivation, did the question of least squares even come up, much less show that equation 69-10 represents the least-squares solution? All we did was a formalistic manipulation of a matrix equation, in order to allow us to create some necessary intermediate matrices, in a form that would permit further computations, specifically a matrix inversion. [Pg.473]

In fact, it is true that equation 69-10 represents the least-squares solution to the problem of finding the coefficients of equation 69-3; it is just not obvious from this derivation, which is based on matrix mathematics. To demonstrate that equation 69-10 is, in fact, a least-squares solution, we have to go back to the initial problem and apply the methods of calculus. This derivation has been done in great detail [7], and in somewhat lesser detail in a spectroscopic context [8]. ... [Pg.473]

Equation 69-15 is the same as equation 69-8. Thus we have demonstrated that the equations generated from calculus, where we explicitly inserted the least-squares condition, are the same matrix equations that result from the formalistic manipulations of the purely matrix-based approach. Since the least-squares principle is introduced before equation 69-8, this procedure therefore demonstrates that the rest of the derivation, leading to equation 69-10, does in fact provide us with the least-squares solution to the original problem. [Pg.475]
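
A small numerical check of that equivalence, under the usual assumption that the equations referred to above correspond to the normal equations and to b = (XᵀX)⁻¹Xᵀy (the equations themselves are not reproduced in the excerpt, so this is a sketch of the standard result only):

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(30, 4))
    y = rng.normal(size=30)

    # Matrix-form least-squares solution: b = (X^T X)^-1 X^T y
    b = np.linalg.solve(X.T @ X, X.T @ y)

    # Calculus condition: the derivative of (y - Xb)^T (y - Xb) with respect to b,
    # namely -2 X^T (y - Xb), must vanish at the least-squares solution.
    gradient = -2.0 * X.T @ (y - X @ b)
    print(np.max(np.abs(gradient)))    # essentially zero (round-off level)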

The M × N matrix containing the model parameters can then be directly estimated using the Gauss-Markov theorem to find the least-squares solution of generic linear problems written in matrix form [37] ... [Pg.159]
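
Assuming the standard Gauss-Markov form m̂ = (GᵀΣ⁻¹G)⁻¹GᵀΣ⁻¹d for a linear model d = Gm + e with error covariance Σ (the symbols and data here are illustrative, not those of reference [37]):

    import numpy as np

    rng = np.random.default_rng(4)
    G = rng.normal(size=(25, 4))                           # model matrix (illustrative)
    m_true = rng.normal(size=4)
    Sigma = np.diag(rng.uniform(0.01, 0.1, size=25))       # error covariance matrix
    d = G @ m_true + rng.multivariate_normal(np.zeros(25), Sigma)

    # Gauss-Markov (generalized least-squares) estimate:
    #   m_hat = (G^T Sigma^-1 G)^-1 G^T Sigma^-1 d
    Si = np.linalg.inv(Sigma)
    m_hat = np.linalg.solve(G.T @ Si @ G, G.T @ Si @ d)

    print(m_hat)
    print(m_true)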

Then the least-squares solution is that which minimizes the sum of the squares of the residuals, J = eᵀe. The equation in x, ... [Pg.30]

The projector P associated with that projection is A(AᵀA)⁻¹Aᵀ, with dimension m × m and rank n. Comparing with equation (5.1.5), we obtain the least-square solution as ... [Pg.250]
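
A brief numerical check of the projector's properties, assuming A has full column rank (illustrative only, with made-up data):

    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 6, 3
    A = rng.normal(size=(m, n))                  # full column rank, m > n
    y = rng.normal(size=m)

    # Projector onto the column space of A: P = A (A^T A)^-1 A^T, an m x m matrix of rank n.
    P = A @ np.linalg.solve(A.T @ A, A.T)

    print(np.allclose(P @ P, P))                 # idempotent, as a projector must be
    print(np.linalg.matrix_rank(P))              # rank n
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.allclose(P @ y, A @ x_hat))         # P y equals the least-squares fit A x_hat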

The least-square solution itself does not depend on the probability distribution of y; it is simply a minimum-distance estimate. Later in this chapter, it will be shown, however, that its sampling properties are most easily described when the measurements are normally distributed. [Pg.250]

The least-square solution for the Ce, Nd, and Sm terms is calculated from equation (5.4.23) as ... [Pg.293]

Finding the least-square solution reduces to minimizing the sum S of the squared deviations (yk − ŷk) between the estimated source solution and its projection onto each sample subspace. Thus, finding the minimum of ... [Pg.485]

In the present case, the system has more equations (11) than unknowns (7) and may be conveniently solved for the unknown vector x by the least-square solution alluded to above. The system is, in general, ill-conditioned, and extended precision should be used for the inversion. [Pg.145]
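
Since the actual system is not reproduced in the excerpt, the sketch below uses a generic ill-conditioned overdetermined system to show why extended precision, or an SVD-based solver, is advisable for the inversion:

    import numpy as np

    # Generic ill-conditioned overdetermined system with 11 equations and 7 unknowns,
    # standing in for the system described in the text (which is not reproduced here).
    rng = np.random.default_rng(6)
    t = np.linspace(0.0, 1.0, 11)
    M = np.vander(t, 7, increasing=True)         # Vandermonde matrices are badly conditioned
    x_true = rng.normal(size=7)
    b = M @ x_true

    print(np.linalg.cond(M))                     # large condition number

    # The normal equations square the condition number; an SVD-based solver (or extended
    # precision, e.g. np.longdouble, for the inversion) is the safer route.
    x_normal = np.linalg.solve(M.T @ M, M.T @ b)
    x_svd, *_ = np.linalg.lstsq(M, b, rcond=None)

    print(np.max(np.abs(x_normal - x_true)))
    print(np.max(np.abs(x_svd - x_true)))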

Finally, a brief discussion is given of a new type of control algorithm called dynamic matrix control. This is a time-domain method that uses a model of the process to calculate future changes in the manipulated variable such that an objective function is minimized. It is basically a least-squares solution. [Pg.253]

Figure 5.9 contains the same two replicate experiments shown in Figure 5.8, but here the response surface for the model yᵢ = β₁x₁ᵢ + rᵢ is shown. The least squares solution is obtained as follows. [Pg.90]
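
For that single-parameter, no-intercept model the least-squares estimate has the closed form β₁ = Σxᵢyᵢ / Σxᵢ²; here is a minimal sketch with made-up replicate data (not the values of Figure 5.9):

    import numpy as np

    # Two replicate measurements at the same factor level (illustrative values only).
    x = np.array([5.0, 5.0])
    y = np.array([9.8, 10.4])

    # Least-squares estimate for the model y_i = beta1 * x_i + r_i:
    #   beta1 = sum(x_i * y_i) / sum(x_i**2)
    beta1 = np.sum(x * y) / np.sum(x * x)
    residuals = y - beta1 * x

    print(beta1)
    print(np.sum(residuals ** 2))                # minimized sum of squared residuals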

In the past few years, PLS, a multiblock, multivariate regression model solved by partial least squares, has found application in various fields of chemistry (1-7). This method can be viewed as an extension and generalization of other commonly used multivariate statistical techniques, such as regression solved by least squares and principal component analysis. PLS has several advantages over the ordinary least squares solution; therefore, it is becoming more and more popular for solving regression models in chemical problems. [Pg.271]

The PLS technique gives a stepwise solution for the regression model, which converges to the least squares solution. The final model is the sum of a series of submodels. It can handle multiple response variables, highly correlated predictor variables grouped into several blocks, and underdetermined systems, where the number of samples is less than the number of predictor variables. Our model (not including the error terms) is ... [Pg.272]
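
A minimal sketch of such a fit using scikit-learn's PLSRegression (not the implementation used by the authors; data and settings are illustrative) on an underdetermined, collinear problem with two response variables:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(7)

    # Underdetermined, collinear case: 15 samples, 40 correlated predictors, 2 responses.
    latent = rng.normal(size=(15, 3))
    X = latent @ rng.normal(size=(3, 40)) + 0.01 * rng.normal(size=(15, 40))
    Y = latent @ rng.normal(size=(3, 2)) + 0.01 * rng.normal(size=(15, 2))

    pls = PLSRegression(n_components=3)          # the model is a sum of 3 submodels (components)
    pls.fit(X, Y)
    print(pls.score(X, Y))                       # fraction of response variance explained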

The optimal number of components from the prediction point of view can be determined by cross-validation (10). This method compares the predictive power of several models and chooses the optimal one. In our case, the models differ in the number of components. The predictive power is calculated by a leave-one-out technique, so that each sample gets predicted once from a model in the calculation of which it did not participate. This technique can also be used to determine the number of underlying factors in the predictor matrix, although if the factors are highly correlated, their number will be underestimated. In contrast to the least squares solution, PLS can estimate the regression coefficients also for underdetermined systems. In this case, it introduces some bias in trade for the (infinite) variance of the least squares solution. [Pg.275]
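
A hedged sketch of leave-one-out cross-validation over the number of PLS components, using scikit-learn utilities rather than the authors' code (data are made up):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(8)
    latent = rng.normal(size=(20, 2))
    X = latent @ rng.normal(size=(2, 30)) + 0.05 * rng.normal(size=(20, 30))
    y = latent @ rng.normal(size=2) + 0.05 * rng.normal(size=20)

    # Leave-one-out prediction error (PRESS) versus the number of PLS components:
    # each sample is predicted once from a model it did not help to build.
    for a in range(1, 6):
        y_cv = cross_val_predict(PLSRegression(n_components=a), X, y, cv=LeaveOneOut())
        press = np.sum((y - y_cv.ravel()) ** 2)
        print(a, press)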

The linear programming and weighted least squares solutions deal with the case of a number of constraints, n, greater than the number of unknowns, p. Henry (6) has applied a linear programming algorithm (7). Results are comparable to those obtained from ordinary weighted least squares. This solution has not been developed further. [Pg.92]
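
A minimal sketch of an ordinary weighted least-squares solution for n equations in p unknowns (the chemical mass-balance matrices themselves are not shown in the excerpt, so the data here are made up):

    import numpy as np

    rng = np.random.default_rng(9)
    n, p = 12, 5                                 # more constraints (equations) than unknowns
    A = np.abs(rng.normal(size=(n, p)))          # e.g. a source-profile matrix (illustrative)
    x_true = np.abs(rng.normal(size=p))
    sigma = 0.05 * np.ones(n)                    # one uncertainty per equation
    b = A @ x_true + sigma * rng.normal(size=n)

    # Weighted least squares: minimize sum_i ((b_i - (A x)_i) / sigma_i)^2,
    # obtained by rescaling each equation by 1/sigma_i and solving the ordinary problem.
    Aw = A / sigma[:, None]
    bw = b / sigma
    x_wls, *_ = np.linalg.lstsq(Aw, bw, rcond=None)

    print(x_wls)
    print(x_true)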

