Big Chemical Encyclopedia


Minimum sums of squares

The total residual sum of squares, taken over all elements of E, achieves its minimum when each column of E separately has minimum sum of squares. The latter occurs if each (univariate) column of Y is fitted by X in the least-squares way. Consequently, the least-squares minimization of E is obtained if each separate dependent variable is fitted by multiple regression on X. In other words, the multivariate regression analysis is essentially identical to a set of univariate regressions. Thus, from a methodological point of view nothing new is added, and we may refer to Chapter 10 for a more thorough discussion of the theory and application of multiple regression. [Pg.323]
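The equivalence described above can be verified numerically. The sketch below (with arbitrary synthetic data, not from the source) fits all columns of Y at once and then column by column, and confirms the coefficient matrices agree:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))   # predictor matrix: 20 samples, 3 variables
Y = rng.normal(size=(20, 2))   # two dependent variables

# Multivariate fit: all columns of Y in one least-squares call.
B_joint, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Equivalent set of univariate multiple regressions, one per column of Y.
B_cols = np.column_stack(
    [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(Y.shape[1])]
)

# The two approaches give identical coefficient estimates.
assert np.allclose(B_joint, B_cols)
```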

This is the general matrix solution for the set of parameter estimates that gives the minimum sum of squares of residuals. Again, the solution is valid for all models that are linear in the parameters. [Pg.79]
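For a model linear in the parameters, the general matrix solution referred to above is the normal-equations estimate b = (XᵀX)⁻¹Xᵀy. A minimal sketch with made-up data (names and values are illustrative only):

```python
import numpy as np

# Synthetic design matrix X (n samples x p parameters) and response y.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(15), rng.normal(size=(15, 2))])
b_true = np.array([2.0, -1.0, 0.5])
y = X @ b_true + 0.01 * rng.normal(size=15)

# General matrix solution of the normal equations: b = (X'X)^-1 X'y,
# computed via a linear solve rather than an explicit inverse.
b_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against numpy's QR-based least-squares solver.
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(b_hat, b_lstsq)
```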

Graph 4: b₁ = 4; Graph 5: b₁ = 5. Why doesn't the minimum occur at the same value of b₀ in all graphs? Which graph gives the overall minimum sum of squares of residuals? ... [Pg.94]

There is no unique combination of b₁ and b₂ that satisfies the condition of least squares: all combinations of b₁ and b₂ such that b₁ = (0.5 + b₂)/0. ... will produce a minimum sum of squares of residuals. [Pg.364]
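This non-uniqueness arises when the model is over-parameterized, e.g. when two predictors are perfectly collinear. The sketch below (synthetic data, not the source's example) shows that every parameter pair on a whole line of combinations yields the same minimum sum of squares:

```python
import numpy as np

# Two perfectly collinear predictors: x2 = 2*x1, so only the combination
# b1 + 2*b2 is identifiable and least squares has no unique solution.
x1 = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([x1, 2.0 * x1])
y = np.array([1.1, 2.0, 2.9, 4.2])

def ssr(b1, b2):
    """Sum of squared residuals for the parameter pair (b1, b2)."""
    r = y - X @ np.array([b1, b2])
    return r @ r

# Any (b1, b2) on the line b1 = c - 2*b2 gives the same minimum SSR.
b_min, *_ = np.linalg.lstsq(X, y, rcond=None)
c = b_min[0] + 2.0 * b_min[1]
assert np.isclose(ssr(c - 2.0 * 0.3, 0.3), ssr(c - 2.0 * (-1.0), -1.0))
```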

The search for the minimum sum of squares produced the following results ... [Pg.33]

To estimate a for the best-fitted time series, it is necessary to calculate the sum of squared residuals. The best fit is the one with the minimum sum of squared residuals. [Pg.212]

Time series without systematic changes (trend or seasonal fluctuations), i.e. with a fixed level, are best approximated by the mean of the series, i.e. a = 0. The mean over the full time range gives a minimum sum of squared differences between the mean and the original series (squared residuals). All cases have the same weight because a is equal to zero. [Pg.212]
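The claim that the mean minimizes the sum of squared differences for a fixed-level series is easy to verify numerically; a minimal sketch with invented data:

```python
import numpy as np

# A series with a fixed level and no trend or seasonality.
y = np.array([5.1, 4.9, 5.3, 4.8, 5.0, 5.2])

def ssr(level):
    """Sum of squared differences between a constant level and the series."""
    return np.sum((y - level) ** 2)

# Scan candidate levels: the minimizer coincides with the series mean.
grid = np.linspace(y.min(), y.max(), 1001)
best = grid[np.argmin([ssr(g) for g in grid])]
assert abs(best - y.mean()) < 1e-3
```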

The value of λ for the minimum sum of squares is not necessarily exactly a multiple of 1/4, but there will normally be a choice of transformations that are nearly as good, or at least not statistically worse. An approximate 95% confidence interval may also be calculated (1, 2), and any transform within the confidence interval may be selected. [Pg.315]
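A grid search over convenient values of the Box-Cox exponent λ can be sketched as follows. The data and model are hypothetical (chosen so the true exponent is 0.5); the geometric-mean rescaling is the standard device that makes sums of squares comparable across λ:

```python
import numpy as np

# Hypothetical data: y grows as the square of a linear trend in x, so the
# best Box-Cox exponent for y should be near 0.5 (square-root transform).
x = np.arange(1.0, 21.0)
y = (2.0 + 0.5 * x) ** 2

def boxcox(y, lam):
    """Box-Cox power transform."""
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def fit_ssr(lam):
    """SSR of a straight-line fit to the transformed response, rescaled by
    the geometric mean of y so values are comparable across lam."""
    g = np.exp(np.mean(np.log(y)))
    z = boxcox(y, lam) * g ** (1.0 - lam)
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ b
    return r @ r

# Search over multiples of 1/4, the conventional "nice" exponents.
lams = np.arange(-1.0, 2.01, 0.25)
best = lams[np.argmin([fit_ssr(l) for l in lams])]
assert best == 0.5
```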

Figure 3.19 shows the results in dimensionless form and the best-fit curve obtained by the minimum sum of squares method. This provided the constants for equation 3.56, with the following values (if c is in % v/v) ...
For example, equations used to calculate the best-fit slope and y-intercept for a data set that fits the linear function y = mx + b can easily be obtained by considering that the minimum sum-of-squared residuals (SS) corresponds to parameter values for which the partial differential of the function with respect to each parameter equals zero. The squared residuals to be minimized are... [Pg.31]
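Setting ∂SS/∂m = 0 and ∂SS/∂b = 0 for SS = Σ(yᵢ − mxᵢ − b)² yields the familiar closed-form estimates, sketched here with made-up data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Closed-form slope and intercept from the zero-partial-derivative
# conditions on the sum of squared residuals.
n = len(x)
m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (
    n * np.sum(x**2) - np.sum(x) ** 2
)
b = (np.sum(y) - m * np.sum(x)) / n

# Cross-check against numpy's polynomial least-squares fit.
m_ref, b_ref = np.polyfit(x, y, 1)
assert np.allclose([m, b], [m_ref, b_ref])
```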

The test rig described in section 5 was run with the chalk slurry at several solids concentrations ranging up to 20% by volume, with duplicated measurements at each concentration. Fig. 5 shows the results and the best-fit curve obtained by the minimum sum of squares method. This provided the constants for equation 11, with the following values (if c is in %): kn = 0.083419, k = 0.22359 and ka = 1.1335. Eqn. 11 can then be used as a model for the hydrocyclone performance in particle size measurement of unknown slurries, using the same rig as used in testing the hydrocyclone. [Pg.443]
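Since the form of equation 11 is not reproduced here, the sketch below uses a hypothetical two-constant model purely to illustrate how such constants are obtained by minimizing the sum of squared residuals with a standard curve fitter:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stand-in model (NOT eqn. 11 from the source): curve_fit
# adjusts k1, k2 to minimize the sum of squared residuals.
def model(c, k1, k2):
    return k1 * np.exp(k2 * c)

c = np.linspace(0.0, 20.0, 10)   # solids concentration, %
d = model(c, 0.08, 0.11)         # noise-free synthetic "measurements"

popt, _ = curve_fit(model, c, d, p0=[0.1, 0.1])
assert np.allclose(popt, [0.08, 0.11], atol=1e-4)
```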

Subsequently, he uses these parameter values to calculate the sum of squares at each end of the axes and to compare them with the sum of squares at the center of the hyperellipsoid. This sum-of-squares search, which is based on a linear model, may give vital information for nonlinear models as well. In the case where the solution has only converged on a local minimum sum of squares, it is very likely that the search in the direction of one of the axes will produce a lower sum of squares. In such a case, the regression must be repeated, starting from a different initial position, so that the local minimum may be bypassed. [Pg.486]
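The restart strategy described above can be sketched with a one-parameter objective (an arbitrary quartic chosen to have one local and one global minimum; nothing here comes from the source):

```python
import numpy as np
from scipy.optimize import minimize

# Quartic with a local minimum near x = -0.69 and the global minimum
# near x = 2.19.
def f(x):
    x = x[0]
    return x**4 - 2 * x**3 - 3 * x**2 + 5

# A single start may converge only on the local minimum, so repeat the
# minimization from several initial positions and keep the best result.
starts = [-3.0, -1.0, 0.5, 3.0]
best = min((minimize(f, x0=[s]) for s in starts), key=lambda r: r.fun)
assert best.x[0] > 1.0   # the multistart search reaches the global minimum
```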

A simple method, which has been used to arrive at the minimum sum of squares of a nonlinear model, is that of steepest descent. We know that the gradient of a scalar function is a vector that gives the direction of the greatest increase of the function at any point. In the steepest descent method, we take advantage of this property by moving in the opposite direction to reach a lower function value. Therefore, in this method, the initial vector of parameter estimates is corrected in the direction of the negative gradient of the objective function Φ ... [Pg.489]
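A minimal steepest-descent sketch for a sum-of-squares objective Φ(b) = ‖y − Xb‖², with a fixed step length and synthetic data (both chosen for illustration; practical implementations use a line search):

```python
import numpy as np

# Synthetic linear data so the minimum of the objective is known.
rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
b_true = np.array([1.5, -0.7])
y = X @ b_true

def grad(b):
    """Gradient of Phi(b) = ||y - X b||^2, i.e. -2 X'(y - X b)."""
    return -2.0 * X.T @ (y - X @ b)

b = np.zeros(2)     # initial vector of parameter estimates
step = 0.005        # fixed step length, for illustration only
for _ in range(5000):
    b = b - step * grad(b)   # move along the negative gradient

assert np.allclose(b, b_true, atol=1e-4)
```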







© 2024 chempedia.info