Big Chemical Encyclopedia


Sum of least squares

To extract the agglomeration kernels from PSD data, the inverse problem mentioned above has to be solved. The population balance is therefore solved for different values of the agglomeration kernel, the results are compared with the experimental distributions, and the sum of squared deviations is calculated for each. The calculated distribution with the minimum sum of least squares fits the experimental distribution best. [Pg.185]
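
The inverse-problem strategy described above can be sketched as a simple parameter scan. This is a minimal illustration only: the `psd` function below is a made-up placeholder standing in for an actual population balance solver, and the data are synthetic.

```python
import numpy as np

# Hypothetical sketch: psd(beta) stands in for solving the population
# balance for a given agglomeration kernel value beta
sizes = np.linspace(0.1, 1.0, 10)

def psd(beta):
    return np.exp(-beta * sizes)          # placeholder forward model

psd_exp = psd(2.0)                        # "experimental" distribution

# Scan candidate kernel values; keep the one whose calculated
# distribution has the minimum sum of squared deviations
betas = np.linspace(0.5, 4.0, 36)
ssq = [np.sum((psd(b) - psd_exp) ** 2) for b in betas]
beta_best = betas[int(np.argmin(ssq))]
```

In practice each call to the forward model is a full population balance solution, so gradient-based minimizers are usually preferred over a grid scan; the selection criterion is the same.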

The process of curve fitting uses the sum of least squares (denoted SSq) as the measure of goodness of fit of data points to the model. Specifically, SSq is the sum of the squared differences between the real data values (yd) and the values calculated by the model (yc); the differences are squared to cancel the effects of arithmetic sign ... [Pg.233]
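
The SSq measure can be computed directly; a minimal sketch with made-up data and model values:

```python
import numpy as np

# Hypothetical observed data and model-calculated values (illustrative only)
y_d = np.array([1.0, 2.1, 2.9, 4.2])   # real data values
y_c = np.array([1.1, 2.0, 3.0, 4.0])   # values calculated by the model

# SSq: squaring each difference cancels the effect of arithmetic sign,
# so positive and negative deviations both increase the measure
ssq = np.sum((y_d - y_c) ** 2)
```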

In the work that follows, the experimental data were fitted by minimizing the sum of least squares and the differential equations were integrated numerically. [Pg.363]

W - peak width (two theta), P - peak position (two theta), S - sum of least squares,... [Pg.170]

The system (3.78) can be considered as the normal system that minimizes the sum of least squares of (3.81). In fact, it can be obtained via the same procedure adopted with the matrix F. Therefore, it can be solved by QR factorizing the overdimensioned system (3.81). This has the advantage of better conditioning, as in the case of linear regressions, and the overdimensioned system (3.81) can be QR factorized with a reduced number of flops thanks to its special structure. [Pg.112]
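
The conditioning advantage mentioned above can be illustrated on a generic overdimensioned system. This sketch uses random data (the specific matrices (3.78) and (3.81) are not reproduced here): the normal equations square the condition number of the matrix, while QR works on the matrix directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical overdimensioned system F x ≈ b (more equations than unknowns)
F = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = F @ x_true

# QR route: F = Q R with Q orthonormal and R upper triangular,
# then solve the small triangular system R x = Q^T b
Q, R = np.linalg.qr(F)
x_qr = np.linalg.solve(R, Q.T @ b)

# Normal-equation route for comparison: (F^T F) x = F^T b
# (forms F^T F, whose condition number is the square of that of F)
x_normal = np.linalg.solve(F.T @ F, F.T @ b)
```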

The values of the adsorption parameters used in this study were obtained by minimizing the sum of squared deviations between calculated and experimental concentration profiles in a batch experiment [20]. The experimental concentration curve is given along with the calculated profiles (based on either first-order or Langmuir kinetics) in Figure 15.2. [Pg.368]

In Fig. 5.5.3, the sensitive branches TA(A) and TA(A) are plotted for different cutoffs n; results corresponding to even and odd n are plotted separately. At first sight, all partial sums with even n have to be eliminated; the dispersions with odd n shown in Fig. 5.5.3 then allow us to choose between n = 3, 5, and 7, the values that Fig. 5.5.2a left us with. Judged merely by a sum of least squares, the n = 3 and n = 5 curves in Fig. 5.5.3 match experiment to about the same degree. Nevertheless, the curve with n = 3 misses an important physical feature: the characteristic flatness of the TA branch. The value n = 5 is thus the smallest n reproducing this flatness adequately. Including an even more distant force (k. y) appears desirable in order to remove a small... [Pg.257]

For practical purposes, convergence of the fitting algorithm at the level of 1 part in 1(F is probably satisfactory, but it can prove difficult to decide when an optimum fit has been achieved. Generally, the technique is to vary the constrained parameters of the fit while monitoring the weighted sum of least squares, given by... [Pg.176]

The standard least-squares approach provides an alternative to the Galerkin method in the development of finite element solution schemes for differential equations. However, it can also be shown to belong to the class of weighted residual techniques (Zienkiewicz and Morgan, 1983). In the least-squares finite element method the sum of the squares of the residuals, generated via the substitution of the unknown functions by finite element approximations, is formed and subsequently minimized to obtain the working equations of the scheme. The procedure can be illustrated by the following example, consider... [Pg.64]

This method, because it involves minimizing the sum of squares of the deviations xi — m, is called the method of least squares. We have encountered the principle before in our discussion of the most probable velocity of an individual particle (atom or molecule), given a Gaussian distribution of particle velocities. It is very powerful, and we shall use it in a number of different settings to obtain the best approximation to a data set of scalars (the arithmetic mean), the best approximation to a straight line, and the best approximation to parabolic and higher-order data sets of two or more dimensions. [Pg.61]

If the experimental error is random, the method of least squares applies to analysis of the set. Minimize the sum of squares of the deviations by differentiating with respect to m. [Pg.62]
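
Carrying out that differentiation, d/dm Σ(xi − m)² = −2 Σ(xi − m) = 0 gives m = Σxi / n, i.e. the arithmetic mean is the least-squares estimate of a data set of scalars. A minimal check with made-up measurements:

```python
import numpy as np

# Setting d/dm sum((x_i - m)^2) = -2 sum(x_i - m) = 0 gives m = mean(x):
# the arithmetic mean minimizes the sum of squared deviations
x = np.array([2.0, 3.0, 5.0, 6.0])   # hypothetical measurements

m = x.mean()

def ssq(m_trial):
    return np.sum((x - m_trial) ** 2)
```

Evaluating `ssq` at values on either side of `m` confirms that the mean sits at the minimum.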

Once the form of the correlation is selected, the values of the constants in the equation must be determined so that the differences between calculated and observed values are within the range of assumed experimental error for the original data. However, when there is some scatter in a plot of the data, the best line that can be drawn to represent the data must be determined. If it is assumed that all experimental errors are in the y values and the x values are known exactly, the least-squares technique may be applied. In this method the constants of the best line are those that minimise the sum of the squares of the residuals, i.e., the differences between the observed values, y, and the calculated values, Y. In general, this sum of the squares of the residuals, R, is represented by... [Pg.244]
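
For a straight line Y = mx + b, setting the derivatives of R with respect to m and b to zero gives the familiar closed-form normal equations. A sketch with made-up (x, y) data:

```python
import numpy as np

# Hypothetical (x, y) data, with all error assumed to lie in the y values
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Minimizing R = sum((y_i - Y_i)^2) with Y = m*x + b yields the
# normal equations, solved here in closed form
n = len(x)
m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
b = (np.sum(y) - m * np.sum(x)) / n
```

The same slope and intercept are returned by any standard linear-regression routine (e.g. `np.polyfit(x, y, 1)`).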

The purpose of the principle of least squares is to minimize the sum of the squares of the errors so that... [Pg.106]

Nonlinear regression, a technique that fits a specified function of x and y by the method of least squares (i.e., the sum of the squares of the differences between real data points and calculated data points is minimized). [Pg.280]
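
A minimal nonlinear-regression sketch, using a hypothetical exponential model and noise-free synthetic data (SciPy's `curve_fit` minimizes the sum of squared residuals by nonlinear least squares):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model y = a * exp(-k * x); the true values a=2.0, k=0.7
# are used only to generate synthetic data for this illustration
def model(x, a, k):
    return a * np.exp(-k * x)

x = np.linspace(0, 5, 50)
y = model(x, 2.0, 0.7)

# curve_fit adjusts (a, k) from the initial guess p0 so that the sum of
# squares of (data - model) is minimized
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
```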

To produce a calibration using classical least-squares, we start with a training set consisting of a concentration matrix, C, and an absorbance matrix, A, for known calibration samples. We then solve for the matrix, K. Each column of K will hold the spectrum of one of the pure components. Since the data in C and A contain noise, there will, in general, be no exact solution for equation [29], so we must find the best least-squares solution for equation [29]. In other words, we want to find K such that the sum of the squares of the errors is minimized. The errors are the differences between the measured spectra, A, and the spectra calculated by multiplying K and C ... [Pg.51]
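
A sketch of this classical least-squares step with synthetic, noise-free data (the dimensions and matrices here are hypothetical, chosen only to match the A = KC layout described above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 100 wavelengths, 3 pure components, 10 samples
K_true = rng.random((100, 3))     # columns: pure-component spectra
C = rng.random((3, 10))           # known concentrations of the samples
A = K_true @ C                    # measured absorbances, A = K C

# Least-squares solution minimizing ||A - K C||^2 over K:
# K = A C^T (C C^T)^-1
K = A @ C.T @ np.linalg.inv(C @ C.T)
```

With noise-free data the recovered K equals K_true exactly; with real (noisy) spectra it is the least-squares best estimate.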

Since the data in C and A contain noise, there will, in general, be no exact solution for equation [46], so we must find the best least-squares solution. In other words, we want to find P such that the sum of the squares of the errors is... [Pg.71]

The unweighted least squares analysis is based on the assumption that the best value of the rate constant k is the one that minimizes the sum of the squares of the residuals. In the general case one should regard the zero-time point as an adjustable constant in order to avoid undue weighting of the initial point. An analysis of this type gives the following expressions for first- and second-order rate constants... [Pg.55]
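
As an illustration of the first-order case, a common approach is to fit ln C = ln C0 − kt by unweighted linear least squares, treating the intercept ln C0 (the zero-time value) as an adjustable constant rather than pinning it to the initial point. The data below are synthetic:

```python
import numpy as np

# Hypothetical first-order decay data, C = C0 * exp(-k*t) with C0=2, k=0.5
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
C = 2.0 * np.exp(-0.5 * t)

# Linear least-squares fit of ln C = ln C0 - k*t; both slope and
# intercept are adjustable, so the t = 0 point is not over-weighted
slope, intercept = np.polyfit(t, np.log(C), 1)
k = -slope
C0 = np.exp(intercept)
```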

The procedure of Lifson and Warshel leads to so-called consistent force fields (CFF) and operates as follows. First, a set of reliable experimental data, as many as possible (or feasible), is collected from a large set of molecules which belong to a family of molecules of interest. These data comprise, for instance, vibrational properties (Section 3.3.), structural quantities, thermochemical measurements, and crystal properties (heats of sublimation, lattice constants, lattice vibrations). We restrict our discussion to the first three kinds of experimental observation. All data used for the optimisation process are calculated and the differences between observed and calculated quantities evaluated. Subsequently the sum of the squares of these differences is minimised in an iterative process under variation of the potential constants. The ultimately resulting values for the potential constants are the best possible within the data set and analytical form of the chosen force field. Starting values of the potential constants for the least-squares process can be derived from the same sources as mentioned in connection with trial-and-error procedures. [Pg.174]

The basis upon which this concept rests is the very fact that not all the data follow the same equation. Another way to express this is to note that an equation describes a line (or, more generally, a plane or hyperplane if more than two dimensions are involved; in fact, anywhere in this discussion, when we talk about a calibration line, you should mentally add the phrase ... or plane, or hyperplane...). Thus any point that fits the equation will fall exactly on the line. On the other hand, since the data points themselves do not fall on the line (recall that, by definition, the line is generated by applying some sort of [at this point undefined] averaging process), any given data point will not fall on the line described by the equation. The difference between these two points, the one on the line described by the equation and the one described by the data, is the error in the estimate of that data point by the equation. For each of the data points there is a corresponding point described by the equation, and therefore a corresponding error. The least-squares principle states that the sum of the squares of all these errors should have a minimum value; as we stated above, this will also provide the maximum likelihood equation. [Pg.34]

One part of that equation, [AᵀA]⁻¹Aᵀ, appears so commonly in chemometric equations that it has been given a special name: it is called the pseudoinverse of the matrix A. The uninverted term AᵀA is itself fairly commonly found as well. The pseudoinverse appears as a common component of chemometric equations because it confers the least-squares property on the results of the computations; that is, for whatever is being modeled, the computations defined by equation 69-1 produce a set of coefficients that give the smallest possible sum of the squares of the errors, compared to any other possible linear model. [Pg.472]
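
The least-squares property of the pseudoinverse is easy to verify numerically: applying [AᵀA]⁻¹Aᵀ to the data vector reproduces the coefficients found by a dedicated least-squares solver. The matrices here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3))   # hypothetical overdetermined design matrix
b = rng.standard_normal(8)

# Pseudoinverse (A^T A)^-1 A^T; applying it to b gives the coefficient
# vector with the smallest possible sum of squared errors ||A x - b||^2
pinv = np.linalg.inv(A.T @ A) @ A.T
coef = pinv @ b
```

(For ill-conditioned A, `np.linalg.lstsq` or `np.linalg.pinv`, which use orthogonal factorizations, are numerically preferable to forming AᵀA explicitly.)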

To compensate for the errors involved in experimental data, the number of data sets should be greater than the number of coefficients p in the model. Least squares is just the application of optimization to obtain the best solution of the equations, meaning that the sum of the squares of the errors between the predicted and the experimental values of the dependent variable y for each data point x is minimized. Consider a general algebraic model that is linear in the coefficients. [Pg.55]
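
A sketch of such a model that is linear in the coefficients, with more data points than coefficients (n = 6 > p = 3); the quadratic form and the coefficient values are made up for illustration:

```python
import numpy as np

# Hypothetical model linear in the coefficients: y = b0 + b1*x + b2*x^2.
# With 6 data points and 3 coefficients, least squares picks the
# coefficients minimizing the sum of squared prediction errors.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 1.0 + 2.0 * x - 0.5 * x**2

# Design matrix: one column per coefficient (the model need not be
# linear in x, only in the coefficients)
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```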

Then the least-squares solution is that which minimizes the sum of the squares of the residuals, J = eᵀe. The equation in x,... [Pg.30]

Considerable effort has gone into solving the difficult problem of deconvolution and curve fitting to a theoretical decay that is often a sum of exponentials. Many methods have been examined (O'Connor et al., 1979): methods of least squares, moments, Fourier transforms, Laplace transforms, phase-plane plot, modulating functions, and more recently maximum entropy. The most widely used method is based on nonlinear least squares. The basic principle of this method is to minimize a quantity that expresses the mismatch between data and fitted function. This quantity χ² is defined as the weighted sum of the squares of the deviations of the experimental response R(ti) from the calculated ones Rc(ti) ... [Pg.181]
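
A sketch of a weighted nonlinear least-squares fit to a sum of exponentials, using synthetic noise-free data and a hypothetical bi-exponential decay (convolution with an instrument response is omitted for brevity):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical bi-exponential decay R(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
def decay(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.05, 10, 200)
R = decay(t, 3.0, 0.5, 1.0, 3.0)          # synthetic "experimental" response

# For photon-counting data the statistical weights are 1/R(t_i), so the
# minimized quantity is chi^2 = sum((R_i - Rc_i)^2 / R_i); passing
# sigma = sqrt(R) to curve_fit applies exactly these weights
popt, _ = curve_fit(decay, t, R, p0=[2.0, 1.0, 2.0, 2.0], sigma=np.sqrt(R))
```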



© 2024 chempedia.info