Big Chemical Encyclopedia


Least-squares minimization

For a specified mean and standard deviation, the number of degrees of freedom for a one-dimensional distribution (see sections on the least squares method and least squares minimization) of n data is (n − 1). This is because, given μ and σ, for n > 1 (say a half-dozen or more points), the first datum can have any value, the second datum can have any value, and so on, up to n − 1. When we come to find the... [Pg.70]
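This (n − 1) bookkeeping is exactly the `ddof` ("delta degrees of freedom") choice in NumPy's variance routines; a minimal sketch with made-up data:

```python
import numpy as np

# With the sample mean fixed, only n - 1 of the n residuals are free to vary,
# so the unbiased sample variance divides by (n - 1) rather than n.
data = np.array([4.8, 5.1, 4.9, 5.3, 5.0, 4.7])
n = len(data)
mean = data.mean()

# Sum of squared deviations from the sample mean
ss = np.sum((data - mean) ** 2)

s2_biased = ss / n          # divides by n
s2_unbiased = ss / (n - 1)  # divides by the n - 1 degrees of freedom

# NumPy exposes the same choice through the ddof argument
assert np.isclose(s2_unbiased, data.var(ddof=1))
assert np.isclose(s2_biased, data.var(ddof=0))
```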

When experimental data are to be fit with a mathematical model, it is necessary to allow for the fact that the data have errors. The engineer is interested in finding the parameters in the model as well as the uncertainty in their determination. In the simplest case, the model is a linear equation with only two parameters, and they are found by a least-squares minimization of the errors in fitting the data. Multiple regression is just linear least squares applied with more terms. Nonlinear regression allows the parameters of the model to enter in a nonlinear fashion. The following description of maximum likelihood applies to both linear and nonlinear least squares (Ref. 231). If each measurement point yᵢ has a measurement error Δyᵢ that is independently random and distributed with a normal distribution about the true model y(x) with standard deviation σ, then the probability of a data set is... [Pg.501]
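A minimal numerical sketch of this argument, using synthetic data and an assumed common σ: because the normal log-likelihood is a constant minus the sum of squared residuals over 2σ², maximizing the likelihood reduces to ordinary linear least squares.

```python
import numpy as np

# For independent normal errors with common sigma, the log-likelihood of data
# y_i given the model y(x_i) = a + b*x_i is
#   ln L = -(n/2) ln(2 pi sigma^2) - (1 / (2 sigma^2)) * sum_i (y_i - a - b*x_i)^2,
# so maximizing L is the same as minimizing the sum of squared residuals.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=x.size)  # true a=2.0, b=0.5

# Linear least squares via the design matrix [1, x]
A = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
```

The fitted intercept and slope recover the generating values to within the noise level.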

The total residual sum of squares, taken over all elements of E, achieves its minimum when each column Cj separately has minimum sum of squares. The latter occurs if each (univariate) column of Y is fitted by X in the least-squares way. Consequently, the least-squares minimization of E is obtained if each separate dependent variable is fitted by multiple regression on X. In other words the multivariate regression analysis is essentially identical to a set of univariate regressions. Thus, from a methodological point of view nothing new is added and we may refer to Chapter 10 for a more thorough discussion of theory and application of multiple regression. [Pg.323]
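The column-wise equivalence claimed here is easy to verify numerically; a minimal sketch with synthetic data (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))                       # predictor matrix
B_true = rng.normal(size=(3, 4))
Y = X @ B_true + 0.01 * rng.normal(size=(30, 4))   # four dependent variables

# Joint multivariate least squares: one call fits every column of Y at once
B_joint, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Separate univariate regressions, one per column of Y, give identical results
B_cols = np.column_stack(
    [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(Y.shape[1])]
)

assert np.allclose(B_joint, B_cols)
```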

The inverse calibration regresses the analytical values (concentrations), x, on the measured values, y. Although this violates a prerequisite of the Gaussian least-squares minimization, since the y-values are not error-free, it has been shown that predictions with inverse calibration are more precise than those with the classical calibration (Centner et al. [1998]). This holds true particularly for multivariate inverse calibration. [Pg.186]
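A toy illustration of the two calibration directions, with made-up concentrations and signals:

```python
import numpy as np

rng = np.random.default_rng(2)
conc = np.linspace(1.0, 10.0, 20)                                    # known x
signal = 0.3 + 2.0 * conc + rng.normal(scale=0.05, size=conc.size)   # measured y

# Classical calibration: fit y = a + b*x, then invert to predict x = (y - a)/b
b_cl, a_cl = np.polyfit(conc, signal, 1)

# Inverse calibration: regress x directly on y, x = c + d*y
d_inv, c_inv = np.polyfit(signal, conc, 1)

y_new = 10.3                      # a new measured signal (true x would be 5.0)
x_classical = (y_new - a_cl) / b_cl
x_inverse = c_inv + d_inv * y_new
```

With this near-noiseless toy data both routes agree closely; the cited precision advantage of inverse calibration shows up statistically over many noisy calibrations, not in a single example.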

In the original, algebraic implementation, this was done by determination of the three unknown quantities through least squares minimization of the MAD equation ... [Pg.122]

When the model used for Fcalc is that obtained by least-squares refinement of the observed structure factors, and the phases of Fcalc are assigned to the observations, the map obtained with Eq. (5.9) is referred to as a residual density map. The residual density is a much-used tool in structure analysis. Its features are a measure of the shortcomings of both the least-squares minimization and the functions which constitute the least-squares model for the scattering density. [Pg.93]

The relation between the least-squares minimization and the residual density follows from the Fourier convolution theorem (Arfken 1970). It states that the Fourier transform of a convolution is the product of the Fourier transforms of the individual functions: F(f ∗ g) = F(f)F(g). If G(y) is the Fourier transform of g(x)... [Pg.93]
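The theorem can be checked numerically in the discrete (circular) case; a minimal sketch with synthetic sequences:

```python
import numpy as np

# Numerical check of the convolution theorem F(f*g) = F(f)F(g) for the
# discrete, circular case.
rng = np.random.default_rng(3)
N = 64
f = rng.normal(size=N)
g = rng.normal(size=N)

# Circular convolution computed directly from the definition
conv = np.zeros(N)
for k in range(N):
    for m in range(N):
        conv[k] += f[m] * g[(k - m) % N]

# ... and via the product of the discrete Fourier transforms
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(conv, conv_fft)
```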

The atom-centered multipole expansion used in the density formalisms described in chapter 3 implicitly assigns each density fragment to the nucleus at which it is centered. Since the shape of the density functions is fitted to the observed density in the least-squares minimization, the partitioning is more flexible than that based on preconceived spherical atoms. [Pg.124]

The methods adjust the atomic net charges q in a least-squares minimization with a discrepancy function equal to the sum of the potential differences over all n sampling points ... [Pg.187]
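A hypothetical sketch of such a charge fit (made-up geometry, atomic units, and no net-charge constraint): since the potential of point charges is linear in the charges, minimizing the sum of squared potential differences over the sampling points is an ordinary linear least-squares problem.

```python
import numpy as np

# The potential at sampling point r_k from point charges q_j at positions R_j is
#   V_k = sum_j q_j / |r_k - R_j|   (atomic units),
# which is linear in the charges q_j, so the discrepancy
#   sum_k (V_obs,k - V_calc,k)^2
# is minimized by ordinary linear least squares.
rng = np.random.default_rng(4)
atoms = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, 2.3]])          # two hypothetical nuclei
q_true = np.array([0.4, -0.4])

# Sampling points scattered around the "molecule"
points = rng.normal(scale=4.0, size=(200, 3)) + np.array([0.0, 0.0, 1.15])
dist = np.linalg.norm(points[:, None, :] - atoms[None, :, :], axis=2)
A = 1.0 / dist                               # design matrix A[k, j] = 1/|r_k - R_j|
V_obs = A @ q_true                           # synthetic "observed" potential

q_fit, *_ = np.linalg.lstsq(A, V_obs, rcond=None)
assert np.allclose(q_fit, q_true)
```

Real ESP-fitting schemes add a constraint fixing the total molecular charge; that refinement is omitted here for brevity.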

These two equations present the extension of the Frumkin model to the adsorption of a one-surfactant system with two orientational states at the interface. The model equations now contain four free parameters, including ω1, ω2, and b. The equations are highly nonlinear, and the regression used in the analysis of surface tension data involves special combinations of Eqs. 23 and 24, which produces a special model function used in the least-squares minimization with measured surface tension data. Since the model function also contains surface... [Pg.32]

Plate 5 Models of the protein thioredoxin (human, reduced form) as obtained from x-ray crystallography (blue, PDB 1ert) and NMR (red, PDB 3trx). Only backbone alpha carbons are shown. The models were superimposed by least-squares minimization of the distances between corresponding atoms, using Swiss-PdbViewer. (For discussion, see Chapter 3.) Image SPV/POV-Ray. [Pg.276]
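This kind of superposition can be sketched with the SVD-based Kabsch algorithm, the standard route to the least-squares optimal rotation (synthetic coordinates below; Swiss-PdbViewer's internal implementation is not implied):

```python
import numpy as np

def kabsch_superpose(P, Q):
    """Rotate and translate point set P onto Q, minimizing the least-squares
    distance between corresponding points (Kabsch algorithm)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)   # center both sets
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T           # optimal rotation
    return (R @ Pc.T).T + Q.mean(axis=0)

rng = np.random.default_rng(5)
Q = rng.normal(size=(20, 3))                          # "reference" coordinates
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
P = Q @ Rz.T + np.array([1.0, -2.0, 0.5])             # rotated and shifted copy

aligned = kabsch_superpose(P, Q)
rmsd = np.sqrt(np.mean(np.sum((aligned - Q) ** 2, axis=1)))
assert rmsd < 1e-8
```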

Equations (9.29)-(9.31) were solved numerically with parameter estimation routines written for use with an IBM Continuous System Modeling Program (CSMP III). Relative least-squares minimization was performed. [Pg.185]

Figure 6.2. DynaFit result windows. After answering Yes to the questions "Is the initial estimate good enough?" and "Terminate the least-squares minimization?", the program terminates by plotting fitted results in the Graphic window and a summary in the Text window.
In a real system, the desorption energy (Ed) is dependent on the coverage. Therefore, Gillis-D'Hamers evaluated the surface heterogeneity by the constant-coverage, variable-heating-rate method developed by Richards and Rees [30]. This method estimates Ed as a function of coverage by least-squares minimization of the experimental data. [Pg.111]

Polydisperse Suspensions. Polydisperse suspensions were approximated by mixtures of monodisperse suspensions. In order not to introduce any bias into the analysis, all the data from three different instruments were analyzed using the same program. The distribution functions were assumed to be a sum of equally spaced histograms for PCS, delta functions for TS, and logarithmically spaced histograms for FDPA. The height and position of each component were optimized using a nonlinear least-squares minimization process (15). [Pg.138]
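A minimal sketch of this kind of fit, with a made-up two-component decay standing in for real instrument data: both the heights (amplitudes) and the positions (decay rates) of the components are optimized, which makes the problem nonlinear.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical PCS-style autocorrelation decay from a bimodal suspension,
# modeled as a sum of exponentials whose heights and decay rates are both free.
t = np.linspace(0.0, 5.0, 100)
g_obs = 0.7 * np.exp(-1.0 * t) + 0.3 * np.exp(-4.0 * t)   # synthetic "data"

def residuals(p):
    a1, k1, a2, k2 = p
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) - g_obs

# Nonlinear least squares with non-negative heights and rates
fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 5.0], bounds=(0.0, np.inf))
a1, k1, a2, k2 = fit.x
```

With noise-free synthetic data the fit recovers the generating amplitudes and rates; with real polydisperse data the histogram representation in the excerpt replaces the two discrete components.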

A comparison of the various fitting techniques is given in Table 5. Most of these techniques depend either explicitly or implicitly on a least-squares minimization. This is appropriate provided the noise present is normally distributed, in which case least-squares estimation is equivalent to maximum-likelihood estimation [147]. If the noise is not normally distributed, a least-squares estimation is inappropriate. Table 5 includes an indication of how each technique scales with N, the number of data points, for the case in which N is large. A detailed discussion of how different techniques scale with N, and also with the number of parameters, is given in the PhD thesis of Vanhamme [148]. [Pg.112]
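A small illustration of the caveat, with synthetic data: when a few gross outliers make the noise non-normal, a robust loss function recovers the parameters better than plain least squares.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 3.0 * x + rng.normal(scale=0.02, size=x.size)  # true intercept 1, slope 3
y[::10] += 2.0                     # a few gross outliers: noise is no longer normal

def res(p):
    return p[0] + p[1] * x - y

plain = least_squares(res, x0=[0.0, 0.0]).x                     # pure least squares
robust = least_squares(res, x0=[0.0, 0.0], loss="soft_l1",
                       f_scale=0.1).x                           # robust loss
```

The plain least-squares intercept is pulled noticeably toward the outliers, while the robust fit stays near the generating value.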

Most kinetic expressions, however, are not linear in the parameters, and two approaches can be followed. The first is to rewrite the expression in a linear form and apply linear least-squares minimization to obtain parameter values. Expression 14 can be reformulated into eq 47. [Pg.315]
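As a generic illustration of the linearization approach (the Arrhenius law standing in for expression 14, which is not reproduced here):

```python
import numpy as np

# The Arrhenius law  k = A * exp(-Ea / (R*T))  is nonlinear in A and Ea,
# but taking logarithms gives  ln k = ln A - (Ea/R) * (1/T),
# which is linear in the parameters ln A and Ea/R, so a linear
# least-squares fit of ln k versus 1/T recovers both.
R = 8.314                                   # J mol^-1 K^-1
A_true, Ea_true = 1.0e7, 5.0e4              # made-up kinetic parameters
T = np.linspace(300.0, 400.0, 8)            # temperatures, K
k = A_true * np.exp(-Ea_true / (R * T))     # synthetic rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R
A_fit = np.exp(intercept)
```

A caveat worth remembering: the log transform also reweights the errors, so for noisy data the linearized fit is not statistically equivalent to the direct nonlinear fit.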

In least-squares minimization, where it is assumed that the residuals are small, this Hessian can be approximated by the product of the first-derivative (Jacobian) matrices encountered earlier ... [Pg.316]
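A minimal Gauss–Newton sketch of this approximation, with a made-up exponential model: the step uses JᵀJ in place of the full Hessian.

```python
import numpy as np

# For residuals r_i(p) the exact Hessian of S = sum_i r_i^2 is
#   H = 2 * (J^T J + sum_i r_i * (second derivatives of r_i)),
# and when the residuals are small the second term is dropped:
#   H ~ 2 J^T J,   Gauss-Newton step:  (J^T J) dp = -J^T r.
x = np.linspace(0.1, 2.0, 20)
y = 2.5 * np.exp(-1.3 * x)               # noise-free data, model f = a*exp(-b*x)

def gauss_newton_step(p):
    a, b = p
    e = np.exp(-b * x)
    r = a * e - y                        # residuals
    J = np.column_stack([e, -a * x * e]) # Jacobian dr/da, dr/db
    return p - np.linalg.solve(J.T @ J, J.T @ r)

p = np.array([2.0, 1.0])                 # rough starting guess
for _ in range(20):
    p = gauss_newton_step(p)
```

Because this is a zero-residual problem, the dropped second-derivative term vanishes at the solution and the iteration converges rapidly to a = 2.5, b = 1.3.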

Now a linear least-squares minimization is used to determine λ. If λ takes the value 0.5, model 1 is preferred, whereas for −0.5 model 2 is the best. The confidence limits of λ should be evaluated, since they should not include the value corresponding to the other model. [Pg.319]

Since then, refinement methods have been considerably improved by the introduction of constrained least-squares minimization, which significantly reduces the number of variables and increases the data-to-parameter ratio. Even with these methods, water structure more remote from the protein surface tends to be blurred or featureless, rendering the interpretation of the less well-defined regions in solvent space ambiguous or impossible. [Pg.460]

Another approach to dealing with the ill-conditioned nature of Laplace inversion is regularization, also known as parsimony. Regularization involves the imposition of additional constraints designed to favor some distributions over others, consistent with the measured data. For example, Tikhonov's regularization [49,65] adds a smoothing constraint to the least-squares minimization, so that... [Pg.222]
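A minimal sketch of Tikhonov-style smoothing applied to a made-up Laplace-inversion problem: minimize ||A f − b||² + λ²||L f||², where L is a second-difference operator penalizing rough distributions; the penalized problem is solved as an ordinary least-squares problem on stacked matrices.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
t = np.linspace(0.01, 3.0, 40)                  # "measurement" times
s = np.linspace(0.1, 5.0, n)                    # decay-rate grid
A = np.exp(-np.outer(t, s))                     # ill-conditioned Laplace kernel
f_true = np.exp(-(s - 2.0) ** 2 / 0.5)          # smooth synthetic distribution
b = A @ f_true + 1e-4 * rng.normal(size=t.size) # data with a little noise

L = np.diff(np.eye(n), n=2, axis=0)             # second-difference operator
lam = 1e-2                                      # regularization parameter

# Stack the data equations with the weighted smoothness equations and solve
# the combined system by ordinary linear least squares.
A_aug = np.vstack([A, lam * L])
b_aug = np.concatenate([b, np.zeros(L.shape[0])])
f_reg, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
```

Choosing λ trades fidelity to the data against smoothness of the recovered distribution; practical schemes pick it by criteria such as the L-curve or generalized cross-validation.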

Carry out the least-squares minimization of the quantity in Eq. (7) according to an appropriate algorithm (presumably normal equations if the observational equations are linear in the parameters to be determined; otherwise some other, such as Marquardt's). The linear regression and Solver operations in spreadsheets are especially useful (see Chapter III). Convergence should not be assumed in the nonlinear case until successive cycles produce no significant change in any of the parameters. [Pg.681]
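For the linear case, the normal-equations route mentioned above can be sketched as follows (synthetic data); it agrees with the QR/SVD solution returned by a library least-squares routine:

```python
import numpy as np

# When the observational equations are linear in the parameters, minimizing
# ||A theta - y||^2 reduces to solving the normal equations
#   (A^T A) theta = A^T y.
rng = np.random.default_rng(8)
x = np.linspace(0.0, 1.0, 25)
A = np.column_stack([np.ones_like(x), x, x ** 2])       # quadratic model
y = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=x.size)

theta_normal = np.linalg.solve(A.T @ A, A.T @ y)        # normal equations
theta_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)     # QR/SVD route
assert np.allclose(theta_normal, theta_lstsq)
```

The orthogonal-factorization route is numerically preferable when AᵀA is ill-conditioned, since forming the product squares the condition number; in the nonlinear case the excerpt's advice stands: iterate (e.g. with Marquardt's method) until successive cycles change no parameter significantly.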

Figure 17.22. Experimentally determined changes in stress at the limit of linearity (σ) as a function of solids volume fraction (ϕ) for blends of milkfat (A), cocoa butter (B) and modified palm oil (C) with canola oil crystallized for 24 h at 5°C. Symbols represent the average and standard deviations of 2-6 samples. The line through the data was generated by nonlinear least-squares minimization of the model to the experimental data. Indicated are the estimates of the model parameters. The surface free energy term (δ) was fixed as a constant. (Taken from Marangoni and Rogers 2003).
Subsequently, the baboon α-lactalbumin structure was refined at 1.7-Å resolution by Acharya et al. (1989). Using the structure of domestic hen egg white lysozyme as the starting model, preliminary refinement was made using heavily constrained least-squares minimization in reciprocal space. Further refinement was made using stereochemical restraints at 1.7-Å resolution to a conventional crystallographic residual of 0.22 for 1141 protein atoms. [Pg.211]



© 2024 chempedia.info