
Gaussian least-squares method

Given the matching sets of measured data, x and y, it is now possible to estimate the model regression coefficients b. Assuming that the model errors (the values in f) are Gaussian-distributed, it can be proven that the least-squares method yields the value of b that minimizes the sum of squares of the model errors ... [Pg.360]
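As a concrete illustration (not from the source), here is a minimal numpy sketch of the least-squares estimate for a straight-line model; the data, noise level, and coefficient values are invented:

```python
import numpy as np

# Synthetic example: y = b0 + b1*x + Gaussian noise (illustrative model).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(x), x])

# Least-squares estimate b = argmin ||y - Xb||^2; lstsq solves the
# normal equations (X^T X) b = X^T y in a numerically stable way.
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # approximately [2.0, 0.5]
```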

Only in the simplest cases (a single Gaussian component, for example) may the conventional linear least-squares method be employed to solve for u. More commonly, either approximate linearized methods or nonlinear methods are employed. [Pg.32]
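A hedged sketch of the nonlinear route using scipy.optimize.curve_fit, here for an assumed two-Gaussian model; the peak parameters, starting guesses, and synthetic data are illustrative, not from the source:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian components (amplitude, centre, width each)."""
    g1 = a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2)
    return g1 + g2

# Synthetic overlapping doublet with noise (invented data).
rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 200)
y = two_gaussians(x, 1.0, -0.8, 0.7, 0.6, 1.1, 0.9) \
    + rng.normal(scale=0.02, size=x.size)

# Nonlinear least squares needs starting guesses; convergence depends on them.
p0 = [1.0, -1.0, 1.0, 0.5, 1.0, 1.0]
popt, pcov = curve_fit(two_gaussians, x, y, p0=p0)
print(popt)
```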

Figure 19.10 shows the CH resonance lines of the crystalline components of PVA fibers spun from the 10 wt% DMSO solution, measured selectively by the CPT1 pulse sequence. The figure also shows the results of the lineshape analysis by the computer-aided least-squares method. In these fibers it is necessary to introduce two Gaussians ... [Pg.723]

Instead, we mean here the use of experimental data that can be expected to lie on a smooth curve but fail to do so as the result of measurement uncertainties. Whenever the data are equidistant (i.e., taken at constant increments of the independent variable) and the errors are random and follow a single Gaussian distribution, the least-squares method is appropriate, convenient, and readily implemented on a spreadsheet. In section 3.3 we already encountered this procedure, which is based on least-squares fitting of the data to a polynomial, and uses so-called convoluting integers. This method is, in fact, quite old, and goes back to work by Sheppard (Proc. 5th... [Pg.318]
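The modern library form of this convoluting-integer (Savitzky-Golay) procedure is scipy.signal.savgol_filter; in the sketch below the window length, polynomial order, and test signal are illustrative choices, assuming equidistant data as the passage requires:

```python
import numpy as np
from scipy.signal import savgol_filter

# Equidistant, noisy data (constant increment of the independent variable).
rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 101)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

# Savitzky-Golay smoothing: each point is replaced by the value of a local
# least-squares polynomial fit; window_length and polyorder are the knobs.
y_smooth = savgol_filter(y, window_length=11, polyorder=3)
```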

Least-squares curve-fitting enables determination of accurate mass by finding the maximum of a simulated curve that best describes an observation. The Gaussian function is one of the functions most frequently used to describe features in mass spectra. Peak fitting is done with software tools that rely on the least-squares method. Other functions, such as Lorentzian and polynomial, can also be used to fit spectral features. Notably,... [Pg.234]
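One minimal sketch of peak-apex location by least squares, exploiting the fact that the logarithm of a Gaussian is a parabola; the m/z grid and intensities are invented for illustration, and real software uses more elaborate fits:

```python
import numpy as np

# Three-point Gaussian apex interpolation: the log of a Gaussian is a
# parabola, so a quadratic fit to ln(intensity) around the highest sample
# gives the peak centre (the "accurate mass") in closed form.
mz = np.array([499.96, 499.98, 500.00, 500.02, 500.04])
intensity = np.array([120.0, 480.0, 900.0, 510.0, 130.0])

i = int(np.argmax(intensity))
m = mz[i - 1:i + 2]
logI = np.log(intensity[i - 1:i + 2])

# Least-squares quadratic through three points (an exact fit here).
a, b, c = np.polyfit(m, logI, 2)
apex = -b / (2.0 * a)  # vertex of the parabola = fitted peak maximum
print(apex)
```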

Genetic function algorithms consider not only the presence or absence of a particular property in a regression equation, but also the form of the relationship between the property and biological activity. Linear, spline, Gaussian, and polynomial functions are frequently used. For 3D-QSAR, the genetic partial least-squares method replaces the typical multiple-regression calculation of the fitness with a PLS calculation. ... [Pg.193]
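A toy sketch of the underlying idea, on invented data: the functional form of a descriptor term is itself a modeling choice, and each candidate form can be scored by its least-squares residual. A genetic algorithm would search over such terms rather than enumerate them as done here:

```python
import numpy as np

# Candidate functional forms for one descriptor d: linear, truncated-spline,
# Gaussian, and quadratic, as named in the passage.
forms = {
    "linear":    lambda d: d,
    "spline":    lambda d: np.maximum(d - d.mean(), 0.0),  # <d - knot> term
    "gaussian":  lambda d: np.exp(-((d - d.mean()) ** 2)),
    "quadratic": lambda d: d ** 2,
}

rng = np.random.default_rng(3)
d = rng.normal(size=40)                       # descriptor values (synthetic)
activity = np.exp(-(d - d.mean()) ** 2) + rng.normal(scale=0.05, size=40)

for name, f in forms.items():
    X = np.column_stack([np.ones_like(d), f(d)])
    b, res, *_ = np.linalg.lstsq(X, activity, rcond=None)
    rss = res[0] if res.size else np.sum((activity - X @ b) ** 2)
    print(f"{name:9s} RSS = {rss:.3f}")       # lowest RSS = best-fitting form
```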

The Role of Computers in Activation Analysis. For many years spectral data accumulated by multichannel analysers (m.c.a.) have been analysed by digital computers. Many complex schemes of spectrum analysis have been developed, involving least-squares methods, iterative Gaussian fits to peaks, correlation of the subject spectrum with ideal or measured spectrum shapes, convolution methods, and many combinations and variations of these schemes. Quittner ... [Pg.107]

The diagonal elements of S, var(c), represent the parameter variances; S is sometimes called the variance-covariance matrix. Variance is a measure of the spread of expected values of random variables belonging to a specific probability distribution. As has been mentioned previously, the validity of the least-squares method for determining regression parameters rests on the errors in the data having a normal (Gaussian) distribution (the familiar bell-shaped curve) with zero mean and constant variance. The values of the parameters determined from data with such normal errors are, in a... [Pg.146]
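For a linear model with design matrix X, this matrix takes the standard form S = s²(XᵀX)⁻¹, with s² the residual variance. A minimal numpy sketch on invented data:

```python
import numpy as np

# Variance-covariance matrix of least-squares parameters:
# S = s^2 (X^T X)^{-1}, with s^2 = RSS / degrees of freedom.
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 30)
y = 1.5 + 0.8 * x + rng.normal(scale=0.2, size=x.size)

X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ b
dof = X.shape[0] - X.shape[1]
s2 = resid @ resid / dof                 # estimated error variance
S = s2 * np.linalg.inv(X.T @ X)          # variance-covariance matrix

var_b = np.diag(S)                       # parameter variances on the diagonal
print(var_b)
```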

This method, because it involves minimizing the sum of squares of the deviations xi − μ, is called the method of least squares. We have encountered the principle before in our discussion of the most probable velocity of an individual particle (atom or molecule), given a Gaussian distribution of particle velocities. It is very powerful, and we shall use it in a number of different settings to obtain the best approximation to a data set of scalars (arithmetic mean), the best approximation to a straight line, and the best approximation to parabolic and higher-order data sets of two or more dimensions. [Pg.61]
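Setting the derivative to zero makes the scalar claim concrete:

$$\frac{d}{d\mu}\sum_{i=1}^{n}(x_i-\mu)^2 \;=\; -2\sum_{i=1}^{n}(x_i-\mu) \;=\; 0 \quad\Rightarrow\quad \mu \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i,$$

i.e., the arithmetic mean is the least-squares estimate of a data set of scalars.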

In a well-behaved calibration model, residuals will have a Normal (i.e., Gaussian) distribution. In fact, as we have previously discussed, least-squares regression analysis is also a maximum-likelihood method, but only when the errors are Normally distributed. If the data do not follow the straight-line model, there will be an excessive number of residuals with too-large values, and the residuals will then not follow the Normal distribution. It follows that a test for Normality of the residuals will also detect nonlinearity. [Pg.437]
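A hedged sketch of such a test; the source does not name a specific one, so the Shapiro-Wilk test from scipy.stats stands in here, applied to the residuals of a deliberately underfit straight line on curved, invented data:

```python
import numpy as np
from scipy import stats

# Curved data fitted with a straight line: the lack of fit shows up as
# non-Normal residuals, which a normality test can flag.
rng = np.random.default_rng(5)
x = np.linspace(0, 1, 100)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.01, size=x.size)

b = np.polyfit(x, y, 1)                  # deliberately underfit with a line
resid = y - np.polyval(b, x)

stat, p = stats.shapiro(resid)           # Shapiro-Wilk test for Normality
print(f"Shapiro-Wilk p = {p:.3g}")       # small p: residuals not Normal
```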

As was shown, the conventional method for data reconciliation is that of weighted least squares, in which the adjustments to the data are weighted by the inverse of the measurement noise covariance matrix so that the model constraints are satisfied. The main assumption of the conventional approach is that the errors follow a normal (Gaussian) distribution. When this assumption is satisfied, conventional approaches provide unbiased estimates of the plant states. The presence of gross errors violates the assumptions of the conventional approach and makes the results invalid. [Pg.218]
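A minimal sketch of this weighted least-squares reconciliation for a single linear balance constraint; the stream values, covariance, and balance matrix A are invented. With constraint A·x̂ = 0 and measurement covariance V, the closed-form adjustment is x̂ = x − V Aᵀ(A V Aᵀ)⁻¹ A x:

```python
import numpy as np

# Classical linear data reconciliation: adjust measurements x so the
# balance A @ x_hat = 0 holds, weighting adjustments by the inverse of the
# measurement noise covariance V (here via the closed-form projection).
A = np.array([[1.0, 1.0, -1.0]])         # constraint: f1 + f2 = f3
x = np.array([10.2, 5.1, 14.7])          # raw measurements (inconsistent)
V = np.diag([0.1, 0.05, 0.2])            # measurement noise covariance

x_hat = x - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ x)
print(x_hat, A @ x_hat)                  # balance residual is ~0 after adjustment
```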

An interesting method of fitting was presented with the introduction, some years ago, of the model 310 curve resolver by E. I. du Pont de Nemours and Company. With this equipment, the operator chose between superpositions of Gaussian and Cauchy functions electronically generated and visually superimposed on the data record. The operator had freedom to adjust the component parameters and seek a visual best match to the data. The curve resolver provided an excellent graphic demonstration of the ambiguities that can result when any method is employed to resolve curves, whether the fit is visually based or firmly rooted in rigorous least squares. The operator of the model 310 soon discovered that, when data comprise two closely spaced peaks, acceptable fits can be obtained with more than one choice of parameters. The closer the blended peaks, the wider was the choice of parameters. The part played by noise also became rapidly apparent. The noisy data trace allowed the operator additional freedom of choice, when he considered the error bar that is implicit at each data point. [Pg.33]









