Big Chemical Encyclopedia


Equidistant least squares

Below we show a derivative version, called ELS for equidistant least squares, that uses the same algorithm but with easier-to-use input boxes. It provides two options: a fixed or a variable (self-optimizing) order, selected by the macros ELSfixed and ELSoptimized respectively. The main program is the subroutine ELS, which itself calls on several functions and on a subroutine, ConvolutionFactors. The latter need not be a separate subroutine, since it is called only once in the program. However, placing it in a separate subroutine makes it available for use in other programs, such as Interpolation, that could benefit from it. [Pg.451]
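The core of such a ConvolutionFactors routine can be sketched in a few lines. The Python sketch below (the function name and the use of NumPy are illustrative, not the book's VBA code) computes the convoluting integers for an arbitrary window half-width, polynomial order, and derivative order via the least-squares pseudo-inverse:

```python
import math
import numpy as np

def convolution_factors(half_width, poly_order, deriv=0):
    """Convolution weights for a least-squares polynomial fit of order
    `poly_order` over a window of 2*half_width + 1 equidistant points.
    deriv=0 smooths; deriv=1, 2, ... yields derivative weights
    (per unit x-spacing)."""
    x = np.arange(-half_width, half_width + 1)
    # Design matrix: column j holds x**j
    A = np.vander(x, poly_order + 1, increasing=True)
    # Row `deriv` of the pseudo-inverse evaluates the deriv-th polynomial
    # coefficient at the window center; the factorial converts that
    # coefficient into the derivative value
    return math.factorial(deriv) * np.linalg.pinv(A)[deriv]
```

For half_width = 2 and a quadratic, this reproduces the familiar tabulated smoothing integers (-3, 12, 17, 12, -3)/35; because deriv is a free parameter, a third derivative needs no extra machinery.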

Again, the above program can readily be modified. For example, a reader who wants to use the equidistant least-squares program to compute a third derivative (for which the convolution integers are not available in the usual tables) will find that this can be done with a few changes around InputBox 4, because the limitation to a second-order derivative is not inherent to the algorithm, but was inserted merely to illustrate how this is done. Similarly, in order to modify the criterion used in the F-test to, say, 1%, one only needs to change the value 0.05 in the line FValueTable (i, j) =... [Pg.461]

This system of i + 2 equations is nonlinear, and for this reason probably has not received attention in the least-squares method (207). We are able to give an explicit solution (163) for the particular case when x_ij = x_j and m_i = m for all values of i, that is, when all reactions of the series are studied at a set of temperatures, not necessarily equidistant, but the same for all reactions. Let us introduce... [Pg.440]

The procedure starts with dividing the sample into n sub-samples. We spike n-1 sub-samples with the analyte in equidistant steps and measure all n sub-samples. We use least-squares regression to calculate the regression line and extrapolate to the intersection. ... [Pg.199]
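As a hedged illustration of this standard-addition extrapolation (the data and function name are hypothetical, and NumPy's polyfit stands in for the least-squares step), the analyte amount is the magnitude of the x-axis intersection of the fitted line:

```python
import numpy as np

def standard_addition_estimate(spikes, signals):
    """Fit signal vs. spiked amount by least squares and return the
    analyte estimate, i.e. -(x-intercept) of the regression line."""
    slope, intercept = np.polyfit(spikes, signals, 1)
    return intercept / slope

# Hypothetical data: true analyte amount 2.0, sensitivity 1.5,
# equidistant spikes 0, 1, ..., 4
spikes = np.arange(5)
signals = 1.5 * (2.0 + spikes)
```

With noise-free data as above, the extrapolation recovers the spiked-in value exactly; with real measurements the same fit also yields the uncertainty of the intercept.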

In this chapter we will describe some of the more sophisticated uses of least squares, especially those for fitting experimental data to specific mathematical functions. First we will describe fitting data to a function of two or more independent parameters, or to a higher-order polynomial such as a quadratic. In section 3.3 we will see how to simplify least-squares analysis when the data are equidistant in the independent variable (e.g., with data taken at fixed time intervals, or at equal wavelength increments), and how to exploit this for smoothing or differentiation of noisy data sets. In sections 3.4 and 3.5 we will use simple transformations to extend the reach of least-squares analysis to many functions other than polynomials. Finally, in section 3.6, we will encounter so-called non-linear least-squares methods, which can fit data to any computable function. [Pg.90]

Least squares for equidistant data smoothing and differentiation... [Pg.94]

As explained in chapter 8, the least-squares analysis for such an equidistant data set (i.e., with constant x-increments) can be simplified to a set of... [Pg.94]

For so-called equidistant data sets (where equidistance applies to the independent variable), least-squares fitting is even simpler, and takes a form tailor-made for an efficient moving polynomial fit on a spreadsheet, requiring only access to a table of so-called convoluting integers, or software (such as described in section 10.9) where these integers are automatically computed. [Pg.118]

Instead, we mean here the use of experimental data that can be expected to lie on a smooth curve but fail to do so as the result of measurement uncertainties. Whenever the data are equidistant (i.e., taken at constant increments of the independent variable) and the errors are random and follow a single Gaussian distribution, the least-squares method is appropriate, convenient, and readily implemented on a spreadsheet. In section 3.3 we already encountered this procedure, which is based on least-squares fitting of the data to a polynomial, and uses so-called convoluting integers. This method is, in fact, quite old, and goes back to work by Sheppard (Proc. 5th... [Pg.318]
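For concreteness, here is a minimal sketch of such a moving-window smooth using the well-known 5-point quadratic convoluting integers (-3, 12, 17, 12, -3)/35; NumPy's convolution is used for convenience, whereas the text implements the same operation on a spreadsheet:

```python
import numpy as np

def smooth5(y):
    """Moving 5-point quadratic least-squares smooth: the weights are
    the tabulated convoluting integers divided by their sum, 35."""
    w = np.array([-3, 12, 17, 12, -3]) / 35.0
    # 'valid' drops the two points at each end where the full window
    # does not fit; the kernel is symmetric, so the order reversal
    # inherent in convolution does not matter
    return np.convolve(y, w, mode='valid')
```

A useful check: any quadratic is reproduced exactly by the second-order fit, so only genuine noise is attenuated.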

This program for a least-squares fit to equidistant data y,x with a moving polynomial uses the approach pioneered by Sheppard [Proc. [Pg.452]

The dissociation constants were established potentiometrically. As on titration in an aqueous medium the base liberated from the salt would precipitate, the pKa was determined in several alcohol-water mixtures (80/20, 70/30, etc.). The straight line through the measuring (sic) points was obtained by the method of least squares using equidistant x values. Next the pKa value in water was determined by extrapolation. In the same way the experimental error was calculated."... [Pg.185]

The indirect transformation technique models the r²-multiplied autocorrelation function r²γ(r) = p(r) as a superposition of equidistant B-splines (up to a cutoff maximum particle size that must be given in advance) that are Fourier transformed, subjected to any applicable collimation effects (the method was originally developed for Kratky slit collimation), and then least-squares fitted to the experimental intensity distribution, so that p(r) can be computed from the fit coefficients and the untransformed B-splines. The shape of p(r) is known for many standard particles, including solid spheres, core-shell hollow spheres, and rods. [Pg.368]









© 2024 chempedia.info