Big Chemical Encyclopedia


Sample Least-Squares Calculation

Reject any obviously bad points (see Rejection of Discordant Data). [Pg.681]

Decide on the model function to be used. This choice may be guided by a theoretical prediction or by a rough empirical assessment of the type of dependence likely. [Pg.681]

Choose a reasonable set of trial values for the adjustable parameters of the model if nonlinear fitting is to be done. A good choice of a° is very important to reduce computational time and to avoid false minima (see Goodness of Fit, toward the end of the discussion of tests). [Pg.681]

Carry out the least-squares minimization of the quantity in Eq. (7) according to an appropriate algorithm (presumably normal equations if the observational equations are linear in the parameters to be determined; otherwise some other algorithm, such as Marquardt's). The linear regression and Solver operations in spreadsheets are especially useful (see Chapter III). Convergence should not be assumed in the nonlinear case until successive cycles produce no significant change in any of the parameters. [Pg.681]

After convergence has been achieved, check the residuals y − ŷ to see if any data points should be deleted from the data set (see the end of the section on Goodness of Fit). If so, rerun the least-squares fit on the new, edited data set. [Pg.681]
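The nonlinear branch of this workflow can be sketched in code. The following is a minimal illustration, using a plain Gauss-Newton iteration rather than Marquardt's method and a hypothetical exponential model; since Eq. (7) is not reproduced here, an unweighted sum of squared residuals stands in for it. The convergence test mirrors the advice above: stop only when successive cycles change no parameter significantly.

```python
import numpy as np

def gauss_newton(f, jac, x, y, a0, tol=1e-10, max_iter=100):
    """Minimize S = sum (y - f(x; a))^2 by Gauss-Newton iteration.
    Convergence is declared only when no parameter changes
    significantly between successive cycles."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        r = y - f(x, a)                       # residuals at current parameters
        J = jac(x, a)                         # Jacobian of the model w.r.t. parameters
        da, *_ = np.linalg.lstsq(J, r, rcond=None)
        a = a + da
        if np.all(np.abs(da) <= tol * (1.0 + np.abs(a))):
            break
    return a

# Hypothetical model y = a0 * exp(-a1 * x), with a reasonable trial vector a0
model = lambda x, a: a[0] * np.exp(-a[1] * x)
jac = lambda x, a: np.column_stack([np.exp(-a[1] * x),
                                    -a[0] * x * np.exp(-a[1] * x)])
x = np.linspace(0.0, 2.0, 20)
y = model(x, [2.0, 1.5])                      # synthetic, noise-free data
a_fit = gauss_newton(model, jac, x, y, a0=[1.0, 1.0])
```

A good trial vector matters here just as the text says: a poor `a0` can send the iteration toward a false minimum or divergence, which is why Marquardt's damping is usually preferred in practice.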


TABLE 2 Sample least-squares calculation data and least-squares fits with two and three parameters... [Pg.682]

Further on, the 95% confidence interval width of the calibration curve may be calculated. If it is calculated according to Mandel (1964), it gives an impression not only of the imprecision of the standards but also of that of the samples. Herber et al. (1983) described the least-squares calculation, homogeneity of variances, linearity, and confidence interval as applied in atomic absorption spectrometry. [Pg.262]
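The width of such a confidence band can be sketched numerically. This is a minimal illustration assuming six hypothetical standards and a hardcoded two-sided t-value for n − 2 = 4 degrees of freedom; the data and the exact formula used here (the standard confidence band for a fitted line) are illustrative and not taken from Mandel (1964) or Herber et al. (1983).

```python
import numpy as np

# Six hypothetical calibration standards (concentration, absorbance)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.02, 0.21, 0.39, 0.62, 0.80, 1.01])

n = len(x)
m, b = np.polyfit(x, y, 1)                  # slope, intercept
resid = y - (m * x + b)
s = np.sqrt(np.sum(resid**2) / (n - 2))     # residual standard deviation
Sxx = np.sum((x - x.mean())**2)
t = 2.776                                   # t(0.975) for n - 2 = 4 df, from tables

def ci_halfwidth(x0):
    """Half-width of the 95% confidence band for the fitted line at x0."""
    return t * s * np.sqrt(1.0 / n + (x0 - x.mean())**2 / Sxx)
```

Note the characteristic behavior: the band is narrowest at the mean of the standards and widens toward the ends of the calibrated range.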

The amorphous polyethylene heat capacity was evaluated between 410 K and 450 K making use of data from samples 2 to 5, 11 to 13, 17, 19, 20, and 29. The spread of data at any one temperature was 5 to 8%. The averages at each temperature were used for a least-squares calculation. The equation ... [Pg.297]

To produce a calibration using classical least-squares, we start with a training set consisting of a concentration matrix, C, and an absorbance matrix, A, for known calibration samples. We then solve for the matrix, K. Each column of K will hold the spectrum of one of the pure components. Since the data in C and A contain noise, there will, in general, be no exact solution for equation [29]. So, we must find the best least-squares solution for equation [29]. In other words, we want to find K such that the sum of the squares of the errors is minimized. The errors are the difference between the measured spectra, A, and the spectra calculated by multiplying K and C ... [Pg.51]
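Equation [29] is not reproduced here; assuming the usual classical least-squares model A = KC (one measured spectrum per column of A, one pure-component spectrum per column of K), the least-squares estimate is K = A Cᵀ (C Cᵀ)⁻¹. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl, n_comp, n_samp = 50, 2, 8

K_true = rng.random((n_wl, n_comp))      # pure-component spectra (columns)
C = rng.random((n_comp, n_samp))         # known concentration matrix
# Measured spectra: A = K C plus a little noise
A = K_true @ C + 1e-4 * rng.standard_normal((n_wl, n_samp))

# Best least-squares solution of A ~ K C:  K = A C^T (C C^T)^-1
K = A @ C.T @ np.linalg.inv(C @ C.T)
```

Because C and A are noisy, K only approximates the true pure spectra; the quality of the recovery degrades as C Cᵀ becomes ill-conditioned (i.e., when calibration concentrations are nearly collinear).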

The amount of NIPA is determined based upon external standard calibration. A non-weighted linear least-squares estimate of the calibration curve is used to calculate the amount of NIPA in the unknowns. The response of any given sample must not exceed the response of the most concentrated standard. If this occurs, dilution of the sample will be necessary. [Pg.367]
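A minimal sketch of this external-standard workflow, with hypothetical standards and the range check described above (responses above the most concentrated standard trigger dilution):

```python
import numpy as np

# Hypothetical external standards: concentration (ng) vs detector response
conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
resp = np.array([1.1, 2.6, 5.2, 10.1, 19.8])

# Non-weighted linear least-squares calibration line
slope, intercept = np.polyfit(conc, resp, 1)

def quantify(sample_resp):
    """Invert the calibration line; flag responses above the top standard."""
    if sample_resp > resp.max():
        raise ValueError("response exceeds highest standard; dilute and rerun")
    return (sample_resp - intercept) / slope
```

The hard cutoff at the top standard reflects the rule in the text: the fit is trusted only for interpolation, never for extrapolation above the calibrated range.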

Concentrations of terbacil and its Metabolites A, B and C are calculated from a calibration curve for each analyte run concurrently with each sample set. The equation of the line based on the peak height of the standard versus nanograms injected is generated by least-squares linear regression analysis performed using Microsoft Excel. [Pg.582]

Using the valence profiles of the 10 measured directions per sample it is now possible to reconstruct as a first step the full three-dimensional momentum-space density. According to the Fourier-Bessel method [8] one starts with the calculation of the Fourier transform of the Compton profiles, which is the reciprocal form factor B(z) in the direction of the scattering vector q. The full B(r) function is then expanded in terms of cubic lattice harmonics up to the 12th order, which takes into account the first 6 terms in the series expansion. These expansion coefficients can be determined by a least-squares fit to the 10 experimental B(z) curves. Then the inverse Fourier transform of the expanded B(r) function corresponds to a series expansion of the momentum density, whose coefficients can be calculated from the coefficients of the B(r) expansion. [Pg.317]

To summarize the procedure used from calculus (again, refer to either of the indicated references for the details), the errors are first calculated as the difference between the computed values (from equation 69-11) and the (unknown) true value for each individual sample; these errors are then squared and summed. This is all done in terms of algebraic expressions derived from equation 69-11. The least-squares nature of the desired solution is then defined as the smallest sum of squares of the error values, which is the minimum possible value of this sum of squares that could be obtained from any possible set of values for the computed coefficients bt. [Pg.474]
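In symbols, and assuming (as the passage implies) a model that is linear in the coefficients bt with a design matrix X, the minimization step reduces to the familiar normal equations; equation 69-11 itself is not reproduced here, so this is the generic form:

```latex
S = \sum_i \left( y_i - \hat{y}_i \right)^2 , \qquad
\frac{\partial S}{\partial b_t} = 0 \ \text{for each } t
\quad \Longrightarrow \quad
\mathbf{X}^{\mathsf T}\mathbf{X}\,\mathbf{b} = \mathbf{X}^{\mathsf T}\mathbf{y} .
```

Setting every partial derivative to zero is what guarantees the sum of squares is the smallest attainable over all possible coefficient sets.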

The optimal number of components from the prediction point of view can be determined by cross-validation (10). This method compares the predictive power of several models and chooses the optimal one. In our case, the models differ in the number of components. The predictive power is calculated by a leave-one-out technique, so that each sample gets predicted once from a model in the calculation of which it did not participate. This technique can also be used to determine the number of underlying factors in the predictor matrix, although if the factors are highly correlated, their number will be underestimated. In contrast to the least squares solution, PLS can estimate the regression coefficients also for underdetermined systems. In this case, it introduces some bias in trade for the (infinite) variance of the least squares solution. [Pg.275]
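The leave-one-out idea can be sketched compactly. In this illustration an ordinary least-squares model stands in for PLS (a full PLS implementation is beyond a snippet), and polynomial degree stands in for the number of components; the data are hypothetical. Each sample is predicted exactly once by a model fitted without it, and the complexity with the lowest prediction error sum (PRESS) wins.

```python
import numpy as np

def loo_press(X, y):
    """Leave-one-out PRESS: each sample is predicted once by a model
    in whose calculation it did not participate."""
    n = len(y)
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        press += float(y[i] - X[i] @ beta) ** 2
    return press

# Hypothetical quadratic data with slight noise; compare model complexities
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 15)
y = 1.0 + 2.0 * x + 3.0 * x**2 + 0.01 * rng.standard_normal(15)
press = {d: loo_press(np.vander(x, d + 1), y) for d in (1, 2, 3)}
best = min(press, key=press.get)
```

An underfitted model (degree 1 here) is penalized by bias, an overfitted one by variance; cross-validation selects the compromise, which is the same logic used to pick the number of PLS components.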

In addition to spectra of the reference minerals listed in Table II, the least-squares components in each iteration included 3 "spectra" representing 1) moisture in KBr blank (obtained by subtraction of 2 KBr blank spectra), 2) a constant baseline offset (1 abs from 4000 to 400 cm⁻¹), and 3) a sloping linear baseline (line from 1 abs at 4000 cm⁻¹ to 0 abs at 400 cm⁻¹). The final mineral component concentrations were normalized to 100%, disregarding the contributions of the three artificial components. The normalized least-squares results for each sample were combined with the ash elemental composition of each reference mineral to calculate the elemental composition of the ASTM oxidized ash corresponding to each LTA. This was done by multiplying the concentration of each reference mineral in a sample by the concentration of each elemental oxide in the reference mineral, then summing over each oxide. [Pg.47]

The procedure starts with dividing the sample into n sub-samples. We spike n − 1 sub-samples with the analyte in equidistant steps and measure all n sub-samples. We use least-squares regression to calculate the regression line and extrapolate to the intersection ... [Pg.199]
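This is the standard-addition pattern: the fitted line is extrapolated back to zero signal, and the magnitude of the x-intercept gives the original concentration. A minimal sketch with hypothetical spiked sub-samples:

```python
import numpy as np

# Hypothetical spiked sub-samples: amount of analyte added vs measured signal
added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
signal = np.array([0.50, 0.75, 1.01, 1.24, 1.49])

# Least-squares regression line through the spiked points
m, b = np.polyfit(added, signal, 1)

# Extrapolate to zero signal: x-intercept is -b/m, and its magnitude
# b/m is the concentration originally present in the sample
conc = b / m
```

The extrapolation assumes the response stays linear below the lowest spike; curvature in that region biases the intercept, which is one reason the equidistant spiking steps above matter.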

Selected entries from Methods in Enzymology [vol, page(s)] Association constant determination, 259, 444-445 buoyant mass determination, 259, 432-433, 438, 441, 443, 444 cell handling, 259, 436-437 centerpiece selection, 259, 433-434, 436 centrifuge operation, 259, 437-438 concentration distribution, 259, 431 equilibration time, estimation, 259, 438-439 molecular weight calculation, 259, 431-432, 444 nonlinear least-squares analysis of primary data, 259, 449-451 oligomerization state of proteins [determination, 259, 439-441, 443 heterogeneous association, 259, 447-448 reversibility of association, 259, 445-447] optical systems, 259, 434-435 protein denaturants, 259, 439-440 retroviral protease, analysis, 241, 123-124 sample preparation, 259, 435-436 second virial coefficient [determination, 259, 443, 448-449 nonideality contribution, 259, 448-449] sensitivity, 259, 427 stoichiometry of reaction, determination, 259, 444-445 terms and symbols, 259, 429-431 thermodynamic parameter determination, 259, 427, 443-444, 449-451. [Pg.632]

Chlorella sorokiniana var. pacificensis were treated with 180 ppm O3 for 50 min in autotrophic media. Lipids were extracted using chloroform/methanol and prepared for gas-liquid chromatography (GLC) as described by Frederick and Heath (24). The average % concentration of fatty acids was calculated from 3 GLC runs in 5 separate samples. The O3/O2 column refers to ratios of average % concentration, and ± represents the standard deviation. Confidence Level was calculated by least-squares analysis. [Pg.73]





© 2024 chempedia.info