Random summing calibration

Set 2: a background spectrum, a calibration spectrum and two standard spectra, A and B, measured on the same detector system as the Set 1 spectra, but with the sources placed on the cap of the detector to enhance the true coincidence summing and random summing problems. [Pg.307]

Van der Voet [21] advocates the use of a randomization test (cf. Section 12.3) to choose among different models. Under the hypothesis of equivalent prediction performance of two models, A and B, the errors obtained with these two models come from one and the same distribution. It is then permissible to exchange the observed errors e_iA and e_iB that are associated with the two models for the ith sample. In the randomization test this is actually done in about half of the cases: for each object i the two residuals are swapped or not, each with a probability of 0.5. Thus, for all objects in the calibration set about half will retain the original residuals; for the other half they are exchanged. One now computes the error sum of squares for each of the two sets of residuals, and from that the ratio F = SSE_A/SSE_B. Repeating the process some 100-200 times yields a distribution of such F-ratios, which serves as a reference distribution for the actually observed F-ratio. When, for instance, the observed ratio lies in the extreme higher tail of the simulated distribution one may... [Pg.370]
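As a concrete illustration, here is a minimal sketch of such a randomization test in Python. The function and parameter names (randomization_test, err_a, err_b, n_perm) are illustrative assumptions, not taken from the cited reference.

```python
# Sketch of Van der Voet-style randomization test comparing the prediction
# errors of two calibration models A and B (names are illustrative).
import numpy as np

def randomization_test(err_a, err_b, n_perm=199, seed=0):
    """Return the observed SSE ratio and its permutation reference distribution."""
    rng = np.random.default_rng(seed)
    err_a = np.asarray(err_a, dtype=float)
    err_b = np.asarray(err_b, dtype=float)
    f_obs = np.sum(err_a**2) / np.sum(err_b**2)      # observed F = SSE_A / SSE_B

    f_ref = np.empty(n_perm)
    for k in range(n_perm):
        # For each object, swap the two residuals with probability 0.5
        swap = rng.random(err_a.size) < 0.5
        a = np.where(swap, err_b, err_a)
        b = np.where(swap, err_a, err_b)
        f_ref[k] = np.sum(a**2) / np.sum(b**2)
    return f_obs, f_ref
```

If the observed ratio lies in the extreme tail of the reference distribution f_ref, the hypothesis of equivalent prediction performance of the two models becomes implausible.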

Figure 65-1 shows a schematic representation of the F-test for linearity. Note that there are some similarities to the Durbin-Watson test. The key difference between this test and the Durbin-Watson test is that in order to use the F-test as a test for (non)linearity, you must have measured many repeat samples at each value of the analyte. The variabilities of the readings for each sample are pooled, providing an estimate of the within-sample variance. This is indicated by the label "Operative difference for denominator". By Analysis of Variance, we know that the total variation of the residuals around the calibration line is the sum of the within-sample variance (s²_within) plus the variance of the means around the calibration line. Now, if the residuals are truly random, unbiased, and in particular the model is linear, then we know that the means for each sample will cluster... [Pg.435]
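The pooling described above can be sketched as a standard lack-of-fit ANOVA. The code below is an assumed illustration under that interpretation; the function name and layout are mine, not necessarily those of Figure 65-1.

```python
# Sketch of an F-test for linearity using replicate measurements at each
# analyte level: pure error in the denominator, lack of fit in the numerator.
import numpy as np
from scipy import stats

def lack_of_fit_test(x, y):
    """x, y: 1-D arrays with replicate y readings at repeated x values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    fitted = intercept + slope * x

    levels = np.unique(x)
    n, m, p = len(y), len(levels), 2          # observations, levels, parameters
    # Pure error: scatter of replicates around their own level means (denominator)
    ss_pe = sum(np.sum((y[x == xi] - y[x == xi].mean())**2) for xi in levels)
    # Lack of fit: scatter of the level means around the calibration line
    ss_lof = np.sum((y - fitted)**2) - ss_pe

    f = (ss_lof / (m - p)) / (ss_pe / (n - m))
    p_value = stats.f.sf(f, m - p, n - m)
    return f, p_value
```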

Cosmic rays hit the CCD array at random times with arbitrary intensity, resulting in spikes at individual pixels. When the array is summed and processed, sharp spectral features of arbitrary intensities may appear in the Raman spectra. These artifacts are typically removed before multivariate calibration. [Pg.401]
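A minimal sketch of one common despiking approach, assuming a median-filter criterion; the window size, threshold and function name are illustrative, and the cited text does not specify which algorithm is used.

```python
# Sketch: flag pixels that deviate strongly from a median-filtered copy of the
# spectrum and replace them with the local median (one of several approaches).
import numpy as np
from scipy.ndimage import median_filter

def despike(spectrum, window=7, n_sigma=6.0):
    spectrum = np.asarray(spectrum, dtype=float)
    smooth = median_filter(spectrum, size=window)
    resid = spectrum - smooth
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust scale
    spikes = np.abs(resid) > n_sigma * sigma
    cleaned = spectrum.copy()
    cleaned[spikes] = smooth[spikes]
    return cleaned
```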

The underlying assumption in statistical analysis is that the experimental error is not merely repeated in each measurement, otherwise there would be no gain in multiple observations. For example, when the pure chemical we use as a standard is contaminated (say, with water of crystallization), so that its purity is less than 100%, no amount of chemical calibration with that standard will reveal the existence of such a bias, even though all conclusions drawn from the measurements will contain consequent, determinate or systematic errors. Systematic errors act uni-directionally, so that their effects do not average out no matter how many repeat measurements are made. Statistics does not deal with systematic errors, but only with their counterparts, indeterminate or random errors. This important limitation of what statistics does, and what it does not, is often overlooked, but should be kept in mind. Unfortunately, the sum total of all systematic errors is often larger than that of the random ones, in which case statistical error estimates can be very misleading if misinterpreted in terms of the presumed reliability of the answer. Insurance companies know this well, and use exclusion clauses for, say, pre-existing illnesses, for war, or for unspecified "acts of God", all of which act uni-directionally to increase the covered risk. [Pg.39]

Summing of events, either as a result of coincident emission of gamma rays in the decay chain of the nuclide of interest or of random coincident emissions, can lead to significant losses from, or additions to, an otherwise clean peak (De Bruin and Blaauw 1992; Becker et al. 1994). While coincidence losses are not an issue for comparator NAA, calibration and/or computational correction must be applied (Debertin and Helmer 1988; Blaauw and Celsema 1999) to arrive at true peak areas for other methods of calibration. [Pg.1603]

If at the points x_i of the calibration range only single measurements y_i are available, the quality of the calibration model can be evaluated only as a whole (as the sum of systematic model errors and random experimental errors), usually by testing the goodness of fit and by testing the coefficient of determination. [Pg.117]
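A minimal sketch of such an overall evaluation, combining the residual sum of squares with the coefficient of determination; the function name and variable names are illustrative assumptions.

```python
# Sketch: with single measurements per calibration point, the residual sum of
# squares mixes model error and random error; R² summarizes the overall fit.
import numpy as np

def calibration_quality(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    ss_res = np.sum(resid**2)                  # systematic + random contributions
    ss_tot = np.sum((y - y.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    return ss_res, r2
```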

The sum of the squared residuals increases with the number n of calibration points. The square root of the sum of the squared residuals divided by n - 2 is called the residual standard deviation, RSD, which represents a mean value for the residuals. In ordinary linear regression the RSD forms a constant band on both sides of the straight line, corresponding to a random scatter of the residuals. [Pg.114]
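Written out for n calibration points with observations y_i and fitted values ŷ_i, the definition above reads:

```latex
\mathrm{RSD} = \sqrt{\frac{\sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2}{n - 2}}
```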

