Big Chemical Encyclopedia


Concentration random errors

The scatter of the points around the calibration line, i.e. the random errors, is of importance, since the best-fit line will be used to estimate the concentration of test samples by interpolation. The method used to calculate the random errors in the values for the slope and intercept is now considered. We must first calculate the standard deviation s_y/x, which is given by ... [Pg.209]
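For a least-squares line ŷ = a + bx fitted to n calibration points, the standard statistics are s_y/x = [Σ(y_i − ŷ_i)²/(n − 2)]^(1/2), s_b = s_y/x / [Σ(x_i − x̄)²]^(1/2) for the slope, and s_a = s_y/x·[Σx_i²/(n·Σ(x_i − x̄)²)]^(1/2) for the intercept. A minimal pure-Python sketch (the function name is ours, not the text's):

```python
import math

def calibration_stats(x, y):
    """Fit y = a + b*x by least squares and return a, b, s_y/x,
    and the standard deviations of the slope (s_b) and intercept (s_a)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    # s_y/x: standard deviation of the y-residuals about the fitted line
    resid_sq = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    s_yx = math.sqrt(resid_sq / (n - 2))
    s_b = s_yx / math.sqrt(sxx)
    s_a = s_yx * math.sqrt(sum(xi ** 2 for xi in x) / (n * sxx))
    return a, b, s_yx, s_b, s_a
```

The same s_b and s_a, multiplied by the appropriate t-value, give confidence limits for the slope and intercept.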

The design of a collaborative test must provide the additional information needed to separate the effect of random error from that due to systematic errors introduced by the analysts. One simple approach, which is accepted by the Association of Official Analytical Chemists, is to have each analyst analyze two samples, X and Y, that are similar in both matrix and concentration of analyte. The results obtained by each analyst are plotted as a single point on a two-sample chart, using the result for one sample as the x-coordinate and the value for the other sample as the y-coordinate. ... [Pg.688]
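One common way to summarize such a two-sample (Youden) chart numerically — the exact AOAC computation may differ — is to note that in the difference X − Y each analyst's systematic bias cancels, while in the sum X + Y it accumulates. A sketch under that assumption:

```python
import statistics

def youden_summary(pairs):
    """pairs: one (result for sample X, result for sample Y) tuple per analyst.
    Returns estimates of the random-error and total (random + systematic)
    standard deviations, assuming each analyst's bias affects X and Y equally."""
    d = [x - y for x, y in pairs]   # differences: systematic error cancels
    t = [x + y for x, y in pairs]   # totals: systematic error accumulates
    s_rand = statistics.stdev(d) / 2 ** 0.5
    s_tot = statistics.stdev(t) / 2 ** 0.5
    return s_rand, s_tot
```

If s_tot greatly exceeds s_rand, the scatter between analysts is dominated by systematic rather than random error.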

Errors in advection may completely overshadow diffusion. The amplification of random errors with each succeeding step causes numerical instability (or distortion). Higher-order differencing techniques are used to avoid this instability, but they may result in sharp gradients, which may cause negative concentrations to appear in the computations. Many of the numerical instability (distortion) problems can be overcome with a second-moment scheme (9), which advects the moments of the distributions instead of the pollutants alone. Six numerical techniques were investigated (10), including the second-moment scheme; three were found that limited numerical distortion: the second-moment, the cubic spline, and the chapeau function. [Pg.326]

The last element of realism we will add to the data is random error, or noise. In actual data there is noise both in the measurement of the spectra and in the determination of the concentrations. Accordingly, we will add random error to the data in both the absorbance matrices and the concentration matrices. [Pg.46]
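A minimal NumPy sketch of this step (the matrix sizes and noise levels here are illustrative, not the text's):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def add_noise(matrix, sd):
    """Return a copy of `matrix` with zero-mean Gaussian noise of
    standard deviation `sd` added to every element."""
    return matrix + rng.normal(0.0, sd, size=matrix.shape)

A = np.ones((5, 3))            # noiseless absorbance matrix (hypothetical)
C = np.full((5, 2), 0.5)       # noiseless concentration matrix (hypothetical)

A_noisy = add_noise(A, 0.002)  # spectral measurement noise
C_noisy = add_noise(C, 0.01)   # noise in the reference concentration values
```

Adding independent noise to both matrices mimics the two separate error sources the text describes.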

X and Y for each method. When the difference between X and Y is calculated (as d), the systematic error drops out, so that the difference d between X and Y contains no systematic errors, only random errors. We then estimate the precision by using the difference quantities. The difference between the true analyte concentrations of X and Y represents the true analyte difference between X and Y without the systematic error, but... [Pg.188]

A nonlinear relationship in the other component, however, will not show up that way. Let us try to draw a word picture to describe what we are trying to say here (the way we draw, this is by far the easier way). Since we could imagine this being plotted in three dimensions, the nonlinear relation will lie in the depth dimension and will be projected onto the plane of the predicted-versus-actual plot of the component being calibrated for. In this projection, the nonlinearity will show up as an extra error superimposed on the data, in addition to whatever random error exists in the known values of the composition. Unless the concentrations of the other component are known, however, there is no way to separate the effects of the nonlinearity from the random error. While we cannot actually draw this picture, graphical illustrations of these effects have been published previously [8]. [Pg.467]

Fenvalerate Detection Limits. To the extent that detection limits require knowledge of the calibration curve and random error (for x) as a function of concentration, all of the foregoing discussion is relevant, both for detection and estimation. However, curve shape and errors far from the detection limit are relatively unimportant there, in contrast to direct observations of the initial slope and the blank and its variability. (It will be seen that the initial observation in the current data set exceeded the ultimate detection limit by more than an order of magnitude.) [Pg.63]
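A common convention — the chapter's own definition may differ — expresses the concentration detection limit as three blank standard deviations divided by the initial calibration slope. A sketch under that assumption:

```python
import statistics

def detection_limit(blank_signals, initial_slope):
    """Concentration detection limit from replicate blank measurements
    and the initial calibration slope, using the common 3*s_blank
    criterion (an assumed convention, not necessarily the text's)."""
    s_blank = statistics.stdev(blank_signals)
    return 3.0 * s_blank / initial_slope
```

This makes explicit why the blank and its variability, together with the initial slope, dominate at the detection limit.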

If every calibration point (with the exception of replicates, which can be treated by averaging their response values) is treated as a separate knot, two different situations can be distinguished. In the case of very precisely defined response values y_i, obtained in practice by a high number of replicates in the presence of small random errors, it is possible to use interpolating splines. Presumably, the more frequent case envisaged will be the one where relatively few data points, whose random errors are not negligible and/or that are not highly replicated, span the concentration (or mass) domain. [Pg.169]
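The two situations can be illustrated with SciPy's `UnivariateSpline`, whose smoothing factor `s` selects between them: `s=0` forces an interpolating spline through every knot, while `s>0` gives a smoothing spline that tolerates random error in the responses. The data here are hypothetical:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])             # concentrations
y = np.array([0.05, 1.02, 1.98, 3.05, 3.97, 5.01])        # noisy responses (made up)

# Precise responses: interpolating spline passes through every knot.
interp = UnivariateSpline(x, y, k=3, s=0)

# Noisy, sparse responses: smoothing spline keeps the sum of squared
# residuals at or below s rather than honoring each point exactly.
smooth = UnivariateSpline(x, y, k=3, s=0.01)
```

For calibration data with non-negligible random error, the smoothing form is usually the appropriate choice.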

In practice, the real signal is never ideal. Systematic and random errors often occur due to, e.g. unresolved or badly resolved peaks, non-linearity (resulting in concentration-dependent peak shapes), noise and drift. [Pg.64]

The first problem is deciding which of these two common models to use. It has been argued that for spectrophotometric methods where the Beer-Lambert law is known to hold, Y = bX + e, the force-through-zero model, is the correct model to choose if the absorbance values are corrected for the blank. The correct way to carry out the calibration regression is to include the blank response at assumed zero concentration and use the model Y = bX + a + e instead. This may be a nicety from a practical standpoint for many assays, but there are instances where a force-through-zero model could produce erroneous results. Note that e denotes the random error term. Table 15 contains a set of absorbance concentration data from a UV assay. [Pg.49]
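The difference between the two models can be shown with a small pure-Python sketch (the data are hypothetical, with a true intercept of 0.05): when a genuine blank offset is present, forcing the line through zero biases the fitted slope.

```python
def fit_with_intercept(x, y):
    """Least squares for Y = bX + a + e; returns (a, b)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxx = sum((xi - xb) ** 2 for xi in x)
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sxx
    return yb - b * xb, b

def fit_through_zero(x, y):
    """Least squares for the force-through-zero model Y = bX + e."""
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return 0.0, b

# Hypothetical data: true slope 0.2, true intercept (blank) 0.05
x = [1.0, 2.0, 3.0, 4.0]
y = [0.25, 0.45, 0.65, 0.85]
a1, b1 = fit_with_intercept(x, y)   # recovers a = 0.05, b = 0.2
_, b0 = fit_through_zero(x, y)      # slope absorbs the blank: b0 > 0.2
```

This is the "erroneous results" case the text warns about: the forced-zero slope is inflated because it must absorb the uncorrected blank.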

The products were diluted with hexane (1 mg product / 200 ml hexane), separated using a DB5-MS column, and analyzed using an HP 5890 GC-MS Series II Plus. Random errors associated with GC-MS concentration measurements were less than 5%, and the reproducibility of conversion measurements was 15% of the reported values. Selectivity is defined as the percentage of biphenyl (the preferred HDS product from dibenzothiophene) divided by the percentage of dibenzothiophene converted times 100. [Pg.419]

This discussion deals with random errors and their propagation in reported HO concentrations. Equal attention should be given, of course, to systematic errors of calibration or instrument drift. [Pg.368]

Calibration of FAGE1 from a static reactor (a Teflon film bag that collapses as sample is withdrawn) has been reported (78). In static decay, HO reacts with a tracer T that has a loss that can be measured by an independent technique; T necessarily has no sinks other than HO reaction (see Table I) and no sources within the reactor. From equation 17, the instantaneous HO concentration is calculated from the instantaneous slope of a plot of ln[T] versus time. The presence of other reagents may be necessary to ensure sufficient HO; however, the mechanisms by which HO is generated and lost are of no concern, because the loss of the tracer by reaction with whatever HO is present is what is observed. Turbulent transport must keep the reactor's contents well mixed so that the analytically measured HO concentration is representative of the volume-averaged HO concentration reflected by the tracer consumption. If the HO concentration is constant, the random error in [HO] calculated from the tracer decay slope can be obtained from the slope uncertainty of a least-squares fit. Systematic error would arise from uncertainties in the rate constant for the T + HO reaction, but several tracers may be employed concurrently. In general, HO may be nonconstant in the reactor, so its concentration variation must be separated from noise associated with the [T] measurement, which must therefore be determined separately. [Pg.374]
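Under the constant-[HO] assumption, d ln[T]/dt = −k[HO], so [HO] = −slope/k and its random error follows from the least-squares slope uncertainty. A sketch (the function name is ours; equation 17 itself is not reproduced in this excerpt):

```python
import math

def ho_from_tracer(t, ln_T, k):
    """[HO] and its random error from a least-squares fit of ln[T] vs t,
    assuming d ln[T]/dt = -k*[HO] with constant [HO].
    k is the rate constant for the T + HO reaction."""
    n = len(t)
    tb = sum(t) / n
    yb = sum(ln_T) / n
    stt = sum((ti - tb) ** 2 for ti in t)
    slope = sum((ti - tb) * (yi - yb) for ti, yi in zip(t, ln_T)) / stt
    resid = [yi - (yb + slope * (ti - tb)) for ti, yi in zip(t, ln_T)]
    s_yx = math.sqrt(sum(r * r for r in resid) / (n - 2))
    s_slope = s_yx / math.sqrt(stt)       # random error of the slope
    return -slope / k, s_slope / k        # [HO] and its random error
```

Note that this propagates only the random error of the fit; the systematic uncertainty in k must be assessed separately, e.g. by running several tracers concurrently.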

Calibration curves were constructed with the NIST albumin (5 concentrations in triplicate) and with the FLUKA albumin (5 concentrations in duplicate) in the concentration range of 50-250 mg/l. The measured values of individual concentrations fluctuated around the fitted lines, with a standard error of 0.007 of the measured absorbance. The difference between the FLUKA and NIST albumin calibration lines was statistically insignificant, as evaluated by the t-test (P = 0.14 > 0.05). The calibration lines differed only within the range of random error. The FLUKA albumin was thus equivalent to the NIST albumin. Statistical evaluation was carried out using the regression analysis module of the statistical package SPSS, version 4.0. [Pg.223]

Sometimes a measurement involves a single piece of calibrated equipment with a known measurement uncertainty value σ, and then confidence limits can be calculated just as with the coin tosses. Usually, however, we do not know σ in advance; it needs to be determined from the spread in the measurements themselves. For example, suppose we made 1000 measurements of some observable, such as the salt concentration C in a series of bottles labeled 100 mM NaCl. Further, let us assume that the deviations are all due to random errors in the preparation process. The distribution of all of the measurements (a histogram) would then look much like a Gaussian, centered around the ideal value. Figure 4.2 shows a realistic simulated data set. Note that with this many data points, the near-Gaussian nature of the distribution is apparent to the eye. [Pg.69]
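A simulation along these lines (values chosen here for illustration: true concentration 100 mM, preparation σ = 2 mM) shows how σ and the confidence limits on the mean are recovered from the spread of the data:

```python
import random
import statistics

random.seed(1)  # reproducible "measurements"

true_c, sigma = 100.0, 2.0   # illustrative true value and preparation error
data = [random.gauss(true_c, sigma) for _ in range(1000)]

mean = statistics.mean(data)
s = statistics.stdev(data)              # estimate of the unknown sigma
ci95 = 1.96 * s / len(data) ** 0.5      # approximate 95% limits on the mean
```

With 1000 points, s lands close to the true σ, and the 95% interval on the mean shrinks to roughly σ/16 of a single measurement's spread.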

The relative standard deviation of repeat HPLC analysis of a drug metabolite standard was between 2 and 5%. Preliminary measurements of several serum samples via solid-phase extraction cleanup followed by HPLC analyses showed that the analyte concentration was between 5 and 15 mg/L and the standard deviation was 2.5 mg/L. The extraction step clearly increased the random error of the overall process. Calculate the number of samples required so that the sample mean would be within ±1.2 mg/L of the population mean at the 95% confidence level. [Pg.12]
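The usual first pass at this calculation uses n = (z·s/E)² with z = 1.96 for 95% confidence; a sketch:

```python
import math

s = 2.5    # standard deviation of the overall process, mg/L
E = 1.2    # allowed deviation of the sample mean, mg/L
z = 1.96   # 95% two-sided normal quantile

n = math.ceil((z * s / E) ** 2)   # (1.96 * 2.5 / 1.2)^2 = 16.67 -> 17
```

Since s is itself an estimate, a stricter treatment iterates with Student's t in place of z, which raises the answer somewhat (to about 20 here).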

Control charts are used to monitor variability and to provide a graphical display of statistical control. A standard, a reference material of known concentration, is analyzed at specified intervals (e.g., every 50 samples). The result should fall within a specified limit, as these are replicates. The only variation should be from random error. These results are plotted on a control chart to ensure that the random error is not increasing or that a...
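A common Shewhart-chart convention — assumed here, not stated in the excerpt — places warning limits at ±2s and action limits at ±3s about the mean of the replicate standard analyses:

```python
import statistics

def control_limits(reference_results):
    """Warning (mean +/- 2s) and action (mean +/- 3s) limits from
    replicate analyses of a standard (a common Shewhart convention)."""
    m = statistics.mean(reference_results)
    s = statistics.stdev(reference_results)
    return (m - 2 * s, m + 2 * s), (m - 3 * s, m + 3 * s)
```

A point outside the action limits, or a run of points outside the warning limits, signals that the random error is increasing or that a bias has crept in.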


