
Calibration, statistical calculation

We will not repeat Anscombe's presentation, but we will describe what he did, and strongly recommend that the original paper be obtained and perused (or, alternatively, the paper by Fearn [15]). In his classic paper, Anscombe provides four sets of (synthetic, to be sure) univariate data with obviously different characteristics. The data are arranged so as to permit univariate regression to be applied to each set. The defining characteristic of one of the sets is severe nonlinearity. But when you do the regression calculations, all four sets of data are found to have identical calibration statistics: the slope, y-intercept, SEE, R², F-test and residual sum of squares are the same for all four sets of data. Since the numeric values that are calculated are the same for all data sets, it is clearly impossible to use these numeric values to identify any of the characteristics that make each set unique. In the case that is of interest to us, those statistics provide no clue as to the presence or absence of nonlinearity. [Pg.425]
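The effect is easy to reproduce. The sketch below (plain Python, no libraries) fits an ordinary least-squares line to Anscombe's published sets I (roughly linear) and II (clearly curved) and shows that slope, intercept and R² come out essentially identical:

```python
def linreg_stats(x, y):
    """Ordinary least-squares fit; returns (slope, intercept, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Anscombe's x values (shared by sets I-III) and the y values of sets I and II
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

s1 = linreg_stats(x, y1)  # near-linear data
s2 = linreg_stats(x, y2)  # severely curved data
# slope, intercept and R^2 agree to about three decimals despite the curvature
```

Only a residual plot or visual inspection reveals the curvature that the summary statistics hide.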

The following is an example of a mathematical/statistical calculation of a calibration curve to test for true slope, residual standard deviation, confidence interval and correlation coefficient of a curve for a fixed or relative bias. A fixed bias means that all measurements exhibit an error of constant value. A relative bias means that the systematic error is proportional to the concentration being measured, i.e. a constant proportional increase with increasing concentration. [Pg.92]
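As a minimal illustration (the six-point data set below is invented), regressing measured values against true concentrations exposes both kinds of bias: an intercept different from zero signals a fixed bias, a slope different from 1 signals a relative bias. The sketch computes the slope, intercept, residual standard deviation and correlation coefficient from which those tests are built:

```python
import math

def calib_diagnostics(x, y):
    """OLS fit of measured (y) vs. true (x): slope, intercept,
    residual standard deviation and correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx                                         # slope (1 if no relative bias)
    a = my - b * mx                                       # intercept (0 if no fixed bias)
    s_res = math.sqrt(max(syy - b * sxy, 0.0) / (n - 2))  # residual std. deviation
    r = sxy / math.sqrt(sxx * syy)                        # correlation coefficient
    return {"slope": b, "intercept": a, "s_res": s_res, "r": r}

# hypothetical measurements carrying a constant offset of about +0.2 (fixed bias)
true = [1, 2, 3, 4, 5, 6]
measured = [1.21, 2.19, 3.22, 4.18, 5.21, 6.19]
res = calib_diagnostics(true, measured)
```

Confidence intervals for slope and intercept then follow from s_res and the appropriate t value for n − 2 degrees of freedom.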

After the deconvolution step, which gives the contribution coefficients of the reference spectra (see Chapter 2), the parameters can be calculated by using the same coefficients and a corresponding calibration file (Fig. 11). This calibration file includes the corresponding concentrations for specific compounds (nitrate, nitrite, anionic surfactants, etc.) and the values related to the reference spectra of mixtures (Table 3). The latter are statistically calculated for the purpose, through a preliminary stepwise regression study, from a set of at least 30 samples (with 30 corresponding values of parameters and 30 sets of contribution coefficients). [Pg.98]

The first three methods are very similar to the methods used for other spectroscopic methods. Statistical calculation methods can be used only in modern x-ray fluorescence instruments that come with the appropriate software. Different manufacturers or companies use different algorithms in their instruments. The main purpose of this software is to minimize the influence of measurement errors when computing the results. A wide variety of statistical methods are available. The statistical calculation method saves a lot of experimenting time, because only the measurement of the sample itself is needed for each analysis. Calibration or analyses of samples with added substance are not required in this case. [Pg.145]

By calibrating with solutions of known molality and statistical treatment of the measured values, the nonlinear behavior may be eliminated. Figure 8.15 shows an example of the calibration line for benzil, which was used in our investigation. Kcal (the calibration coefficient) is defined from the calibration line, as shown in Figure 8.15. The molecular weight is calculated by equation (8.7) ... [Pg.355]

The choice of resolution of instruments ranges from low-resolution student instruments, such as the Spectronic 20 with 20 nm resolution, to 0.05 nm double-grating research instruments. The typical instrument will have a resolution of about 2 nm, and built-in software that allows calibration with multiple standards, polynomial curves, and statistical calculations. [Pg.497]

Prepare a spreadsheet to construct a calibration curve of the ratio of the EtOH/PrOH areas vs. EtOH concentration, and to calculate the unknown concentration and its standard deviation. Refer to Equations 16.18 to 16.21 and the spreadsheet that follows in Chapter 16 for a refresher on the statistical calculations. [Pg.589]
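The same calculations can be sketched in Python rather than a spreadsheet (all concentrations and area ratios below are invented for illustration): fit the area ratio against EtOH concentration, interpolate the unknown, and propagate the standard deviation of the interpolated value with the usual calibration-line formula:

```python
import math

def unknown_from_calibration(x, y, y0, m):
    """Least-squares calibration line plus the standard deviation of an
    unknown x0 read from it; y0 is the mean of m replicate measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx
    a = my - b * mx
    s_res = math.sqrt(max(syy - b * sxy, 0.0) / (n - 2))
    x0 = (y0 - a) / b
    # standard deviation of the interpolated concentration
    s_x0 = (s_res / b) * math.sqrt(1 / m + 1 / n + (y0 - my) ** 2 / (b ** 2 * sxx))
    return x0, s_x0

conc = [0, 2, 4, 6, 8, 10]                    # % EtOH standards (made up)
ratio = [0.00, 0.41, 0.80, 1.22, 1.58, 2.02]  # EtOH/PrOH area ratios (made up)
x0, s_x0 = unknown_from_calibration(conc, ratio, y0=1.10, m=3)
```

Here y0 = 1.10 is the mean area ratio of three replicate injections of the unknown.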

Atomic methods also benefit from the incorporation of microcomputers into the instruments. Figure 10.7 shows the scheme of an atomic absorption spectrometer with a built-in microprocessor which controls the signal from the detector, previously amplified and converted to digital form. A series of ROMs store the programs for zero-setting, calibration, statistical treatment and calculation of integration areas and times. The microprocessor modulates the... [Pg.283]

With many methods a calibration curve is needed. For statistical calculations a linear curve is preferable because the statistics are simpler. The linear part of an otherwise non-linear curve can also be used for the calculations. Almost exclusively, the method of linear regression by least-squares approximation is used. [Pg.261]

In addition to laboratory glassware and equipment necessary for cleanup of the extract, traditional pesticide residue methods require expensive chromatographic instrumentation for identification and quantitation of residues. EIA methods require minimal amounts of glassware, disposable plasticware, or other supplies. Quantitative EIAs often make use of 96-well microtiter plates for multiple simultaneous assays of about a dozen extracts and the associated reference standard. Major equipment consists of a plate reader, which automatically measures the absorbance of each well. Plate readers can be used alone or in conjunction with a personal computer, which can correlate replicate measurements, construct the calibration curve, calculate results, and provide a complete statistical analysis. Such an EIA workstation can be obtained for roughly half the cost of the GC or HPLC system typically used for pesticide residue analysis. [Pg.53]

Include all calibration plots calculate the correlation coefficient for both calibration plots. Determine the precision and accuracy for triplicate injections of the ICV for both calibrations. Report on the concentration in ppm for all unknown samples analyzed. This is an example in which two different methods are used to conduct analysis of the same unknown sample. Use the statistical evaluation for comparing two dependent averages (paired data) to determine whether the two different methods give the same result. You may want to include a representative chromatogram obtained from the gasoline-contaminated sample. Ask your lab instructor to assist you in obtaining a hardcopy from the laboratory printers. [Pg.517]
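For the paired comparison, a minimal sketch (the ppm results below are invented; the critical t for 4 degrees of freedom at the 95 % level is 2.776):

```python
import math

def paired_t(a, b):
    """t statistic for paired data: mean difference over its standard error."""
    d = [ai - bi for ai, bi in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    sd = math.sqrt(sum((di - mean_d) ** 2 for di in d) / (n - 1))
    return mean_d / (sd / math.sqrt(n))

# ppm results for the same five unknowns by the two methods (made up)
method_1 = [10.2, 15.1, 12.8, 9.9, 14.3]
method_2 = [10.0, 15.4, 12.5, 10.1, 14.2]
t = paired_t(method_1, method_2)
# |t| < t_crit(4 df, 95 %) = 2.776  ->  the two methods agree
```

If |t| exceeds the tabulated critical value, the two methods give significantly different results at the chosen confidence level.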

Include all calibration data, ICVs, and sample unknowns for both instrumental methods. Perform a statistical evaluation in a manner that is similar to previous experiments. Use EXCEL or LSQUARES (refer to Appendix C) or other computer programs to conduct a least squares regression analysis of the calibration data. Calculate the accuracy (expressed as a percent relative error for the ICV) and the precision (relative standard deviation for the ICV) from both instrumental methods. Calculate the percent recovery for the matrix spike and matrix spike duplicate. Report on the concentration of Cr in the unknown soil samples. Be aware of all dilution factors and concentrations as you perform calculations ... [Pg.527]
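The accuracy, precision, and recovery figures reduce to a few lines of arithmetic; the certified ICV value and all measured results below are invented for illustration:

```python
import math

true_icv = 5.00                       # ppm, certified ICV value (assumed)
icv = [4.92, 5.05, 4.98]              # triplicate ICV results (made up)
mean_icv = sum(icv) / len(icv)
sd_icv = math.sqrt(sum((v - mean_icv) ** 2 for v in icv) / (len(icv) - 1))

pct_rel_error = 100 * (mean_icv - true_icv) / true_icv   # accuracy
pct_rsd = 100 * sd_icv / mean_icv                        # precision

# matrix spike recovery: (spiked result - unspiked result) / amount added
unspiked, spiked, added = 2.10, 7.02, 5.00               # ppm (made up)
pct_recovery = 100 * (spiked - unspiked) / added
```

The matrix spike duplicate is treated identically, and the two recoveries are compared to judge precision in the sample matrix.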

Flow of the carrier gas (nitrogen), 35 mL/min; column temperatures 170, 210, 230 and 290 °C; detector temperature (flame ionization), 300 °C; sample size, 0.3 µL. In order to determine the Kovats indices (I) of the Me3Si derivatives, several mixtures of n-alkanes (C10-C24) are injected under the same operating conditions. The Kovats indices are a means of calibrating the columns. The gas holdup time is calculated by the method of Peterson and Hirsch (1959), and the (I) values are statistically calculated by the computer methods of Grobler and Balisz (1974). [Pg.138]
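For the isothermal case, the Kovats index of a peak bracketed by the n-alkanes with n and n+1 carbons follows from logarithms of the adjusted retention times (raw retention time minus gas holdup). A sketch with hypothetical retention times:

```python
import math

def kovats_index(t_x, t_n, t_n1, n, t_m):
    """Isothermal Kovats retention index from raw retention times;
    t_m is the gas holdup time, n the carbon number of the earlier alkane."""
    lx, la, lb = (math.log10(t - t_m) for t in (t_x, t_n, t_n1))
    return 100 * (n + (lx - la) / (lb - la))

# hypothetical peak at 6.4 min between C14 (5.0 min) and C15 (8.2 min),
# with a gas holdup time of 1.0 min
I = kovats_index(t_x=6.4, t_n=5.0, t_n1=8.2, n=14, t_m=1.0)
```

The result, about 1451, says the derivative elutes as if it were an n-alkane of "14.5 carbons" on this column.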

The avoidance of a need to select wavelengths, however, is replaced by a corresponding requirement: the need to select the principal components to include in the calibration. However, it was previously shown that this problem is much easier to solve. The orthogonality of the principal components makes the use of the t statistic calculated for each coefficient of the principal component scores an eminently efficacious method of selection [7]. [Pg.185]

Second, in a condition-based maintenance context, the simple description, flexibility, calibration and statistical calculation make this model easy to implement and beneficial to use in a risk management framework. The evaluation of these meta-models is done through state-dependent stochastic processes using information obtained from non-destructive techniques. The idea is to facilitate the transfer between the available information and the model. [Pg.2194]

Recording and processing of retention and calibration data, calculations and statistical assessment of results. [Pg.148]

ProjectLeader allows the user to predict a wide variety of further chemical and physical properties for small molecules, including those containing metals, by developing their own QSPR models. Statistical tools provided to assist in this process include simple, multiple, and stepwise regressions. They can also be used to calibrate the calculated results to experimental data in order to improve the accuracy of the predictions. Examples of predicted properties include quantities such as boiling points, toxicity, antibacterial activity, acidity and basicity, vapor pressure, and water solubility. [Pg.3290]

Precision—The results from six laboratories were used to generate statistical data. Three values were recorded for each sample. The standard used to calibrate a standard curve was provided with the samples, and a volume of 40 µL was specified for all injections. For statistical calculations, the average value obtained on the neat (or blank) sample was... [Pg.980]

Figure 4.31. Key statistical indicators for validation experiments. The individual data files are marked in the first panels with the numbers 1, 2, and 3, and are in the same sequence for all groups. The lin/lin and log/log evaluation formats are indicated by the letters a and b, respectively. Limits of detection/quantitation cannot be calculated for the log/log format. The slopes, in percent of the average, are very similar for all three laboratories. The precision of the slopes is given as 100·t·CV(b)/b in [%]. The residual standard deviation follows a similar pattern as does the precision of the slope b. The LOD conforms nicely with the evaluation as required by the FDA. The calibration-design-sensitive LOQ puts an upper bound on the estimates. The X15% analysis can be high, particularly if the intercept should be negative.
The different statistical character of the three variables becomes most clear in the different uncertainties of the calibration and evaluation lines. Notwithstanding the fundamental differences between x(standard) and x(sample), the calculation of the calibration coefficients is carried out by regression calculus. [Pg.152]

Another way is robust parameter estimation on the basis of median statistics (see Sect. 4.1.2; Danzer [1989]; Danzer and Currie [1998]). For this, all possible slopes between all the calibration points, bij = (yj − yi)/(xj − xi) for j > i, are calculated. After arranging the bij in increasing order, the average slope can be estimated as the median by... [Pg.171]
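This median (Theil) slope estimator takes only a few lines; in the made-up data below the last calibration point is a deliberate gross outlier, which barely moves the median slope, whereas an ordinary least-squares fit would be pulled far from the true slope of about 2:

```python
from statistics import median

def theil_slope(x, y):
    """Median of all pairwise slopes b_ij = (y_j - y_i)/(x_j - x_i), j > i."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    return median(slopes)

x = [1, 2, 3, 4, 5]
y = [2.1, 4.0, 6.2, 8.1, 30.0]   # last calibration point is a gross outlier
b_robust = theil_slope(x, y)      # stays near 2 despite the outlier
```

With n points there are n(n − 1)/2 pairwise slopes, so the method is practical for typical calibration set sizes.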

In this chapter, as a continuation of Chapters 58 and 59 [1, 2], the confidence limits for the correlation coefficient are calculated for a user-selected confidence level. The user selects the test correlation coefficient, the number of samples in the calibration set, and the confidence level. A MathCad Worksheet (MathSoft Engineering & Education, Inc., 101 Main Street, Cambridge, MA 02142-1521) is used to calculate the z-statistic for the lower and upper limits and computes the appropriate correlation for the z-statistic. The upper and lower confidence limits are displayed. The Worksheet also contains the tabular calculations for any set of correlation coefficients (given as p). A graphic showing the general case entered for the table is also displayed. [Pg.393]
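Such worksheets are typically built on the Fisher z-transformation, and the same limits can be reproduced in a few lines (example values: r = 0.95, n = 20, two-sided 95 % so z_crit = 1.96):

```python
import math

def corr_confidence_limits(r, n, z_crit=1.96):
    """Confidence limits for a correlation coefficient via Fisher's z:
    z = atanh(r) is approximately normal with standard error 1/sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = corr_confidence_limits(r=0.95, n=20)
# roughly (0.876, 0.980): markedly asymmetric about r, as expected near r = 1
```

The asymmetry of the interval is the whole point of working in z rather than directly in r.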

