
Standard deviation analysis

The uncertainties given are calculated standard deviations. Analysis of the interatomic distances yields a self-consistent interpretation in which ZnI is assumed to be quinquevalent and ZnII quadrivalent, while Na may have a valence of unity or one as high as 1½, the excess over unity being suggested by the interatomic distances and being, if real, presumably a consequence of electron transfer. A valence electron number of approximately 432 per unit cell is obtained, which is in good agreement with the value 428–448 predicted on the basis of a filled Brillouin polyhedron defined by the forms {444}, {640}, and {800}. ... [Pg.597]

In the first instance, when the results were analyzed by simple mean and standard deviation analysis, Amico et al. [16-18] obtained a large relative standard deviation, indicating a limitation of this method for the proper characterization of the diameter. They then used Weibull probability density and cumulative distribution functions [20,56,58] to estimate two parameters, the characteristic life and a dimensionless positive pure number, which determine the scale and shape of the distribution curve. For this, they adopted two methods: the maximum likelihood technique, which requires the solution of two nonlinear equations, and the analytical method using the probability plot, as mentioned earlier for coir fibers. [Pg.229]

Multiple linear regression analysis is a widely used method, in this case assuming that a linear relationship exists between solubility and the 18 input variables. The multilinear regression analysis was performed with the SPSS program [30]. The training set was used to build a model, and the test set was used for the prediction of solubility. The MLRA model provided, for the training set, a correlation coefficient r = 0.92 and a standard deviation s = 0.78, and for the test set, r = 0.94 and s = 0.68. [Pg.500]

The first application of the Gaussian distribution is in medical decision making or diagnosis. We wish to determine whether a patient is at risk because of the high cholesterol content of his blood. We need several pieces of input information: an expected or normal blood cholesterol, the standard deviation associated with the normal blood cholesterol count, and the blood cholesterol count of the patient. When we apply our analysis, we shall arrive at a diagnosis, either yes or no: the patient is at risk or is not at risk. [Pg.17]
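The decision rule above can be sketched numerically: from the population mean and standard deviation, a z-score locates the patient's count, and the Gaussian tail area gives the fraction of the normal population expected to exceed it. The numbers below are hypothetical, chosen only for illustration.

```python
import math

def gaussian_tail(x, mean, sd):
    """Fraction of a normal population expected to exceed x."""
    z = (x - mean) / sd
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Hypothetical inputs: population mean 200 mg/dL, standard deviation
# 25 mg/dL, patient measures 265 mg/dL.
p = gaussian_tail(265, 200, 25)
print(f"z = {(265 - 200) / 25:.2f}, fraction above patient = {p:.4f}")
```

A small tail area (here well under 1%) would support the "at risk" diagnosis; where to draw the cutoff is a clinical judgment, not a statistical one.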

Precision is a measure of the spread of data about a central value and may be expressed as the range, the standard deviation, or the variance. Precision is commonly divided into two categories: repeatability and reproducibility. Repeatability is the precision obtained when all measurements are made by the same analyst during a single period of laboratory work, using the same solutions and equipment. Reproducibility, on the other hand, is the precision obtained under any other set of conditions, including that between analysts, or between laboratory sessions for a single analyst. Since reproducibility includes additional sources of variability, the reproducibility of an analysis can be no better than its repeatability. [Pg.62]

To evaluate the effect of indeterminate error on the data in Table 4.1, ten replicate determinations of the mass of a single penny were made, with results shown in Table 4.7. The standard deviation for the data in Table 4.1 is 0.051, and it is 0.0024 for the data in Table 4.7. The significantly better precision when determining the mass of a single penny suggests that the precision of this analysis is not limited by the balance used to measure mass, but is due to a significant variability in the masses of individual pennies. [Pg.63]
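The comparison of the two standard deviations is a direct application of the sample standard deviation (n − 1 denominator). The masses below are hypothetical stand-ins, since the data of Tables 4.1 and 4.7 are not reproduced here.

```python
import statistics

# Hypothetical replicate masses (g) of a single penny, standing in for
# the ten replicate determinations of Table 4.7.
masses = [3.107, 3.108, 3.106, 3.109, 3.104,
          3.107, 3.106, 3.108, 3.105, 3.107]

s = statistics.stdev(masses)   # sample standard deviation, n - 1 denominator
print(f"mean = {statistics.mean(masses):.4f} g, s = {s:.4f} g")
```

A much smaller s for replicate weighings of one penny than for weighings of different pennies is what isolates the balance's contribution from penny-to-penny variability.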

Uncertainty expresses the range of possible values that a measurement or result might reasonably be expected to have. Note that this definition of uncertainty is not the same as that for precision. The precision of an analysis, whether reported as a range or a standard deviation, is calculated from experimental data and provides an estimation of indeterminate error affecting measurements. Uncertainty accounts for all errors, both determinate and indeterminate, that might affect our result. Although we always try to correct determinate errors, the correction itself is subject to random effects or indeterminate errors. [Pg.64]

It is unclear, however, how many degrees of freedom are associated with t(α, ν), since there are two sets of independent measurements. If the variances s_A² and s_B² estimate the same σ², then the two standard deviations can be factored out of equation 4.19 and replaced by a pooled standard deviation, s_pool, which provides a better estimate for the precision of the analysis. Thus, equation 4.19 becomes... [Pg.89]
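A minimal sketch of the pooling step, using the standard weighted combination of the two sample variances (the data sets are hypothetical):

```python
import math
import statistics

def pooled_stdev(a, b):
    """Pooled standard deviation of two samples assumed to share one sigma:
    s_pool = sqrt(((nA-1)*sA^2 + (nB-1)*sB^2) / (nA + nB - 2))."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))

# Hypothetical replicate results from two analysts
a = [10.2, 10.5, 10.4, 10.3]
b = [10.1, 10.4, 10.2, 10.3, 10.5]
print(f"s_pool = {pooled_stdev(a, b):.4f}")
```

The divisor nA + nB − 2 is also the degrees of freedom to use with the t statistic, which resolves the ambiguity raised above.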

Vitha, M. F.; Carr, P. W. "A Laboratory Exercise in Statistical Analysis of Data," J. Chem. Educ. 1997, 74, 998-1000. Students determine the average weight of vitamin E pills using several different methods (one at a time, in sets of ten pills, and in sets of 100 pills). The data collected by the class are pooled together, plotted as histograms, and compared with results predicted by a normal distribution. The histograms and standard deviations for the pooled data also show the effect of sample size on the standard error of the mean. [Pg.98]

There is an obvious similarity between equation 5.15 and the standard deviation introduced in Chapter 4, except that the sum of squares term for s_r is determined relative to ŷ instead of ȳ, and the denominator is n − 2 instead of n − 1; n − 2 indicates that the linear regression analysis has only n − 2 degrees of freedom, since two parameters, the slope and the intercept, are used to calculate the values of ŷ. [Pg.121]
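The standard error of the regression can be computed from scratch to make the n − 2 denominator concrete; the calibration data below are hypothetical.

```python
import math

def linreg_with_sr(x, y):
    """Least-squares slope, intercept, and standard error of the
    regression, s_r = sqrt(sum((y - yhat)^2) / (n - 2))."""
    n = len(x)
    xm = sum(x) / n
    ym = sum(y) / n
    sxx = sum((xi - xm) ** 2 for xi in x)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    b1 = sxy / sxx                       # slope
    b0 = ym - b1 * xm                    # intercept
    ss_res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    sr = math.sqrt(ss_res / (n - 2))     # n - 2: slope and intercept used 2 dof
    return b0, b1, sr

# Hypothetical calibration data
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.02, 0.51, 0.99, 1.52, 1.98]
b0, b1, sr = linreg_with_sr(x, y)
print(f"intercept = {b0:.3f}, slope = {b1:.3f}, s_r = {sr:.4f}")
```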

In Example 7.6 we found that an analysis for the inorganic ash content of a breakfast cereal required a sample of 1.5 g to establish a relative standard deviation for sampling of 2.0%. How many samples are needed to obtain a relative sampling error of no more than 0.80% at the 95% confidence level? [Pg.191]
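One common way to answer this kind of question is to iterate n = (t·s_rel/e)², since the t value itself depends on n − 1 degrees of freedom. The sketch below uses a coarse nearest-entry t table rather than interpolation, so its answer may differ by a sample or so from a text that interpolates t exactly.

```python
import math

# Two-sided 95% t values for selected degrees of freedom (standard table)
T95 = {1: 12.71, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571, 6: 2.447,
       7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228, 15: 2.131, 20: 2.086,
       25: 2.060, 30: 2.042}

def t95(dof):
    """Nearest tabulated t(0.05, dof) at or below dof; ~1.96 for large dof."""
    if dof > 30:
        return 1.960
    return T95[max(k for k in T95 if k <= dof)]

def samples_needed(s_rel, e_max):
    """Iterate n = (t * s_rel / e_max)^2 until the count converges."""
    n = max(2, math.ceil((1.96 * s_rel / e_max) ** 2))
    while True:
        n_new = math.ceil((t95(n - 1) * s_rel / e_max) ** 2)
        if n_new == n:
            return n
        n = n_new

print(samples_needed(2.0, 0.80))
```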

In this problem you will collect and analyze data in a simulation of the sampling process. Obtain a pack of M&M's or other similar candy. Obtain a sample of five candies, and count the number that are red. Report the result of your analysis as % red. Return the candies to the bag, mix thoroughly, and repeat the analysis for a total of 20 determinations. Calculate the mean and standard deviation for your data. Remove all candies, and determine the true % red for the population. Sampling in this exercise should follow binomial statistics. Calculate the expected mean value and expected standard deviation, and compare to your experimental results. [Pg.228]
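The exercise can be simulated in a few lines: draw 5 candies 20 times from an assumed true red fraction, then compare the observed mean and standard deviation with the binomial expectations (mean = p, sd = sqrt(p(1 − p)/n) on the fraction scale). The true fraction used here is an assumption for illustration.

```python
import random
import statistics

random.seed(1)

TRUE_RED = 0.13   # assumed true fraction of red candies (hypothetical)
DRAW = 5          # candies per sample
TRIALS = 20       # number of determinations

# Each determination: draw 5, report % red
results = [100.0 * sum(random.random() < TRUE_RED for _ in range(DRAW)) / DRAW
           for _ in range(TRIALS)]

mean = statistics.mean(results)
sd = statistics.stdev(results)

# Binomial expectations for % red from n = 5 draws
exp_mean = 100.0 * TRUE_RED
exp_sd = 100.0 * (TRUE_RED * (1 - TRUE_RED) / DRAW) ** 0.5
print(f"observed {mean:.1f} ± {sd:.1f}%, expected {exp_mean:.1f} ± {exp_sd:.1f}%")
```

With only 20 determinations the observed values scatter noticeably around the binomial predictions, which is itself a useful lesson of the exercise.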

Precision When the analyte's concentration is well above the detection limit, the relative standard deviation for fluorescence is usually 0.5-2%. The limiting instrumental factor affecting precision is the stability of the excitation source. The precision for phosphorescence is often limited by the reproducibility of sample preparation, with relative standard deviations of 5-10% being common. [Pg.432]

Precision The precision of a gas chromatographic analysis includes contributions from sampling, sample preparation, and the instrument. The relative standard deviation due to the gas chromatographic portion of the analysis is typically 1-5%, although it can be significantly higher. The principal limitations to precision are detector noise and the reproducibility of injection volumes. In quantitative work, the use of an internal standard compensates for any variability in injection volumes. [Pg.577]

Single-operator characteristics are determined by analyzing a sample whose concentration of analyte is known to the analyst. The second step in verifying a method is the blind analysis of standard samples, where the analyte's concentration remains unknown to the analyst. The standard sample is analyzed several times, and the average concentration of the analyte is determined. This value should be within three, and preferably two, standard deviations (as determined from the single-operator characteristics) of the analyte's known concentration. [Pg.683]

After each of the effects is calculated, they are ranked from largest to smallest, without regard to sign, and those factors whose effects are substantially larger than the other factors are identified. The estimated standard deviation for the analysis is given by... [Pg.685]

When an analyst performs a single analysis on a sample, the difference between the experimentally determined value and the expected value is influenced by three sources of error: random error, systematic errors inherent to the method, and systematic errors unique to the analyst. If enough replicate analyses are performed, a distribution of results can be plotted (Figure 14.16a). The width of this distribution is described by the standard deviation and can be used to determine the effect of random error on the analysis. The position of the distribution relative to the sample's true value, μ, is determined both by systematic errors inherent to the method and by those systematic errors unique to the analyst. For a single analyst there is no way to separate the total systematic error into its component parts. [Pg.687]

The goal of a collaborative test is to determine the expected magnitude of all three sources of error when a method is placed into general practice. When several analysts each analyze the same sample one time, the variation in their collective results (Figure 14.16b) includes contributions from random errors and those systematic errors (biases) unique to the analysts. Without additional information, the standard deviation for the pooled data cannot be used to separate the precision of the analysis from the systematic errors of the analysts. The position of the distribution, however, can be used to detect the presence of a systematic error in the method. [Pg.687]

The principal tool for performance-based quality assessment is the control chart. In a control chart the results from the analysis of quality assessment samples are plotted in the order in which they are collected, providing a continuous record of the statistical state of the analytical system. Quality assessment data collected over time can be summarized by a mean value and a standard deviation. The fundamental assumption behind the use of a control chart is that quality assessment data will show only random variations around the mean value when the analytical system is in statistical control. When an analytical system moves out of statistical control, the quality assessment data is influenced by additional sources of error, increasing the standard deviation or changing the mean value. [Pg.714]
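A minimal property control chart can be built from the mean and standard deviation of accumulated quality-assessment data; the ±2s and ±3s limits below follow the usual Shewhart convention, and the data are hypothetical.

```python
import statistics

def control_limits(data):
    """Center line and warning/action limits (±2s, ±3s) for a property chart."""
    m = statistics.mean(data)
    s = statistics.stdev(data)
    return {"center": m,
            "warning": (m - 2 * s, m + 2 * s),
            "action": (m - 3 * s, m + 3 * s)}

def out_of_control(data, limits):
    """Flag results falling outside the ±3s action limits."""
    lo, hi = limits["action"]
    return [x for x in data if x < lo or x > hi]

# Hypothetical quality-assessment results collected over time
qa = [100.1, 99.8, 100.3, 99.9, 100.0, 100.2, 99.7, 100.1]
lims = control_limits(qa)
print(lims)
print(out_of_control(qa + [101.5], lims))
```

A point outside the action limits, or systematic patterns such as runs on one side of the center line, signal that the system has left statistical control.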

The degree of data spread around the mean value may be quantified using the concept of standard deviation, σ. If the distribution of data points for a certain parameter has a Gaussian or normal distribution, the probability that normally distributed data lie within ±σ of the mean value is 0.6826, or 68.26%. There is a 68.26% probability of finding a certain parameter within X ± σ, where X is the mean value. In other words, the standard deviation, σ, represents a distance from the mean value, in both the positive and negative directions, such that the number of data points between X − σ and X + σ is 68.26% of the total data points. Detailed descriptions of statistical analysis using the Gaussian distribution can be found in standard statistics reference books (11). [Pg.489]
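The 68.26% figure (and its ±2σ and ±3σ counterparts) follows directly from the Gaussian integral, which the error function evaluates:

```python
import math

def within_k_sigma(k):
    """Probability that a normally distributed value lies within
    k standard deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(f"±{k}σ: {100.0 * within_k_sigma(k):.2f}%")
```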

A study was conducted to measure the concentration of D-fenfluramine HCl (desired product) and L-fenfluramine HCl (enantiomeric impurity) in the final pharmaceutical product, in the possible presence of its isomeric variants (57). Sensitivity, stability, and specificity were enhanced by derivatizing the analyte with 3,5-dinitrophenylisocyanate using a Pirkle chiral recognition approach. Analysis of the calibration curve data and quality assurance samples showed an overall assay precision of 1.78 and 2.52% for D-fenfluramine HCl and L-fenfluramine HCl, with an overall intra-assay precision of 4.75 and 3.67%, respectively. The minimum quantitation limit was 50 ng/mL, having a minimum signal-to-noise ratio of 10, with relative standard deviations of 2.39 and 3.62% for D-fenfluramine and L-fenfluramine. [Pg.245]

The influence of the purity and concentration of the sulfuric acid used for sample dissolution on the analysis precision is investigated. Optimum conditions of sample preparation are chosen to exclude loss of Ce(IV) through its interaction with organic reducing impurities present in the sulfuric acid. A photometric technique for the determination of 0.002-0.1% Ce(IV) in alkaline and rare-earth borates is worked out. The technique is based on the oxidation of o-tolidine by Ce(IV). The relative standard deviation is 0.02-0.1. [Pg.198]

A simple and sensitive method is proposed for the determination of Ce(IV) in acid-soluble single crystals. The method is based on the oxidation of tropeoline 00 by cerium(IV) in sulfuric acid solution, with subsequent measurement of the decrease in light absorption of the solution. The influence of the reagent concentration on the analysis precision is studied. A procedure for Ce(IV) determination in cerium-doped ammonium dihydrophosphate is elaborated. The minimum determinable concentration of cerium, 0.04 µg/mL, is lower than that of analogous methods by a factor of several dozen. The relative standard deviation does not exceed 0.1. [Pg.198]

A suite of destructive and nondestructive analytical methods was used for studying the composition of sponges: inductively coupled plasma mass spectrometry (ICP-MS), X-ray fluorescence (XRF), electron probe microanalysis (EPMA), and atomic absorption spectrometry (AAS). Sample preparation techniques were developed for each method, and their metrological characteristics were defined. Relative standard deviations for all the elements did not exceed 0.25 within the detection limit. The accuracy of the techniques elaborated was checked with the method of additions and by control methods of analysis. [Pg.223]

If optimization is achieved using the processes chosen, then the standard deviation estimated for each component tolerance can be compared to the required assembly standard deviation to see if overall capability on the assembly tolerance has been achieved. If excessive variability is estimated at this stage on one or two characteristics, then redesign will need to be performed. Guidance for redesign can be simplified by using sensitivity analysis, used to estimate the percentage contribution of the variance of each component tolerance to the assembly variance, where the variance equals σ². It follows from equation 3.3 that... [Pg.120]
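For a linear additive tolerance stack, the assembly variance is the sum of the component variances, so each component's percentage contribution falls out directly. A minimal sketch under that linear-stack assumption, with hypothetical component standard deviations:

```python
def variance_contributions(component_sds):
    """Percent contribution of each component tolerance variance to the
    assembly variance, assuming a linear additive tolerance stack."""
    variances = [s ** 2 for s in component_sds]
    total = sum(variances)
    return [100.0 * v / total for v in variances]

# Hypothetical component standard deviations (mm)
sds = [0.02, 0.05, 0.01]
contrib = variance_contributions(sds)
print([f"{c:.1f}%" for c in contrib])
```

Because contributions scale with the square of each standard deviation, one loose component (here the second) usually dominates, which is exactly what makes the Pareto-style ranking useful for targeting redesign.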

Part of the design information provided by the software is the standard deviation multiplier, z, for each component tolerance, shown in Figure 3.14 in Pareto chart form. Additionally, sensitivity analysis is used to provide the percentage contribution of each tolerance variance to the final assembly tolerance variance, as shown in Figure 3.15. [Pg.127]

