Big Chemical Encyclopedia


Types of Errors

95% with an error of 0.04%. Several questions arise from these results. Are the two average values significantly different, or are they indistinguishable within the limits of the experimental errors? Are the errors in the two methods significantly different? Which of the mean values is closer to the truth? Again, Chapter 3 discusses these and related questions. [Pg.3]

These examples represent only a fraction of the possible problems arising from the occurrence of experimental errors in quantitative analysis. But such problems have to be tackled if the quantitative data are to have any real meaning. Clearly, therefore, we must study the various types of error in more detail. [Pg.3]

We can best make this distinction by careful study of a real experimental situation. Four students (A-D) each perform an analysis in which exactly 10.00 ml of exactly 0.1 M sodium hydroxide is titrated with exactly 0.1 M hydrochloric acid. Each student performs five replicate titrations, with the results shown in Table 1.1. [Pg.3]

In most analytical experiments the most important question is: how far is the result from the true value of the concentration or amount that we are trying to measure? This is expressed as the accuracy of the experiment. Accuracy is defined by the International Organization for Standardization (ISO) as the closeness of agreement [Pg.4]

Random errors:
- Affect precision (repeatability or reproducibility)
- Cause replicate results to fall on either side of a mean value
- Can be estimated using replicate measurements
- Can be minimized by good technique but not eliminated
- Caused by both humans and equipment

Systematic errors:
- Produce bias: an overall deviation of a result from the true value, even when random errors are very small
- Cause all results to be affected in one sense only (all too high or all too low)
- Cannot be detected simply by using replicate measurements
- Can be corrected, e.g. by using standard methods and materials
- Caused by both humans and equipment

[Pg.4]
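The contrast between the two error types can be demonstrated numerically. The sketch below simulates replicate measurements with and without a systematic offset; the 0.02 mL spread and −0.05 mL bias are invented purely for illustration:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 10.00  # true volume in mL (illustrative)

# Random error only: results scatter on either side of the true value.
random_only = [TRUE_VALUE + random.gauss(0, 0.02) for _ in range(1000)]

# Systematic + random error: a constant bias shifts every result one way.
BIAS = -0.05  # e.g. a miscalibrated burette (hypothetical value)
biased = [TRUE_VALUE + BIAS + random.gauss(0, 0.02) for _ in range(1000)]

print(f"random only: mean = {statistics.mean(random_only):.3f}")
print(f"with bias:   mean = {statistics.mean(biased):.3f}")
# Averaging reduces the random scatter but leaves the bias untouched.
```

As the table above states, replication exposes the random component but cannot reveal the constant offset.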

We all know that any measurement is affected by errors. If the errors are insignificant, fine. If not, we run the risk of making incorrect inferences based on our experimental results, and maybe arriving at a false solution to our problem. To avoid this unhappy ending, we need to know how to account for the experimental errors. This is important, not only in the analysis of the final result, but also — and principally — in the actual planning of the experiments, as we have already stated. No statistical analysis can salvage a badly designed experimental plan. [Pg.11]

Suppose that during the titration of the vinegar sample our chemist is distracted and forgets to add the proper indicator to the vinegar solution (phenolphthalein, since we know the equivalence point occurs at a basic pH). The consequence is that the end point will never be reached, no matter how much base is added. This clearly would be a serious error, which statisticians charitably label as a gross error. The person responsible for the experiment often uses a different terminology, not fit to print here. [Pg.11]

Statistics is not concerned with gross errors. In fact, the science to treat such mistakes has yet to appear. Little can be done, other than learn the lesson and pay more attention next time. Everyone makes mistakes. The conscientious researcher should strive to do everything possible to avoid committing them. [Pg.11]

Imagine now that the stock of phenolphthalein is depleted and the chemist decides to use another indicator that happens to be available, say, methyl red. Since the pH range for the turning point of methyl red is [Pg.11]

It is easy to imagine other sources of systematic error: the primary standard might be out of specification, an analytical balance or a pipette might be erroneously calibrated, the chemist performing the titration might read the meniscus from an incorrect angle, and so on. Each of these factors will individually influence the final result, always in a characteristic direction. [Pg.12]

Every measurement has some uncertainty, which is called experimental error. Conclusions can be expressed with a high or a low degree of confidence, but never with complete certainty. Experimental error is classified as either systematic or random. [Pg.42]

Systematic error, also called determinate error, arises from a flaw in equipment or the design of an experiment. If you conduct the experiment again in exactly the same manner, [Pg.42]

For example, a pH meter that has been standardized incorrectly produces a systematic error. Suppose you think that the pH of the buffer used to standardize the meter is 7.00, but it is really 7.08. Then all your pH readings will be 0.08 pH unit too low. When you read a pH of 5.60, the actual pH of the sample is 5.68. This systematic error could be discovered by using a second buffer of known pH to test the meter. [Pg.43]
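A minimal sketch of this correction, using the buffer values from the text (the function name and structure are this example's own):

```python
# If every reading is 0.08 pH unit low, the correction is a constant offset.
BUFFER_TRUE = 7.08     # actual pH of the standardizing buffer
BUFFER_ASSUMED = 7.00  # value mistakenly assumed during standardization
offset = BUFFER_TRUE - BUFFER_ASSUMED  # +0.08

def correct_ph(reading: float) -> float:
    """Apply the constant correction for the miscalibrated meter."""
    return reading + offset

print(f"{correct_ph(5.60):.2f}")  # the sample's actual pH, 5.68
```

Because the error is the same for every reading, a single offset fixes all of them; a random error could not be corrected this way.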

A key feature of systematic error is that it is reproducible. For the buret just discussed, the error is always —0.03 mL when the buret reading is 29.43 mL. Systematic error may always be positive in some regions and always negative in others. With care and cleverness, you can detect and correct a systematic error. [Pg.43]

Precision describes the reproducibility of a result. If you measure a quantity several times and the values agree closely with one another, your measurement is precise. If the values vary widely, your measurement is not precise. Accuracy describes how close a measured value is to the true value. If a known standard is available (such as a Standard Reference Material described in Box 3-1), accuracy is how close your value is to the known value. [Pg.43]
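The two ideas can be separated numerically: the standard deviation of replicates measures precision, while the deviation of the mean from a known value measures accuracy. The data sets below are invented for illustration:

```python
import statistics

TRUE_VALUE = 10.00  # certified value of a hypothetical standard

# Precise but inaccurate: tight spread, offset from the true value.
precise_inaccurate = [10.08, 10.09, 10.08, 10.10, 10.09]
# Accurate but imprecise: centred on the truth, wide spread.
accurate_imprecise = [9.88, 10.14, 9.95, 10.07, 9.96]

for name, data in [("precise/inaccurate", precise_inaccurate),
                   ("accurate/imprecise", accurate_imprecise)]:
    spread = statistics.stdev(data)            # precision
    bias = statistics.mean(data) - TRUE_VALUE  # accuracy, as signed error
    print(f"{name}: s = {spread:.3f}, error = {bias:+.3f}")
```

The first set would agree closely with itself yet still be wrong against the standard; the second scatters widely around the correct answer.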

Systematic error might be positive in some regions and negative in others. The error is repeatable and, with care and cleverness, you can detect and correct it. [Pg.59]

Number of digits in antilog x (= 10^x) = number of digits in mantissa of x [Pg.59]

Systematic error is a consistent error that can be detected and corrected. Standard Reference Materials described in Box 3-1 are designed to reduce systematic errors. Box 3-2 provides a case study. [Pg.59]

Ghaleb and Wong's review (2006) demonstrates that spontaneous reporting systems tend to yield a lower rate of paediatric medication errors than the other methods, owing to underestimation and under-reporting. In contrast, observation methods tend to find higher incidences than the other two methods. These published reports confirm that paediatric medication errors are at least as common as errors in adults. A study by Kaushal and colleagues (2001) showed that potential adverse drug events may be three times more common in children than in adults. [Pg.29]

The majority of paediatric medication errors do not result in harm. Blum and co-workers (1988) reported that only 0.2% of the errors could be classified as potentially lethal, whereas Folli et al. (1987) reported 5.6% as potentially lethal. Interestingly, no actual harm to children was reported in most of the epidemiological studies. This might be because the errors were identified and rectified before any harm resulted, but it could be due to publication bias: some healthcare providers may be reluctant to publish studies reporting patients with serious harm. [Pg.29]

Cousins et al. (2002) conducted an analysis of press reports highlighting the outcomes of 24 cases of paediatric medication errors (Table 3.4). Most of the cases reported resulted in fatal consequences, hence making the news headlines. [Pg.29]

The review by Wong and colleagues concluded that dosing errors, especially tenfold errors, are the most common type of paediatric medication error (Wong et al., 2004). Other paediatric medication errors reported in the literature include: [Pg.29]

- Wrong route of administration
- Wrong transcription or documentation
- Incorrect or missing date
- Wrong frequency of administration
- Missed dose
- Wrong patient

[Pg.29]


The second type of error occurs when the null hypothesis is retained even though it is false and should be rejected. This is known as a type 2 error, and its probability of occurrence is β. Unfortunately, in most cases β cannot be easily calculated or estimated. [Pg.84]

The probability of a type 1 error is inversely related to the probability of a type 2 error. Minimizing a type 1 error by decreasing α, for example, increases the likelihood of a type 2 error. The value of α chosen for a particular significance test, therefore, represents a compromise between these two types of error. Most of the examples in this text use a 95% confidence level, or α = 0.05, since this is the most frequently used confidence level for the majority of analytical work. It is not unusual, however, for more stringent (e.g. α = 0.01) or for more lenient (e.g. α = 0.10) confidence levels to be used. [Pg.85]
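The trade-off can be made concrete for a one-sided z-test, where β has a closed form. The effect size, standard deviation and replicate count below are arbitrary illustrative choices:

```python
from statistics import NormalDist

# One-sided z-test sketch (hypothetical numbers): true effect
# delta = 0.5, sigma = 1, n = 10 replicates.
nd = NormalDist()
delta, sigma, n = 0.5, 1.0, 10
shift = delta * n**0.5 / sigma

for alpha in (0.10, 0.05, 0.01):
    z_crit = nd.inv_cdf(1 - alpha)  # rejection threshold
    beta = nd.cdf(z_crit - shift)   # P(fail to reject | H0 false)
    print(f"alpha = {alpha:.2f}  ->  beta = {beta:.2f}")
# Shrinking alpha (fewer type 1 errors) raises beta (more type 2 errors).
```

For these numbers, tightening α from 0.10 to 0.01 roughly doubles β, which is the compromise the passage describes.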

Sources of Error. pH electrodes are subject to fewer interferences and other types of error than most potentiometric ionic-activity sensors, ie, ion-selective electrodes (see Electroanalytical techniques). However, pH electrodes must be used with an awareness of their particular response characteristics, as well as the potential sources of error that may affect other components of the measurement system, especially the reference electrode. Several common causes of measurement problems are electrode interferences and/or fouling of the pH sensor, sample matrix effects, reference electrode instability, and improper calibration of the measurement system (12). [Pg.465]

The composite envelope is then plotted over the envelope of each individual peak. It is seen that the actual retention difference, if taken from the maxima of the envelope, will give a value of less than 80% of the true retention difference. Furthermore, as the peaks become closer, this error increases rapidly. Unfortunately, this type of error is not normally taken into account by most data-processing software. It follows that, if such data were used for solute identification or column design, the results could be grossly in error. [Pg.168]
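The effect can be reproduced with two overlapping Gaussian peaks. In this sketch the peak positions and widths are invented (the text's figure of less than 80% refers to its own data); the point is that the maxima of the composite envelope lie closer together than the true retention difference:

```python
import math

def gaussian(x: float, mu: float, sigma: float = 1.0) -> float:
    """Unit-height Gaussian peak centred at mu."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Two equal peaks separated by 2.5 sigma (hypothetical, partially resolved).
MU1, MU2 = 0.0, 2.5
xs = [i / 1000 for i in range(-3000, 5000)]
envelope = [gaussian(x, MU1) + gaussian(x, MU2) for x in xs]

# Locate the two local maxima of the composite envelope.
maxima = [xs[i] for i in range(1, len(xs) - 1)
          if envelope[i] > envelope[i - 1] and envelope[i] >= envelope[i + 1]]
apparent = maxima[-1] - maxima[0]
print(f"true separation = {MU2 - MU1:.3f}, apparent = {apparent:.3f}")
```

Each peak's tail adds a sloping baseline under the other peak, pulling both envelope maxima inward; the closer the peaks, the larger the pull.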

Some of the performance-shaping factors (PSFs) affect a whole task or the whole procedure, whereas others affect certain types of errors, regardless of the tasks in which they occur. Still other PSFs have an overriding influence on the probability of all types of error in all conditions. [Pg.175]

Analyses used to determine the number of opportunities for each type of error to occur,... [Pg.176]

The most common types of errors are probably those that occur because operators treat the computer as a black box, that is, something that will do what we want it to do without the need to understand what goes on inside it. There is no fault in the hardware or software, but nevertheless the system does not perform in the way that the designer or oper-... [Pg.354]

Where errors occur that lead to process accidents, it is clearly not appropriate to hold the worker responsible for conditions that are outside his or her control and that induce errors. These considerations suggest that behavior-modification-based approaches will not in themselves eliminate many of the types of errors that can cause major process accidents. [Pg.49]

These explanations do not exhaust the possibilities with regard to underlying causes, but they do illustrate an important point the analysis of human error purely in terms of its external form is not sufficient. If the underlying causes of errors are to be addressed and suitable remedial strategies developed, then a much more comprehensive approach is required. This is also necessary from the predictive perspective. It is only by classifying errors on the basis of underlying causes that specific types of error can be predicted as a function of the specific conditions under review. [Pg.69]

An influential classification of the different types of information processing involved in industrial tasks was developed by J. Rasmussen of the Risø Laboratory in Denmark. This scheme provides a useful framework for identifying the types of error likely to occur in different operational situations, or within different aspects of the same task where different types of information-processing demands on the individual may occur. The classification system, known as the skill-, rule-, knowledge-based (SRK) approach, is described in a... [Pg.69]

Performance problems may be exacerbated during unfamiliar or novel process events, for example, situations not covered in the emergency procedures or in refresher training. These events require knowledge-based information processing for which people are not very reliable. The types of errors associated with knowledge-based performance have been discussed in Chapter 2. [Pg.109]

This analysis is applied to each operation at the particular level of the HTA being evaluated. In most cases the analysis is performed at the level of a step, for example. Open valve 27B. For each operation, the analyst considers the likelihood that one or more of the error types set out in classification in Figure 5.7 could occur. This decision is made on the basis of the information supplied by the PIF analysis, and the analyst s knowledge concerning the types of error likely to arise given the nature of the mental and physical demands of the task and the particular configuration of PIFs that exist in the situation. The different error categories are described in more detail below ... [Pg.214]

Generally, risk assessment has focused on the first type of error, since the main interest in human reliability was in the context of human actions that were required as part of an emergency response. However, a comprehensive Consequence Analysis has to also consider other types, since both of these outcomes could constitute sources of risk to the individual or the plant. [Pg.216]

Based on the analyst's experience, or upon error theory, it is possible to assign weights to the various PIFs to represent the relative influence that each PIF has on all the tasks in the set being evaluated. In this example it is assumed that, in general, the level of experience has the least influence on these types of errors, and time stress the most influence. The relative effects of the different PIFs can be expressed by the following weights ... [Pg.236]
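One common way such weights are combined is a weighted sum of PIF ratings, in the style of the Success Likelihood Index Method (SLIM). The weights and ratings below are hypothetical and not taken from the text's own example:

```python
# Hypothetical PIF weights (summing to 1) reflecting relative influence:
# time stress most influential, experience least, as in the passage.
weights = {"time stress": 0.5, "procedures": 0.3, "experience": 0.2}

# Ratings of each PIF for one task, on a 0 (worst) to 9 (best) scale.
ratings = {"time stress": 3, "procedures": 7, "experience": 8}

# Success likelihood index: weighted sum of the ratings.
sli = sum(weights[p] * ratings[p] for p in weights)
print(f"SLI = {sli:.2f}")
```

A task with the same ratings but weights reversed would score differently, which is exactly why the choice of weights matters.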

To prevent this type of error, the balancer operators and those who do final assembly should follow this procedure. The balancer operator should permanently mark the location of the contact point between the bore and the shaft during balancing. When the equipment is reassembled in the plant or the shop, the assembler should also use this mark. For end-clamped rotors, the assembler should slide the bore on the horizontal shaft, rotating both until the mark is at the 12 o'clock position, and then clamp it in place. [Pg.936]

The absolute value of a proportional error depends upon the amount of the constituent. Thus a proportional error may arise from an impurity in a standard substance, which leads to an incorrect value for the molarity of a standard solution. Other proportional errors may not vary linearly with the amount of the constituent, but will at least exhibit an increase with the amount of constituent present. One example is the ignition of aluminium oxide: at 1200°C the aluminium oxide is anhydrous and virtually non-hygroscopic; ignition of various weights at an appreciably lower temperature will show a proportional type of error. [Pg.128]
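The distinction between a constant and a proportional error shows up most clearly in the relative error, as this sketch with invented numbers illustrates:

```python
# Constant error: same absolute error at every level, so the relative
# error grows as the amount shrinks. Proportional error: absolute error
# scales with the amount, so the relative error stays constant.
CONSTANT_ERR = 0.5   # mg (hypothetical)
PROPORTIONAL = 0.02  # 2% of the amount (hypothetical)

for amount in (10.0, 100.0, 1000.0):  # mg of constituent
    rel_const = CONSTANT_ERR / amount * 100
    rel_prop = PROPORTIONAL * amount / amount * 100
    print(f"{amount:7.1f} mg: constant -> {rel_const:.2f}%, "
          f"proportional -> {rel_prop:.2f}%")
```

This is why a constant error matters most for trace-level work, while a proportional error (such as an impure standard) biases every result by the same percentage regardless of sample size.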

Since the machine performs only arithmetic operations (and these only approximately), if f is anything but a rational function it must be approximated by a rational function, e.g., by a finite number of terms in a Taylor expansion. If this rational approximation is denoted by f_a, this gives rise to an error f(x) − f_a(x), generally called the truncation error. Finally, since even the arithmetic operations are carried out only approximately in the machine, not even f_a(x) can usually be found exactly, and still a third type of error results, f_a(x) − f̄(x), called generated error, where f̄(x) is the number actually produced by the machine. Thus, the total error is the sum of these... [Pg.52]
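Both machine-made error types can be observed with a truncated Taylor series for e^x; the choice of function and of term counts here is illustrative:

```python
import math

def exp_taylor(x: float, terms: int) -> float:
    """Rational (polynomial) approximation of e**x: a truncated Taylor series."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

x = 1.0
exact = math.exp(x)
for terms in (3, 6, 12):
    approx = exp_taylor(x, terms)
    print(f"{terms:2d} terms: truncation error = {exact - approx:.2e}")
# Even exp_taylor itself is not evaluated exactly: each floating-point
# operation rounds, contributing an additional, much smaller, generated error.
```

Adding terms drives the truncation error toward zero, but the generated (rounding) error of the arithmetic itself never vanishes.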

When one attempts to estimate some parameter, the possibility of error is implicitly assumed. What sort of errors are possible? Why is it necessary to distinguish between two types of error? Reality (as hindsight would show later on, but unknown at the time) could be red or blue, and by the same token, any assumptions or decisions reached at the time were either red ... [Pg.87]

A dramatic example of this type of error was discussed by Donigian (6) at the Pellston workshop, based on the Iowa study described earlier (8). Figure 3 shows the calibration (top figure, 1978 data) and verification (bottom figure, 1978) results. A simulated alachlor concentration value of greater than 0.1 mg/l occurred on May 27, 1978, (top figure) whereas the observed... [Pg.161]

Table 4.2. Types of errors for statistical tests of null hypotheses... [Pg.106]

The corresponding measured value at LD (see Table 7.5) is not of crucial importance in analytical chemistry. It characterizes that signal which can significantly be distinguished from the blank considering both types of error (α and β). [Pg.230]
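One common formulation of such a limit is Currie's: a decision (critical) level controls the false-positive risk α, and the detection limit LD additionally controls the false-negative risk β. The blank statistics below are hypothetical, and the book's exact definition may differ:

```python
from statistics import NormalDist

# Currie-style sketch: LD is set so that both the false-positive risk
# (alpha) and the false-negative risk (beta) are controlled.
nd = NormalDist()
alpha = beta = 0.05
sigma_blank = 0.8  # sd of the blank signal (hypothetical units)
blank_mean = 2.0

L_C = blank_mean + nd.inv_cdf(1 - alpha) * sigma_blank  # decision level
L_D = L_C + nd.inv_cdf(1 - beta) * sigma_blank          # detection limit
print(f"critical level = {L_C:.2f}, detection limit = {L_D:.2f}")
```

With α = β = 0.05 the detection limit sits about 3.3 blank standard deviations above the blank mean, which is why "3σ" rules that consider only α understate LD.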

Random error arises as the result of chance variations in factors that influence the value of the quantity being measured but which are themselves outside of the control of the person making the measurement. Such things as electrical noise and thermal effects contribute towards this type of error. Random error causes results to vary in an unpredictable way from one measurement to the next. It is therefore not possible to correct individual results for random error. However, since random error should sum to zero over many measurements, such an error can be reduced by making repeated measurements and calculating the mean of the results. [Pg.158]
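The reduction from averaging follows the 1/√n rule and can be verified by simulation; the true value, noise level and seed below are arbitrary:

```python
import random
import statistics

random.seed(7)
TRUE_VALUE = 50.0
NOISE_SD = 2.0  # standard deviation of the random error (hypothetical)

def mean_of_n(n: int, trials: int = 2000) -> float:
    """Observed sd of the mean of n noisy measurements, over many trials."""
    means = [statistics.fmean(TRUE_VALUE + random.gauss(0, NOISE_SD)
                              for _ in range(n)) for _ in range(trials)]
    return statistics.stdev(means)

for n in (1, 4, 16):
    print(f"n = {n:2d}: sd of mean = {mean_of_n(n):.3f} "
          f"(theory {NOISE_SD / n**0.5:.3f})")
```

Quadrupling the number of replicates halves the random error of the mean, so the gain from extra replicates diminishes quickly.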

In contrast, a systematic error remains constant or varies in a predictable way over a series of measurements. This type of error differs from random error in that it cannot be reduced by making multiple measurements. Systematic error can be corrected for if it is detected, but the correction would not be exact since there would inevitably be some uncertainty about the exact value of the systematic error. As an example, in analytical chemistry we very often run a blank determination to assess the contribution of the reagents to the measured response, in the known absence of the analyte. The value of this blank measurement is subtracted from the values of the sample and standard measurements before the final result is calculated. If we did not subtract the blank reading (assuming it to be non-zero) from our measurements, then this would introduce a systematic error into our final result. [Pg.158]
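A minimal numeric sketch of blank correction in a single-standard calibration; all readings are invented:

```python
# Hypothetical absorbance readings from a spectrophotometric assay.
blank = 0.052     # reagents only, no analyte
sample = 0.421
standard = 0.387  # standard of known concentration C_STD
C_STD = 10.0      # e.g. mg/L

# Subtract the blank from both readings before computing the result...
c_corrected = (sample - blank) / (standard - blank) * C_STD

# ...otherwise the reagents' contribution biases the answer systematically.
c_uncorrected = sample / standard * C_STD

print(f"blank-corrected:    {c_corrected:.2f} mg/L")
print(f"without correction: {c_uncorrected:.2f} mg/L")
```

The gap between the two answers is the systematic error the blank subtraction removes; it would affect every sample in the same direction.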

Figure 6.11 illustrates the difference between these two main types of error, using the example of delivering liquid from a 25 ml Class A pipette. [Pg.158]

For any given measurement process, more than one instance of each type of error can apply. Therefore, errors are insufficient to describe the quality of a measurement result. Measurement uncertainty, on the other hand, combines into a single range the effect of all of the different factors that can influence a measurement result. [Pg.159]
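When the individual contributions are independent, they are conventionally combined in quadrature (root-sum-of-squares), as in the GUM approach; the component values here are invented:

```python
import math

# Independent standard uncertainties contributing to one result
# (hypothetical values for a titration, all as relative uncertainties).
u_components = {
    "burette reading": 0.0012,
    "pipette volume": 0.0008,
    "standard concentration": 0.0010,
}

# Combined standard uncertainty: root-sum-of-squares of the components.
u_combined = math.sqrt(sum(u * u for u in u_components.values()))
print(f"combined relative uncertainty = {u_combined:.4f}")
```

Note that the combined value is dominated by the largest component, so reducing the smaller contributions buys little; this single range is what replaces the error-by-error bookkeeping the passage describes.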

The accident could have been prevented with better operating procedures and better training to make the operators appreciate the consequences of mistakes. Modern plants use interlocks or sequence controllers and other special safeguards to prevent this type of error. [Pg.553]

