Big Chemical Encyclopedia


Error detection

During the PHEA stage, the analyst has to identify likely human errors and possible ways of error detection and recovery. The PHEA prompts the analyst to examine the main performance-influencing factors (PIFs) (see Chapter 3) that can contribute to critical errors. All the task steps at the bottom level of the HTA are analyzed in turn to identify likely error modes, their potential for recovery, their safety or quality consequences, and the main PIFs that can give rise to these errors. In this case study, credible errors were found for the majority of the task steps, and each error had multiple causes. An analysis of two operations from the HTA is presented to illustrate the outputs of the PHEA. Figure 7.12 shows a PHEA of the following two tasks: Receive instructions to pump and Reset system. [Pg.321]

The sequential procedure can be implemented on-line, in real time, for any processing plant without much computational effort. Furthermore, by sequentially deleting one measurement at a time, it is possible to quantify the effect of that measurement on the reconciliation procedure, making this approach very suitable for gross error detection/identification, as discussed in the next chapter. [Pg.124]

We have discussed, in Chapter 7, a number of auxiliary gross error detection/identification/estimation schemes for identifying and removing the gross errors from the measurements, such that the normality assumption holds. Another approach is to take into account the presence of gross errors from the beginning, using, for example,... [Pg.218]

The first case study consists of a section of an olefin plant located at the Orica Botany Site in Sydney, Australia. In this example, all the theoretical results discussed in Chapters 4, 5, 6, and 7 for linear systems are fully exploited for variable classification, system decomposition, and data reconciliation, as well as gross error detection and identification. [Pg.246]

A data reconciliation procedure was applied to the subset of redundant equations. The results are displayed in Table 4. A global test for gross error detection was also applied, and the χ² value was found to be equal to 17.58, indicating the presence of a gross error in the data set. Using the serial elimination procedure described in Chapter 7, a gross error was identified in the measurement of stream 26. The procedure for estimating the amount of bias was then applied, and the amount of bias was found... [Pg.251]
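The global test, and the bias estimate that follows identification, can be sketched on a toy balance. The numbers below are invented for illustration; they do not reproduce the case study's χ² value of 17.58 or its stream 26.

```python
import numpy as np

# Invented single balance: stream 1 = stream 2 + stream 3
A = np.array([[1.0, -1.0, -1.0]])
sigma = np.array([0.5, 0.5, 0.5])              # measurement std devs
y = np.array([100.0, 60.0, 38.0])              # stream 3 reads 2 units low

r = A @ y                                      # balance residual
V = A @ np.diag(sigma ** 2) @ A.T              # residual covariance
gamma = float(r @ np.linalg.solve(V, r))       # global chi-square statistic
print(round(gamma, 2))                         # 5.33 > chi2(1, 0.05) = 3.84

# Once stream 3 is identified as the suspect, estimate its bias by least squares:
f = A[:, 2]                                    # signature of a bias in stream 3
b = float(f @ np.linalg.solve(V, r)) / float(f @ np.linalg.solve(V, f))
print(round(b, 2))                             # about -2: the reading is 2 units low
```

Here the statistic exceeds the critical value, so a gross error is declared; the estimated bias of about −2 units would then be removed from the suspect reading before re-reconciling.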

This chapter presents an exploratory study of the interaction between error type and error detection mechanism in the context of normal flight operations within the commercial airline environment. [Pg.109]

This chapter discusses aspects of quality assurance of equipment as used in analytical laboratories. It provides guidelines on how to select a vendor and for installation and operational qualifications, ongoing performance control, maintenance, and error detection and handling that contribute to assuring the quality of analytical laboratory data. It refers mainly to an automated chromatography system as an example, but similar principles can be applied to other instrumentation. [Pg.23]

After the reconciliation is effected, adjusted values are substituted into the equations of horizontal band 1, and the unmeasured values (vertical band 1) are simply computed. In a similar way, more complex processing of the measured data (propagation of random errors, detection of gross errors, etc.) can also be carried out; see again Chapter 9 or, for example, Madron (1992). Measured-data inconsistency is analyzed using the equations from bands 3 and 4a only; the remaining equations contain no information in this respect. [Pg.449]
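A minimal sketch of this substitution step, assuming a single linear mass balance with three measured streams and one unmeasured stream (all values invented for illustration): the measured values are first reconciled by weighted least squares, and the unmeasured flow is then computed directly from the adjusted values.

```python
import numpy as np

# Invented balance among the measured streams: stream 1 = stream 2 + stream 3
A = np.array([[1.0, -1.0, -1.0]])
Sigma = np.diag([0.25, 0.25, 0.25])            # measurement variances
y = np.array([100.0, 60.0, 38.0])              # raw measurements (inconsistent)

# weighted least-squares reconciliation so that A @ x_hat = 0 exactly
adj = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, A @ y)
x_hat = y - adj
print(np.round(x_hat, 3))                      # adjusted values balance exactly

# an unmeasured stream 4 equal to stream 3 is then computed by substitution
stream4 = x_hat[2]
```

The adjusted values satisfy the balance exactly, so substituting them into the remaining equations yields unmeasured values that are consistent by construction.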

TRUE COINCIDENCE SUMMING (TCS): The simultaneous detection of two or more photons originating from a single nuclear disintegration, which results in only one observed (summed) peak. The resulting loss of counts from peaks leads to efficiency-calibration errors. (See Chapter 8.)... [Pg.380]

Outliers demand special attention in chemometrics for several different reasons. During model development, their extremeness often gives them an unduly high influence in the calculation of the calibration model. Therefore, if they represent erroneous readings, they will add disproportionately more error to the calibration model. Furthermore, even if they carry genuine information, that information may turn out to be irrelevant to the problem. Outliers are also very important during model deployment, because they can be informative indicators of specific failures or abnormalities in the process being sampled, or in the measurement system itself. This use of outlier detection is discussed in the Model Deployment section (12.10), later in this chapter. [Pg.413]
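One common way to flag such outliers during deployment is a distance-to-the-calibration-cloud test. The sketch below uses the squared Mahalanobis distance with a χ² cutoff on synthetic data; it is one illustrative choice among the several outlier diagnostics used in chemometrics, and the calibration scores, sample values, and threshold are all assumed.

```python
import numpy as np

# Synthetic "calibration" scores; in practice these might be, e.g., PCA scores
# of the calibration spectra used to build the model.
rng = np.random.default_rng(0)
X_cal = rng.normal(size=(200, 3))
mu = X_cal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_cal, rowvar=False))

def mahalanobis2(x):
    """Squared Mahalanobis distance of a new sample from the calibration cloud."""
    d = x - mu
    return float(d @ cov_inv @ d)

threshold = 11.34                # chi2(3, 0.99) from a table, as a flagging cutoff
x_routine = np.array([0.2, -0.1, 0.3])
x_abnormal = np.array([8.0, -7.5, 9.0])
print(mahalanobis2(x_routine) < threshold)     # routine sample passes
print(mahalanobis2(x_abnormal) > threshold)    # abnormal sample is flagged
```

A flagged sample is then a prompt for investigation (process upset, instrument drift, bad reading), not an automatic rejection.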

Structured root cause analysis uncovers the underlying reasons for human error and consequently provides guidance on suitable corrective actions. Humans make errors. Our task is to design systems that detect and correct an error before it leads to a serious consequence. Chapter 6 provides extensive information applicable during root cause analysis. [Pg.247]

However, LFERs of the type of Eq. 3-56 are not only useful as predictive tools; they may also serve other purposes. For example, they can be very helpful for checking reported experimental data for consistency (i.e., for detecting experimental errors). They may also enable us to discover unexpected partitioning behavior of a given compound, for example, if a compound is an outlier but, based on its structure, would be expected to fit the LFER. Finally, as will be discussed in various other chapters, if such LFERs have been established for a given set of model compounds in various two-phase systems in which one of the phases is not very well characterized (e.g., various natural organic matter-water systems, different atmospheric particle-air systems), the slopes a of the respective LFERs may yield important information on the nature of the phases considered (e.g., for detecting differences or similarities among the phases). [Pg.91]

We can usually estimate or measure the random error associated with a measurement, such as the length of an object or the temperature of a solution. The uncertainty might be based on how well we can read an instrument or on our experience with a particular method. If possible, uncertainty is expressed as the standard deviation or as a confidence interval, which are discussed in Chapter 4. This section applies only to random error. We assume that systematic error has been detected and corrected. [Pg.44]
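As a concrete instance of this convention, the snippet below computes the sample standard deviation of a set of replicate readings (values invented for illustration) and reports the mean with a 95% confidence interval, taking the Student's t factor for n − 1 = 4 degrees of freedom from a table.

```python
import math

# Illustrative replicate length measurements (cm)
x = [12.47, 12.51, 12.49, 12.53, 12.48]
n = len(x)
mean = sum(x) / n
s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (n - 1))  # sample std dev
t = 2.776                        # Student's t for 95 % CI, 4 degrees of freedom
half_width = t * s / math.sqrt(n)
print(f"{mean:.3f} ± {half_width:.3f} cm")                  # 12.496 ± 0.030 cm
```

The half-width t·s/√n shrinks as more replicates are taken, which is why the confidence interval, rather than s alone, is the quantity usually quoted with a result.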



© 2024 chempedia.info