Hardware fault detection

In order for the total PFHd value to be sufficient, [IEC 62061] also requires that the internal hardware fault detection of each subsystem is acceptable. Table 5 (Figure 7) in [IEC 62061] defines which requirements are placed on each subsystem depending on the required SIL from the risk analysis. From Table 5 (Figure 7) it is also possible to see that the requirements of the... [Pg.269]

For SIL 3 and SIL 4, there needs to be some hardware fault detection on all parts of the system, i.e. sensors, input/output circuits, logic solver and output elements, and both the communication and memory should have error detection. [Pg.84]

In order to provide reliable systems that can cope with radiation effects, we believe that the solution lies in combining software-based and hardware-based techniques. The main objective of this book is finding the best trade-off between software-based and hardware-based techniques, in order to increase existing fault detection rates up to 100%, while maintaining low overheads in performance (operating clock frequency and application execution time) and in area (program and data memory overhead and extra hardware modules). [Pg.20]

Following the state-of-the-art review, the next step is to implement fault tolerance techniques. We will start by explaining in detail and implementing two known software-based techniques, called Variables and Inverted Branches (AZAMBUJA 2010b), which will later be used as a complement to hybrid fault tolerance techniques. These techniques were proposed in recent years and achieve high fault detection rates with low performance degradation; they are therefore useful not only as an introduction to software-based fault tolerance techniques, but also as building blocks to be combined with hardware-based and hybrid techniques. Then, three novel hybrid techniques will be proposed and implemented, based on both software and hardware replication characteristics. The three hybrid techniques will be divided into their software and hardware sides and described in detail, covering both their operation and their implementation. [Pg.20]
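As a rough, language-level illustration of the idea behind the Variables technique (the cited work applies it at assembly level by duplicating registers; the helper names below are hypothetical), the following C sketch keeps a redundant copy of each critical variable and compares the two copies before every use, signalling a fault on mismatch.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical error handler: a real system would raise a fault flag,
 * restart the task, or switch to a safe state instead of exiting.     */
static void fault_detected(const char *where)
{
    fprintf(stderr, "transient fault detected at %s\n", where);
    exit(EXIT_FAILURE);
}

/* Critical variable kept in two copies (the "Variables" idea, sketched
 * in C; the original technique duplicates registers at assembly level). */
typedef struct {
    int value;
    int shadow;   /* redundant copy, updated on every write */
} dup_int;

static void dup_write(dup_int *v, int x)
{
    v->value  = x;
    v->shadow = x;
}

static int dup_read(dup_int *v, const char *where)
{
    if (v->value != v->shadow)   /* consistency check before every use */
        fault_detected(where);
    return v->value;
}

int main(void)
{
    dup_int acc;
    dup_write(&acc, 0);

    for (int i = 1; i <= 10; i++)
        dup_write(&acc, dup_read(&acc, "loop") + i);

    printf("sum = %d\n", dup_read(&acc, "result"));  /* prints 55 */
    return 0;
}
```

A bit flip affecting only one of the two copies is caught at the next read; an upset that hits both copies identically, or the comparison itself, is not, which is why such software-only duplication is later combined with hardware support.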

Hardware-based techniques can be based on duplication with comparison, EDAC codes to protect registers, and other parity techniques to protect the logic. But all of them have some limitation on fault detection coverage. Without duplicating the whole processor, hardware-based techniques cannot achieve full... [Pg.35]
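As a concrete example of the parity idea (a minimal sketch, not taken from the cited work; the helper names are invented for illustration), a single even-parity bit stored alongside each register word and rechecked on every read detects any odd number of bit flips, while even-multiplicity upsets escape undetected, which is one of the coverage limitations mentioned above.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Even-parity bit of a 32-bit word: 1 if the number of set bits is odd. */
static uint8_t parity32(uint32_t w)
{
    w ^= w >> 16;
    w ^= w >> 8;
    w ^= w >> 4;
    w ^= w >> 2;
    w ^= w >> 1;
    return (uint8_t)(w & 1u);
}

/* Register word protected by one stored parity bit. */
typedef struct {
    uint32_t word;
    uint8_t  parity;
} preg_t;

static void preg_write(preg_t *r, uint32_t w)
{
    r->word   = w;
    r->parity = parity32(w);
}

/* Returns false if the stored parity no longer matches the word,
 * i.e. an odd number of bits has flipped since the last write.   */
static bool preg_read(const preg_t *r, uint32_t *out)
{
    if (parity32(r->word) != r->parity)
        return false;
    *out = r->word;
    return true;
}

int main(void)
{
    preg_t r;
    uint32_t v;

    preg_write(&r, 0xCAFEBABEu);
    r.word ^= (1u << 7);              /* simulate a single-bit upset */

    if (!preg_read(&r, &v))
        puts("parity error: single-bit upset detected");
    return 0;
}
```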

Techniques based on space redundancy are grounded in the single fault model, where only one of the hardware redundant copies is affected by transient upsets (ROSSI 2005). This means that only one of the modules will be affected by a transient fault and therefore the fault detection rate should be 100 %. On the other hand, studies have shown that a single fault may affect two hardware modules in the case of SRAM-based FPGAs (KASTENSMIDT 2005), due to the routing of the architecture, or in adjacent standard cells in ASICs, as shown by Almeida (ALMEIDA 2012). [Pg.39]

As stated in the previous chapters, software-based techniques are unable to detect all faults affecting the control flow, while hardware-based techniques cannot protect processors without at least doubling their area. On the other hand, when combined into hybrid techniques, they can not only increase their detection rates, but also be optimized to achieve a better trade-off between area overhead, performance degradation, and fault detection. [Pg.44]

Table 4.5 shows the size and performance of the implemented processor and the hardware module. The hardware module implementation has a total of 128 registers. It was not protected against SEEs because the worst-case scenario is an incorrect fault detection, which would not compromise the system. The implemented hardware module occupies 15 % of the total area of the miniMIPS microprocessor, while maintaining the same operating frequency. It is important to note that the hardware module has a fixed size, independent of the processor being used. That means that a larger processor would lead to a smaller hardware module area percentage relative to the processor. [Pg.60]

Emulation Analysis This technique determines the ability of established software programming to detect specific hardware and/or software faults purposely introduced into the microprocessing system. Usually, output results from the tested system are compared to those of a controlled, uninfected system to determine if all faults were properly detected. This method allows for the quantification of faults detected in microprocessor or program codes and provides a method for bit manipulation of software programming. [Pg.181]
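A minimal sketch of the emulation-analysis idea, assuming the system under test can be modelled as an ordinary C function (the toy checksum routine and the exhaustive single-bit injection loop are illustrative, not the tool described above): each run flips one bit of the input state and compares the result against the golden, fault-free run, allowing the outcomes to be quantified.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy "program under test": checksums a buffer. In a real emulation
 * campaign this would be an instruction-set simulator or RTL model. */
static uint32_t program_under_test(const uint32_t *mem, int n)
{
    uint32_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += mem[i];
    return sum;
}

int main(void)
{
    uint32_t golden_mem[4] = { 1, 2, 3, 4 };
    uint32_t golden_out = program_under_test(golden_mem, 4);

    int silent_corruptions = 0, masked = 0, total = 0;

    /* Exhaustively inject one bit flip per run (single fault model). */
    for (int word = 0; word < 4; word++) {
        for (int bit = 0; bit < 32; bit++) {
            uint32_t mem[4] = { 1, 2, 3, 4 };
            mem[word] ^= (1u << bit);            /* inject the upset */

            uint32_t out = program_under_test(mem, 4);
            if (out == golden_out) masked++;      /* fault had no effect   */
            else silent_corruptions++;            /* wrong, undetected out */
            total++;
        }
    }

    printf("%d runs: %d masked, %d silent corruptions\n",
           total, masked, silent_corruptions);
    return 0;
}
```

In a real campaign the comparison with the golden run is combined with the system's own detection mechanisms, so each injection can be classified as detected, masked, or silently corrupted.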

The detection of a dangerous fault (by diagnostic tests, proof tests or by any other means) in any subsystem which can tolerate a single hardware fault shall result in either... [Pg.57]

For all subsystems (for example, sensors, final elements and non-PE logic solvers) except PE logic solvers the minimum hardware fault tolerance shall be as shown in Table 6 provided that the dominant failure mode is to the safe state or dangerous failures are detected (see 11.3), otherwise the fault tolerance shall be increased by one. [Pg.60]

In instrumentation and control, triple modular redundancy is very important for fail-safe operation. Fig. 1/6.1.2-1 shows the same. Here, the outputs of the three elements are voted at each stage to obtain the output. In network communications, especially remote communication, there are a few other problems, known as the Two Army problem, the Byzantine Generals problem, etc. The issues discussed so far basically belong to fault masking, i.e., tolerating a hardware fault without removing it. There is another term, dynamic recovery, in which a special mechanism detects a hardware fault, isolates the faulty hardware, and replaces it with a good one. This will be clear from an example. Say in a process control system there are two processors, one working and the other on standby. If there is another processor whose main function is to act as a diagnostic processor checking the health of the other processors, then when it finds fault with... [Pg.60]
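The heart of triple modular redundancy is a 2-out-of-3 majority voter; the bitwise C sketch below (channel values and the injected fault are illustrative) masks any fault confined to a single channel.

```c
#include <stdint.h>
#include <stdio.h>

/* Bitwise 2-out-of-3 majority vote: each output bit takes the value
 * held by at least two of the three redundant channels, so a fault
 * confined to a single channel is masked.                           */
static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

int main(void)
{
    uint32_t correct = 0x5A5A5A5Au;
    uint32_t faulty  = correct ^ 0x00010000u;   /* one channel upset */

    printf("voted = 0x%08X\n", tmr_vote(correct, faulty, correct));
    /* prints 0x5A5A5A5A: the single-channel fault is masked */
    return 0;
}
```

A disagreement detector built on the same comparison is what a dynamic-recovery scheme would use to identify the faulty channel so that it can be isolated and replaced.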

The software design includes the necessary self-supervision features to detect hardware faults that may occur at the time of execution. Software should also supervise its own control flow and data. These supervision features may not all have been anticipated in the software design requirements (see Section 8). In this case, a request for modification of these requirements should be made. [Pg.53]
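One common way software can supervise its own control flow, sketched here purely for illustration (the signature constants and function names are hypothetical, not taken from the text above), is signature monitoring: each basic block folds its identifier into a running signature, and a check at a block boundary compares it with the value expected for a correct execution path.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Running control-flow signature, updated at the entry of every block. */
static uint32_t cfs_signature = 0;

static void cfs_enter(uint32_t block_id)
{
    cfs_signature ^= block_id;        /* accumulate the path signature */
}

/* Check at a block boundary against the value expected for a correct
 * execution path; a mismatch means the control flow was corrupted.    */
static void cfs_check(uint32_t expected)
{
    if (cfs_signature != expected) {
        fprintf(stderr, "control-flow error: got 0x%08X, expected 0x%08X\n",
                cfs_signature, expected);
        exit(EXIT_FAILURE);           /* or jump to a recovery routine */
    }
}

int main(void)
{
    cfs_enter(0x1u);                  /* block A */
    int x = 2 + 2;
    cfs_enter(0x2u);                  /* block B */
    x *= 3;
    cfs_check(0x1u ^ 0x2u);           /* expected signature for path A->B */

    printf("x = %d\n", x);
    return 0;
}
```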

For SIL 1 and SIL 2 systems there should be basic hardware fault checks (i.e., watchdog and serial communication error detection). [Pg.84]
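As an illustration of the kind of basic check meant here, the sketch below appends a CRC-8 to each serial frame on the sending side and verifies it at the receiver; the polynomial, frame layout, and function names are arbitrary choices for the example rather than a prescribed implementation.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* CRC-8, polynomial x^8 + x^2 + x + 1 (0x07), initial value 0x00. */
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

/* Sender appends the CRC as the last byte of the frame. */
static size_t frame_build(uint8_t *frame, const uint8_t *payload, size_t n)
{
    memcpy(frame, payload, n);
    frame[n] = crc8(payload, n);
    return n + 1;
}

/* Receiver recomputes the CRC and rejects corrupted frames. */
static bool frame_check(const uint8_t *frame, size_t len)
{
    return len > 0 && crc8(frame, len - 1) == frame[len - 1];
}

int main(void)
{
    uint8_t payload[] = { 0x10, 0x20, 0x30 };
    uint8_t frame[8];
    size_t len = frame_build(frame, payload, sizeof payload);

    frame[1] ^= 0x04;                 /* simulate a line error */
    puts(frame_check(frame, len) ? "frame accepted"
                                 : "serial communication error detected");
    return 0;
}
```

A hardware or software watchdog complements this: the application periodically refreshes a timer, and a missed refresh within the timeout window is treated as a processor fault.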

An adaptive fault detection threshold hardware circuit design to reduce False Alarms... [Pg.867]

The remainder of this paper mainly focuses on the fault detection model of electronic products containing instant faults (i.e. hard faults), and takes environmental changes and external stresses into consideration. The fault and false alarm model is introduced and a type of BIT circuit adopting an adaptive threshold is developed. The hardware implementations are discussed in this paper according to the design requirements, and a printed circuit board is also made for test and verification. [Pg.867]
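A minimal software model of the adaptive-threshold idea (the paper implements it as a hardware circuit; the smoothing factor, threshold width, and sample trace below are illustrative assumptions): the detection threshold tracks a running estimate of the monitored signal's mean and standard deviation, so slow environmental drift shifts the alarm band instead of raising false alarms, while an abrupt deviation is still flagged as a fault.

```c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>

/* Exponentially weighted estimates of the monitored signal's mean and
 * variance; the detection threshold adapts to slow environmental drift. */
typedef struct {
    double mean;
    double var;
    double alpha;   /* smoothing factor, 0 < alpha < 1 */
    double k;       /* alarm band width in standard deviations */
} adaptive_bit_t;

static bool adaptive_bit_update(adaptive_bit_t *b, double sample)
{
    double dev = fabs(sample - b->mean);
    bool fault = dev > b->k * sqrt(b->var);    /* alarm if outside band */

    if (!fault) {                              /* track only healthy data */
        double d = sample - b->mean;
        b->mean += b->alpha * d;
        b->var   = (1.0 - b->alpha) * (b->var + b->alpha * d * d);
    }
    return fault;
}

int main(void)
{
    adaptive_bit_t bit = { .mean = 5.0, .var = 0.01, .alpha = 0.05, .k = 4.0 };

    /* Slow drift is absorbed by the adaptive band; the jump to 9.0 trips it. */
    double trace[] = { 5.0, 5.02, 4.98, 5.05, 5.1, 5.15, 9.0 };
    for (int i = 0; i < 7; i++)
        printf("sample %.2f -> %s\n", trace[i],
               adaptive_bit_update(&bit, trace[i]) ? "FAULT" : "ok");
    return 0;
}
```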

