Big Chemical Encyclopedia


Fault tolerance validation

The hardware and software used to implement a LIMS must be validated. Computers and networks need to be examined for the potential impact of component failure on LIMS data. Security concerns regarding control of access to LIMS information must be addressed. The software, operating systems, and database management systems used in the implementation must be validated to protect against data corruption and loss. Mechanisms for fault-tolerant operation and for LIMS data backup and restoration should be documented and tested. One approach to validation of LIMS hardware and software is to choose vendors whose products are precertified; however, the ultimate responsibility for validation remains with the user. Validating the LIMS system's operation involves a substantial amount of work, and an adequate validation infrastructure is a prerequisite for the construction of a dependable and flexible LIMS. [Pg.518]

The event-triggered (ET) model of computation is presented as a generalization of the time-triggered (TT) approach. It supports hard real-time and flexible soft real-time services. The ET model is built upon a number of key notions: temporal firewalls, controlled objects, temporally valid state data, and unidirectional communications between isolated subsystems. It uses the producer/consumer rather than the client/server model of interaction. In addition to describing a systems model and a computation model, this article considers issues of schedulability and fault tolerance. The ET model is not radically different from the TT approach (as in many systems most events will originate from clocks), but it does provide a more appropriate architecture for open adaptive applications. [Pg.260]

When there are several ways to estimate a quantity, its fault-tolerance level is the sum of the fault-tolerance levels of these ways. Indeed, this set of relations forms a set of redundant measures, and the minimal number of faults needed to make all of them unavailable is equal to the sum of the minimal numbers of faults that make each of them unavailable. This sum is valid under the assumption that the different ways to obtain the quantity are independent. Consequently, we obtain ... [Pg.1325]

The applied software fault tolerance techniques must be verified. Assume, for example, an implementation of rule 20.3 of the MISRA C:2004 standard for critical systems. This rule requires that the validity of values passed to library functions be checked to avoid errors. The fault injector can introduce a negative value before a sqrt function call, to test the value-checking process and the consequences for the system if this check fails. [Pg.1916]

Design diversity. This approach is rather costly. It combines hardware and software fault tolerance in different sets of computing channels. Each channel is developed with different hardware and software, in a redundant configuration, to provide the same function. This method is deployed to identify the deviation of one channel from the others; the goal is to tolerate both hardware and software design faults [7]. After developing a fault-tolerant design it is necessary to validate it from a reliability point of view, as discussed later. [Pg.820]

Fault-tolerant design for reliability is one of the most difficult things to verify, evaluate, and validate; it is either time-consuming or very costly. It requires creating a number of models. Fault injection is an effective method for validating fault-tolerance mechanisms, and modeling of the error/fault environment and of the structure and behavior of the design is also necessary. It must then be determined how well the fault-tolerance mechanisms work, through analytic studies and fault simulations [7]. The results from these models, after analysis, shall include but not be limited to error rate, fault rate, and latency. Some of the better-known tools are HARP, the hybrid automated reliability predictor (Duke); SAVE, the system availability estimator (IBM); and SHARPE, the symbolic hierarchical automated reliability and performance evaluator (Duke). [Pg.820]

Once, working on a two-phase flow system, we needed to make sure that it was two-fault tolerant to an accident; in other words, after two failures, the system was still safe. We put in two relief valves to handle overpressure. The problem was that both had to operate in tandem to handle the flow and pressure profiles. That was an important example of validating that a hazard control is adequate. Obviously, the two relief valves weren't independent in their operation and therefore didn't independently control the hazard. Luckily, we validated that our control was NOT adequate during testing, and resized the relief valves. [Pg.147]

Schneider, P., Easterbrook, S.M., Callahan, J.R., Holzmann, G.J.: Validating requirements for fault tolerant systems using model checking. In: ICRE, pp. 4-13. IEEE Computer Society (1998) [Pg.206]

Traditionally, fault injection is used as a testing method for the evaluation of fault tolerance in hardware or software. However, with model-implemented fault injection (MIFI) as implemented in the MODIFI tool, fault injection can be used in early stages of software development. This is possible due to the increased use of model-based software development with tools like Simulink. Besides being used for validation of fault tolerance, MIFI can help developers focus on improving the most fault-sensitive parts of a Simulink model. [Pg.229]

Aljer and Devienne [5] consider the use of a formal specification language as the foundation of the whole validation process. They propose an architecture based upon stepwise refinement of a formal model to achieve a controllable implementation. Partitioning, fault tolerance, and system management are seen as particular cases of refinement, so that systems can be conceived correct by proven construction. A methodology based on the refinement paradigm is described. To support this approach, the B-HDL tool, based on a combination of VHDL and the B method formal language, has been developed. [Pg.204]

Algorithm-based fault tolerance [13,28] and self-checking software [36,5] use invariants of the executed program to check the validity of the generated results. This requires that appropriate invariants exist. These invariants have to be designed to provide good failure-detection capability, and they are difficult, if not impossible, to find for most applications. [Pg.285]

If a safe state is entered, the driver should usually be informed. This part of the fault reaction can be defined by user information requirements (FaultReactionUserInformationRequirement). For user information requirements, the fault-tolerant time, a description of the actions expected of the driver or other persons involved, and validation criteria for these actions can be added. A user information requirement must specify at least one safe state and a description of the actions of the driver or other persons involved (see Tab. 4, 2M06RA). [Pg.71]

The idea of inserting deliberate faults (or errors) into computer systems or computer components, either to evaluate their behavior in the presence of such faults or to validate specific fault-tolerance mechanisms, is quite intuitive and has been used extensively since the very beginning of the computer industry. There are many variants of this approach, which is generally known as fault injection. [Pg.367]

All systems need to be sufficiently reliable and secure in delivering the service required of them. The ways in which this can be achieved in practice range from the use of various validation and verification techniques, to software fault/intrusion tolerance techniques, to continuous maintenance and patching once the product is released. Fault tolerance techniques themselves range from simple wrappers around software components [1] to the use of diverse software products in a fault-tolerant system [2]. Implementing fault tolerance with diversity was historically considered prohibitively expensive, due to the need for multiple bespoke software versions. However, the multitude of off-the-shelf software now available for various applications has made software diversity an affordable option for fault tolerance against both malicious and accidental faults. [Pg.94]

Kanawati, G., Kanawati, N., Abraham, J.: FERRARI: a tool for the validation of system dependability properties. In: 22nd Int. Symp. on Fault-Tolerant Computing, FTCS-22...

In addition, there is also PV (product verification, product validation), which should confirm that the lifetime requirements are met even under the production tolerances of supplied components. A design FMEA formally asks which failure sequences occur if a characteristic deviates from its specified range. How such faults propagate to the upper levels, up to a possible violation of safety goals, can be assessed from the analyses and the architectures of the higher levels. [Pg.192]


See other pages where Fault tolerance validation is mentioned: [Pg.262]    [Pg.41]    [Pg.289]    [Pg.1900]    [Pg.264]    [Pg.194]    [Pg.5]    [Pg.820]    [Pg.16]    [Pg.379]    [Pg.238]    [Pg.18]    [Pg.1]    [Pg.1]    [Pg.71]    [Pg.71]    [Pg.27]    [Pg.56]    [Pg.248]    [Pg.36]    [Pg.104]   
See also in source #XX -- [Pg.820]



