Big Chemical Encyclopedia


Errors fault simulation

Fault-tolerant design for reliability is one of the most difficult tasks to verify, evaluate, and validate; it is both time-consuming and costly, and it requires creating a number of models. Fault injection is an effective method for validating fault-tolerant mechanisms. A substantial amount of modeling is also necessary to capture the error/fault environment and the structure and behavior of the design. It is then necessary to determine how well the fault-tolerant mechanisms work by analytic studies and fault simulations [7]. The results obtained from analyzing these models should include, but not be limited to, error rate, fault rate, and latency. Some of the better-known tools are HARP—hybrid automated reliability predictor (Duke), SAVE—system availability estimator (IBM), and SHARPE—symbolic hierarchical automated reliability and performance evaluator (Duke). [Pg.820]
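As a rough sketch of the kind of figures such fault simulations report, the toy Monte Carlo campaign below counts escaped errors and averages detection latency. The detection probability and mean latency used here are assumed parameters of a hypothetical fault-tolerant mechanism, not values from the tools cited above.

```python
import random

# Toy Monte Carlo fault-injection campaign (illustrative only; the tools named
# above, such as HARP, SAVE and SHARPE, use analytic reliability models rather
# than this simplistic simulation). detect_prob and mean_latency_s are assumed
# parameters of a hypothetical mechanism, not data from the cited source.

def run_campaign(n_faults=10_000, detect_prob=0.95, mean_latency_s=0.02, seed=1):
    rng = random.Random(seed)
    detected = 0
    latencies = []
    for _ in range(n_faults):
        if rng.random() < detect_prob:               # fault caught by the mechanism
            detected += 1
            latencies.append(rng.expovariate(1.0 / mean_latency_s))
    escaped_error_rate = 1.0 - detected / n_faults   # faults that defeat the mechanism
    mean_latency = sum(latencies) / len(latencies)
    return escaped_error_rate, mean_latency

if __name__ == "__main__":
    err, lat = run_campaign()
    print(f"escaped-error rate ~ {err:.3f}, mean detection latency ~ {lat * 1e3:.1f} ms")
```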

Intended Use The intended use of the model sets the sophistication required. Relational models are adequate for control within narrow bands of setpoints. Physical models are required for fault detection and design. Even when relational models are used, they are frequently developed by repeated simulations using physical models. Further, artificial neural-network models used in the analysis of plant performance, including gross error detection, are in their infancy. Readers are referred to the work of Himmelblau for these developments. [For example, see Terry and Himmelblau (1993) cited in the reference list.] Process simulators are in wide use and readily available to engineers. Consequently, the emphasis of this section is to develop a preliminary physical model representing the unit. [Pg.2555]

The behavior of the detection algorithm is illustrated by adding a bias to some of the measurements. Curves A, B, C, and D of Fig. 3 illustrate the absolute values of the innovation sequences, showing the simulated error at different times and for different measurements. These errors can be easily recognized in curve E when the chi-square test is applied to the whole innovation vector (n = 4 and α = 0.01). Finally, curves F, G, H, and I display the ratio between the critical value of the test statistic, r, and the chi-value that arises from the source when the variance of the ith innovation (the one suspected to be at fault) has been substantially increased. This ratio, which is approximately equal to 1 under no-fault conditions, rises sharply when the discarded innovation is the one at fault. [Pg.166]
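A minimal sketch of this two-stage test is given below, assuming a zero-mean innovation vector with known covariance. The covariance matrix, significance level, and variance-inflation factor are illustrative assumptions, not the values used in the cited study.

```python
import numpy as np
from scipy.stats import chi2

def global_chi_square_test(innov, S, alpha=0.01):
    """Flag a fault if innov' S^-1 innov exceeds the chi-square critical value."""
    n = innov.size
    stat = float(innov @ np.linalg.solve(S, innov))
    crit = chi2.ppf(1.0 - alpha, df=n)
    return stat, crit, stat > crit

def isolation_ratio(innov, S, i, alpha=0.01, inflate=1e3):
    """Ratio of the critical value to the statistic recomputed after the variance
    of the i-th (suspected) innovation is substantially increased; it stays near 1
    for non-faulty channels and rises sharply for the faulty one."""
    S_i = S.copy()
    S_i[i, i] *= inflate
    stat, crit, _ = global_chi_square_test(innov, S_i, alpha)
    return crit / stat

# Example with n = 4 innovations and a bias injected on channel 2 (assumed data)
S = np.eye(4)
innov = np.array([0.1, -0.2, 3.5, 0.05])
print(global_chi_square_test(innov, S))                                # global test flags a fault
print([round(isolation_ratio(innov, S, i), 2) for i in range(4)])      # largest ratio on channel 2
```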

Another way of using simulation facilities is for modelling purposes: the effects of time stress on fault diagnosis, for instance, could be modelled in this way, and frequent errors and recoveries could then be used to arrive at suggestions for decision support and interface design. [Pg.29]

Table 5.3 Number of faults injected by simulation fault injection in miniMIPS protected by OCFCM and the percentage of detected errors.
Soft errors can be detected and corrected by the system's logic, meaning that a hard reset is not required to recover from an error. Sections 7.1.2 and 7.2.2 present neutron irradiation experiments simulating the effect of SEEs in Flash-based and SRAM-based FPGAs, while Chaps. 5 and 6 present fault injection simulation experiments simulating SEEs at RTL level and in the configuration memory bitstream, respectively. In this work, SEUs and SETs will be used to describe the transient faults that the proposed techniques can cope with. [Pg.24]

The experiment continuously compared the PC of a golden miniMIPS with the PC of the faulty one. Fault injection results are presented in Table 5.3. It shows the number of injected faults (Faults Injected) for each application, the number of faults that caused an error in the microprocessor (Incorrect Result), and the detection rate achieved by the proposed solution (Errors Detected). The system was simulated with a clock period of 42 ns and a total of 2459 signals describing it. Forty thousand faults represent 16 times the number of signals, but only 0.4 % of the extensive possibilities of faults for the encryption algorithm... [Pg.81]
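The golden-model comparison described here amounts to checking the two program-counter traces cycle by cycle and flagging the first divergence. The sketch below illustrates that idea on made-up traces; the PC values are placeholders, not data from the cited campaign.

```python
# Hedged sketch of golden-vs-faulty PC comparison: the first cycle at which the
# traces disagree counts as a detected error. Traces below are illustrative only.

def first_pc_mismatch(golden_pc, faulty_pc):
    """Return the first cycle at which the two PC traces disagree, or None."""
    for cycle, (g, f) in enumerate(zip(golden_pc, faulty_pc)):
        if g != f:
            return cycle
    return None

golden = [0x00, 0x04, 0x08, 0x0C, 0x10]
faulty = [0x00, 0x04, 0x08, 0x20, 0x10]    # control-flow upset at cycle 3
print(first_pc_mismatch(golden, faulty))    # -> 3
```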

Another problem is that the transmission line is modeled in the time domain, so some important frequency-dependent parameters cannot be represented exactly. These parameters can only be approximated and idealized in order to simplify the simulation process. These approximations lead to critical errors due to divergence of the parameter extraction. Consequently, the measurement system is not efficient and does not achieve sufficient accuracy. Furthermore, the impulse response is derived from the scattering parameter S11, which is measured in the frequency domain and transformed to the time domain. This is critical for the resolution and computational time. In [23], only wire faults with open circuits and some special impedance changes are estimated. [Pg.4]
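The frequency-to-time transformation criticised here can be sketched as an inverse FFT of the measured reflection coefficient, from which the round-trip delay to a fault is read off. The frequency grid, propagation velocity, fault distance, and synthetic S11 below are illustrative assumptions, not measurements from the cited work.

```python
import numpy as np

# Hedged sketch: transform a (synthetic) S11 reflection measurement from the
# frequency domain to a time-domain impulse response and locate the reflection
# peak. All numerical values are assumed placeholders.

f = np.linspace(1e6, 1e9, 1001)              # measurement frequencies [Hz]
v_p = 2e8                                    # assumed propagation velocity [m/s]
d_fault = 12.0                               # assumed distance to an open fault [m]
tau = 2 * d_fault / v_p                      # round-trip delay to the fault
s11 = 0.8 * np.exp(-2j * np.pi * f * tau)    # toy reflection from the open circuit

h = np.fft.irfft(s11)                        # time-domain impulse response
t = np.arange(h.size) / (2 * f[-1])          # approximate time axis (~1 / (2 * f_max) per sample)
d_est = v_p * t[np.argmax(np.abs(h))] / 2    # distance of the strongest reflection
print(f"estimated fault distance ~ {d_est:.1f} m")   # recovers roughly the assumed 12 m
```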

A cost function to be minimised in an iterative parameter estimation procedure may be formulated by using either differences between outputs from a real system and computed outputs from a model, or by means of ARR residuals. As output errors as well as ARR residuals are generally nonlinear functions of the component parameters, multiple-fault parameter isolation becomes a well-known nonlinear least squares problem. For real-time FDI, ARR residuals obtained from a DBG have the advantage that they make the parameter estimation independent of any initial conditions of the process, which are hardly known and would otherwise have to be estimated along with the component parameters. In off-line simulation, the real system may be replaced by a behavioural model. Measured data are then generated by assuming realistic, consistent initial conditions and by solving the equations of the behavioural model. [Pg.147]
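A minimal sketch of this nonlinear least squares step is shown below using output errors of a toy first-order model. The model, the synthetic "measured" data, and the parameter names R and C are illustrative assumptions; they are not the ARRs or the diagnostic bond graph of the cited text.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 200)

def model_output(theta, t):
    R, C = theta                        # suspected fault parameters (assumed names)
    return 1.0 - np.exp(-t / (R * C))   # step response of a toy RC element

# "Measured" data generated from a faulty system (R degraded from 1.0 to 1.5)
rng = np.random.default_rng(0)
y_meas = model_output([1.5, 1.0], t) + 0.01 * rng.standard_normal(t.size)

def residuals(theta):
    # Output errors: model output minus measured output
    return model_output(theta, t) - y_meas

fit = least_squares(residuals, x0=[1.0, 1.0])   # start from nominal parameter values
print("estimated parameters:", fit.x)            # R drifts towards the faulty value
```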

The simulation results for Vb blockage (faulty Rv) are plotted in Fig. 7.31. It shows that the state estimation errors for both states from the first UIO are almost zero, whereas those from the other UIOs (one or both state errors) deviate from zero after 5 s. This isolates the Vb fault. [Pg.262]

In the case considered in Fig. 9, the fault is injected at the beginning of the simulation and SSE = 2.79. As a result, the bottom product response hardly changes. However, the top product composition is disturbed: it is shifted towards larger values. Moreover, the steady-state control error is nonzero. Such a situation should be detected, but it is not critical; it is possible to continue process operation until the controller is fixed. [Pg.122]

Several other types of FTA automation are also found in technical papers. The probabilistic fault tree (PROFAT) is one approach, in which the simulation algorithm ASH has been utilized. For complex cases where reliability data are unavailable for a number of specific pieces of equipment, generic probabilistic data are used, and probabilistic data are also used for human error. A fuzzy approach is likewise found in this regard (especially for human error); instead of specific values for human error, a hybrid approach with fuzzy logic is quite effective. In fuzzy logic, failure rates are defined as fuzzy sets in a linguistic way, which is more realistic. For human-robot and offshore applications, FTA with a fuzzy approach produces better results in automating the process. [Pg.346]
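The quantitative core of such FTA automation is propagating basic-event probabilities through the gate structure. The sketch below shows that step for a toy tree; the event names and probabilities are made-up placeholders, not values from PROFAT or the cited papers.

```python
# Hedged sketch of probabilistic fault-tree evaluation with independent basic events.

def p_or(*probs):
    """Probability that at least one independent basic event occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """Probability that all independent basic events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

pump_fails   = 1e-3     # generic equipment data (assumed)
valve_sticks = 5e-4     # generic equipment data (assumed)
operator_err = 3e-2     # human-error probability (assumed)

# Top event: loss of cooling = (pump fails OR valve sticks) AND operator fails to recover
p_top = p_and(p_or(pump_fails, valve_sticks), operator_err)
print(f"top-event probability ~ {p_top:.2e}")
```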

The objective of fault injection is to introduce artificial faults or errors into a system in order to test it in the presence of errors. The injection of faults using MODIFI is done by rerouting the connections between blocks in the model to also include a fault model. This is illustrated in Fig. 1 and Fig. 2, which show a Simulink model before and after MODIFI has inserted a fault injection block. The fault injection block passes the input value to the output port unmodified unless a trigger is enabled. The trigger, which is based on the simulation time, causes the block to apply the fault model, a bit-flip in this case, to the output. Fig. 2 also shows that MODIFI turns signals in the model into Simulink test points [10], which all have logging enabled. [Pg.220]
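The behaviour of such an injection block can be sketched as a pass-through that applies a bit-flip only while a time-based trigger is active. The trigger window, the bit position, and the 32-bit float encoding below are illustrative assumptions, not MODIFI's actual interface.

```python
import struct

# Hedged sketch of a fault-injection block: input is passed through unchanged
# unless the simulation-time trigger is active, in which case a bit-flip fault
# model is applied to the signal value.

def bit_flip(value: float, bit: int) -> float:
    """Flip one bit of the 32-bit float representation of the signal value."""
    raw = struct.unpack("<I", struct.pack("<f", value))[0]
    return struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))[0]

def fault_injection_block(value: float, t: float,
                          t_start: float = 1.0, t_stop: float = 1.1,
                          bit: int = 30) -> float:
    """Pass the input to the output unmodified unless the trigger is enabled."""
    if t_start <= t < t_stop:          # time-based trigger
        return bit_flip(value, bit)    # apply the fault model (bit-flip)
    return value

print(fault_injection_block(0.5, t=0.5))    # outside the window -> 0.5
print(fault_injection_block(0.5, t=1.05))   # inside the window  -> corrupted value
```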

The cause was a software fault in equipment which was unchanged from Ariane 4, but which was unsuitable for the changed flight trajectory of Ariane 5. There was inadequate analysis and simulation of the systems in Ariane 5. The dual-redundant IRSs used identical hardware and software, with one active and one on hot standby. Because the flight trajectory was different from Ariane 4, the active IRS declared a failure due to a software exception (i.e., an error message). The standby IRS then failed for the same reason. [Pg.31]

Such a simulator allows 30 or 40 faults to be simulated every day. The large number of manipulations is a source of errors. Intermittent faults are not simulated. [Pg.215]

The Technique for Human Error Rate Prediction (THERP) was developed by Swain and Guttman (1983) to evaluate the probability of human error within specific tasks. THERP uses a fault tree approach to model Human Error Probabilities (HEPs), but also attempts to account for other factors in the environment that may influence these probabilities. These factors are referred to as Performance Shaping Factors (PSFs). The probabilities used in THERP can either be generated by the analyst, usually from simulator data, or taken from tables generated by Swain and Guttman from available data and expert judgement. [Pg.1095]
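A hedged sketch of a THERP-style calculation is shown below: a nominal HEP for each step is scaled by PSF multipliers, and the step probabilities are combined assuming the task fails if any step fails. All nominal HEPs and PSF multipliers are illustrative placeholders, not values from the Swain and Guttman tables.

```python
# Toy THERP-style task HEP calculation with assumed step data.

def adjusted_hep(nominal_hep, psf_multipliers):
    """Scale a nominal HEP by performance shaping factors, capped at 1.0."""
    hep = nominal_hep
    for m in psf_multipliers:          # e.g. time stress, poor interface design
        hep *= m
    return min(hep, 1.0)

step_heps = [
    adjusted_hep(0.003, [2.0]),        # read the indicator under moderate stress
    adjusted_hep(0.001, [5.0, 1.5]),   # operate the control: high stress, poor layout
    adjusted_hep(0.005, []),           # confirm the action under nominal conditions
]

p_success = 1.0
for p in step_heps:
    p_success *= (1.0 - p)             # the task succeeds only if every step succeeds

print(f"task-level HEP ~ {1.0 - p_success:.4f}")
```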

Regarding fault models and their simulation, the Model-Based Generation of Test-Cases for Embedded Systems (MOGENTES) project [20] specifies a number of HW- and SW-related fault and failure models and taxonomies. On the other hand, the international ASAM AE HIL [26] standard defines an interface to perform error simulation in Hardware-in-the-Loop testing. [Pg.3]

Regarding the estimation of the speed, the experiments performed by means of our fault injection framework show the robustness of the algorithm. In this case we also obtain the maximum error in the 8th campaign, where at instant 151.75 s we find a disagreement of 1.350 m/s with respect to the non-faulty simulation (60.23 m/s, 2.24 %). ... [Pg.11]


See other pages where Errors fault simulation is mentioned: [Pg.358] [Pg.266] [Pg.80] [Pg.247] [Pg.344] [Pg.1028] [Pg.1869] [Pg.262] [Pg.117] [Pg.224] [Pg.124] [Pg.252] [Pg.11] [Pg.17] [Pg.134] [Pg.137] [Pg.154] [Pg.184] [Pg.79]



