Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Residual fault

Induced residual fault current through the phase CTs. [Pg.689]

Definition A.12 (Residual) Fault indicator based on deviations between measurements and model-equation-based calculations. [Pg.242]

A worst-case reliability bound theory that can be applied to both continuous and demand-based systems [Bishop 1996, Bishop 2002a]. In its simplest form, this method only requires an estimate of the number of residual faults (N) and the number of tests (T). The theory predicts that after T test demands under a given test profile, the expected value of the probability of failure on demand, E(PFD), will be bounded by ... [Pg.179]
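The bound itself is elided in the excerpt above. A minimal sketch of how such a bound would be evaluated, under the assumption (taken from the cited worst-case theory, not stated in this excerpt) that the bound has the form E(PFD) <= N / (e * T):

```python
import math

def worst_case_pfd_bound(n_faults: float, n_tests: int) -> float:
    """Worst-case bound on the expected probability of failure on
    demand after n_tests test demands, given an estimate n_faults of
    residual faults. Assumed form of the bound: E(PFD) <= N / (e * T);
    the exact expression is elided in the source excerpt."""
    if n_tests <= 0:
        raise ValueError("at least one test demand is required")
    return n_faults / (math.e * n_tests)

# Illustrative figures only: N = 6 residual faults, T = 22 000 demands
print(worst_case_pfd_bound(6, 22_000))  # roughly 1e-4
```

Note the bound decreases inversely with the number of tests, matching the later remark that reliability improves almost inversely with t.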

Estimate the likelihood of residual faults in the PLC application logic... [Pg.180]

We first describe the industrial application used in this study, and then describe the approach used to estimate the number of residual faults and the probability of failure on demand for the logic example. This includes an analysis of the sensitivity of the estimate to changes in the operational profile. [Pg.180]

To estimate the probability of failure per demand, we first require an estimate for the number of residual faults (N). A theory was developed in [Bishop 2002b] that relates the code coverage achieved to the number of residual faults. This is based on the concept of executing a coverage element, which is some part of the code structure, e.g. a program statement, a program block between decision points, a program branch, etc. The theory assumed that ... [Pg.181]

To apply this theory to the logic example, we need to identify a suitable coverage measure for the logic, and then derive the F parameter in order to predict the number of residual faults. An ideal coverage measure would ... [Pg.181]

With ten binary outputs, only 2^10 output combinations are possible for the example logic, and in practice the constraints imposed by the logic network exclude the majority of output combinations. An analysis of the intended function of the logic suggested that only 12 of the output combinations should occur. This was likely to be a relatively coarse measure for estimating the number of residual faults. [Pg.182]

Output coverage exhibited a better ordering between coverage measure and detected faults with different test profiles, but the coverage hit 100% before all faults were revealed. We concluded that output coverage was too coarse for predicting residual faults with any accuracy. [Pg.186]

Having parameterised the model it is possible to convert a coverage measure into a residual fault estimate. Using the logic simulator we measured the coverage achieved using the customer tests that were applied to the Intermediate version of PLC logic implementation. The customer tests are summarised in Table 2. [Pg.186]

Assuming that the 6 faults found are representative, the estimate for residual faults is ... [Pg.187]

From an analysis of 2.2 10 customer tests we estimated that the number of residual faults was N=0.26. What does this imply for the future reliability of the logic software? We compared the predictions of the black-box Bayesian method of [Littlewood 1993] and the worst-case reliability model presented in [Bishop 2002a]. [Pg.191]

The worst-case reliability function [Bishop 2002a] is similar in shape to the Bayesian reliability function, where the probability of operating without failure R(t) decreases almost inversely with the number of tests t. We have two ways of using the worst-case theory: using the initial values of N=6 and T=2.2 10, or the final fractional value of N=0.26 while claiming no credit for prior testing (T=0). As mentioned earlier, when the number of residual faults N is fractional, the probability of survival is asymptotic to 1-N; for the case where N=0.26, the asymptote is 74%. The reliability predictions are shown in Figure 9. [Pg.191]
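The asymptote mentioned above follows from a simple observation: a fractional estimate N < 1 can be read as the probability that any fault is present at all, so with probability 1 - N the logic never fails. A one-line sketch of that arithmetic:

```python
def survival_asymptote(n_residual: float) -> float:
    """Long-run lower bound on the probability of operating without
    failure when the residual-fault estimate N is fractional: with
    probability 1 - N no fault is present, so survival is asymptotic
    to 1 - N (clamped at zero for N >= 1)."""
    return max(0.0, 1.0 - n_residual)

print(survival_asymptote(0.26))  # the 74% asymptote quoted in the text
```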

There is a non-linear relationship between coverage and faults. A coverage growth model can be fitted to the observed data to estimate residual faults, but it is probably conservative to assume a linear relationship between coverage and faults found, and then devise a test strategy that maximises the coverage (like MC/DC random testing). [Pg.192]
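Under the linear simplification suggested above, a residual-fault estimate can be obtained by extrapolating the faults found to the uncovered fraction of the coverage elements. A sketch of that extrapolation, using hypothetical figures (the actual coverage values are not given in these excerpts):

```python
def residual_faults_linear(faults_found: int, coverage: float) -> float:
    """Residual-fault estimate under the simplifying assumption that
    faults found grow linearly with coverage: if coverage C revealed
    F faults, the uncovered fraction (1 - C) is assumed to hide a
    further F * (1 - C) / C faults."""
    if not 0.0 < coverage <= 1.0:
        raise ValueError("coverage must be a fraction in (0, 1]")
    return faults_found * (1.0 - coverage) / coverage

# Hypothetical: 6 faults found at 96% coverage of the elements
print(residual_faults_linear(6, 0.96))  # roughly 0.25
```

At 100% coverage the estimate goes to zero, which is why a coverage measure that saturates before all faults are revealed (like the output coverage discussed earlier) is too coarse for this purpose.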

Coverage analysis theory can be used to estimate the number of residual faults in logic networks. [Pg.192]

[Bishop 2002b] P.G. Bishop, "Estimating Residual Faults from Code Coverage", Safecomp 2002, pp. 163-174, Catania, Italy, 10-13 Sep. 2002. [Pg.193]

It is considered that the concept of error-free software cannot currently be realised in any but the simplest of programs, and hence measures to reduce the possible effects of software errors are taken. The design intent adopted is that no single software error is to invalidate more than one line of protective logic. Diverse software production methods between channels are specified in order to meet the intent. If achieved, this reduces the possibility of software-induced CMF to the level of random, coincident residual faults. In a numerical sense the design intent is that software faults shall not compromise the assessed CMF reliability measures discussed in the previous section. Since quantitative demonstration of achievement cannot be made, qualitative aspects of the methods of software production, discussed briefly below, are assessed and lead to the judgement that dependent software faults in both channels leading to a potentially unsafe condition can be discounted. [Pg.161]

This metric reflects the robustness of the item to single-point and residual faults, either by coverage from safety mechanisms or by design (primarily safe faults). A high single-point fault metric implies that the proportion of single-point faults and residual faults in the hardware of the item is low. [Pg.149]
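The excerpt above describes the ISO 26262 single-point fault metric qualitatively. A sketch of the metric's usual form — one minus the share of the safety-related failure rate attributable to single-point and residual faults — with illustrative FIT values that are not from the source:

```python
def single_point_fault_metric(lam_spf: float, lam_rf: float,
                              lam_total: float) -> float:
    """Single-point fault metric in the form used by ISO 26262-5:
    SPFM = 1 - (lambda_SPF + lambda_RF) / lambda_total,
    where the rates cover the safety-related hardware of the item.
    A value near 1 means few uncovered single-point/residual faults."""
    if lam_total <= 0.0:
        raise ValueError("total failure rate must be positive")
    return 1.0 - (lam_spf + lam_rf) / lam_total

# Hypothetical element: 2 FIT single-point, 1 FIT residual, 100 FIT total
print(single_point_fault_metric(2.0, 1.0, 100.0))  # 0.97, i.e. 97%
```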

The failure rate assigned to residual faults can be determined using the diagnostic coverage of safety mechanisms that avoid single-point faults of the hardware element. The following equation gives a conservative estimation of the failure rate associated with the residual faults ... [Pg.150]
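The equation itself is elided in the excerpt above, but the described estimation — the fraction of the element's failure rate not caught by the safety mechanism remains as residual faults — can be sketched as follows (FIT values are illustrative, not from the source):

```python
def residual_fault_rate(lam: float, dc_percent: float) -> float:
    """Conservative estimate of the failure rate associated with
    residual faults: lambda_RF = lambda * (1 - DC/100), where DC is
    the diagnostic coverage (in percent) of the safety mechanism
    guarding the hardware element."""
    if not 0.0 <= dc_percent <= 100.0:
        raise ValueError("diagnostic coverage is a percentage")
    return lam * (1.0 - dc_percent / 100.0)

# Hypothetical: a 50 FIT element guarded with 99% diagnostic coverage
print(residual_fault_rate(50.0, 99.0))  # roughly 0.5 FIT residual
```

This is conservative because it assigns the entire uncovered fraction to residual faults, rather than splitting it among the fault classes of NOTE 3 below.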

DC_RF: Diagnostic Coverage with respect to residual faults, as a percentage. [Pg.150]

NOTE 3 If the above estimations are considered too conservative, then a detailed analysis of the failure modes of the hardware element can classify each failure mode into one of the fault classes (single-point faults, residual faults, latent, detected or perceived multiple-point faults, or safe faults) with respect to the specified safety goal, and determine the failure rates apportioned to the failure modes. Annex B describes a flow diagram that can be used to make the fault classification. [Pg.151]
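A much-simplified sketch of the kind of classification flow NOTE 3 refers to. This is an assumption-laden illustration, not the Annex B diagram: it collapses the multiple-point subclasses (detected, perceived, latent) into one branch and reduces the decision inputs to three booleans.

```python
def classify_fault(safety_related: bool, violates_goal_alone: bool,
                   covered_by_mechanism: bool) -> str:
    """Simplified fault classification with respect to one safety
    goal. Real ISO 26262 classification (Annex B flow) also splits
    multiple-point faults into detected/perceived/latent; omitted."""
    if not safety_related or not violates_goal_alone:
        # cannot violate the safety goal by itself
        return "safe or multiple-point fault"
    if not covered_by_mechanism:
        return "single-point fault"  # no safety mechanism covers it
    return "residual fault"          # covered, but coverage < 100%

print(classify_fault(True, True, True))  # residual fault
```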

ISO 26262 describes two alternative methods to assess the influence of failures in the design or realisation in relation to the safety goals. The first method considers a quantitative evaluation of the probability that random hardware faults violate a specific safety goal. Alternatively, it is assumed that in a safe design and its correct realisation, about one hundred single-point or residual faults could be identified. [Pg.155]


See other pages where Residual fault is mentioned: [Pg.179]    [Pg.181]    [Pg.181]    [Pg.186]    [Pg.192]    [Pg.257]    [Pg.137]    [Pg.146]    [Pg.146]    [Pg.147]    [Pg.150]    [Pg.157]   
See also in sourсe #XX -- [ Pg.137 , Pg.149 , Pg.151 , Pg.157 ]







© 2024 chempedia.info