Random failure

The SIF hardware is often manufactured with electrical, electronic, programmable electronic, and mechanical components. Each component wears out or breaks down after a different length of time, depending on how well it was originally manufactured, how much it has been used, the variation in the operating conditions, etc. Since these components are lumped together to make a device, the failures of the device appear to be random, even though the failure distributions of the individual components may not be random. [Pg.135]

If it can be demonstrated that an SIF device (e.g., a block valve) has dominant time-based failure mechanisms (i.e., it wears out), the random failure rate model can lead to erroneous conclusions and practices. For example, in calculating test intervals, a random model may lead to testing more frequently than actually required during the early life of the device and testing too infrequently during the later wear-out phase. Owners/operators should be aware that reliability models (e.g., Weibull) are available that divide failures into infant mortality, random, and wear-out modes. This guideline assumes failures are random. [Pg.135]
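
As a rough illustration of the distinction drawn above, the sketch below evaluates the Weibull hazard rate for three shape parameters corresponding to the infant-mortality, random, and wear-out regimes. The shape parameters and characteristic life are assumptions chosen for illustration, not values from the guideline.

```python
# Minimal sketch: Weibull hazard rates for the three bathtub-curve regimes.
# All numerical values are illustrative assumptions.

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

eta = 10.0  # characteristic life, years (assumed)
for label, beta in [("infant mortality", 0.5), ("random", 1.0), ("wear-out", 3.0)]:
    rates = [weibull_hazard(t, beta, eta) for t in (1.0, 5.0, 10.0)]
    print(f"{label:16s} (beta={beta}): "
          f"h(1)={rates[0]:.3f}, h(5)={rates[1]:.3f}, h(10)={rates[2]:.3f}")
```

For beta = 1 the hazard rate is constant (random failures); beta < 1 gives a decreasing rate (early failures) and beta > 1 an increasing rate (wear-out).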

One very effective barrier against random device failures is to implement redundancy. Fault tolerance is provided using multiple devices in voting configurations that are appropriate for the SIL. If one device breaks down, another device is available to provide the safety action. Since failures occur randomly, it is less likely that multiple devices fail at the same time. [Pg.135]
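
A minimal sketch of why voting redundancy reduces the chance that all devices are failed at once, assuming independent random failures and an illustrative per-device probability of failure on demand. Common cause effects, discussed further below, are ignored here.

```python
# Minimal sketch: 1-out-of-N voting with independent random failures.
# The per-device PFD is an assumed illustrative value.

pfd_single = 1e-2  # assumed probability that one device is failed on demand

def pfd_1oo_n(pfd: float, n: int) -> float:
    """The safety action is lost only if all N devices are failed simultaneously.
    Assumes independence and ignores common cause failures."""
    return pfd ** n

for n in (1, 2, 3):
    print(f"1oo{n}: PFD ~ {pfd_1oo_n(pfd_single, n):.1e}")
```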

By observing the operation of a device over time, data can be collected about how often it breaks down. This information can be used to estimate how long a device is likely to last before it stops working properly. However, in the case of PE devices and logic solvers, the technology is evolving so rapidly that the reliability data collected on any device is often limited unless databases are pooled. [Pg.135]


Early failures may occur almost immediately, and the failure rate is determined by manufacturing faults or poor repairs. Random failures are due to mechanical or human failure, while wear failures occur mainly due to mechanical faults as the equipment becomes old. One of the techniques used by maintenance engineers is to record the mean time to failure (MTTF) of equipment items to find out in which period a piece of equipment is likely to fail. This provides some of the information required to determine an appropriate maintenance strategy for each equipment item. [Pg.287]
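
A minimal sketch of the record-keeping idea described above: estimating MTTF as the average of observed times to failure. The failure times are hypothetical.

```python
# Minimal sketch: MTTF estimated from hypothetical recorded times to failure.

from statistics import mean

times_to_failure_h = [8200.0, 11500.0, 9400.0, 10100.0, 12800.0]  # hours, assumed data

mttf_h = mean(times_to_failure_h)
print(f"Estimated MTTF: {mttf_h:.0f} h (~{mttf_h / 8760:.1f} years)")
```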

Variations in a product's material properties, service loads, environment and use typically lead to random failures over the most protracted period of the product's expected life-cycle. During the conditions of use, environmental and service variations give rise to temporary overloads or transients causing failures, although some failures are also caused by human-related events such as installation and operation errors rather than by any intrinsic property of the product's components (Klit et al., 1993). Variability, therefore, is also the source of unreliability in a product (Carter, 1997). However, it is evident that if product reliability is determined during the design process, subsequent manufacturing, assembly and delivery of the system will certainly not improve upon this inherent reliability level (Kapur and Lamberson, 1977). [Pg.21]

System models assume the independent probabilities of basic event failures. Violations of this assumed independence are called Systems Interactions, Dependencies, Common Modes, or Common Cause Failure (CCF), the term used here. CCF may cause deterministic, possibly delayed, failures of equipment, or an increase in the random failure probability of affected equipment. CCF may immediately affect redundant equipment with devastating effect because no time is available for mitigation. If the effect of CCF is a delayed increase in the random failure probability and is known, time is available for mitigation. [Pg.123]
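
One common way to quantify the effect of CCF on redundant equipment is the beta-factor model, in which an assumed fraction of each device's dangerous failure rate is taken to fail all redundant devices at once. The simplified 1oo2 sketch below uses assumed values for the failure rate, proof test interval, and beta fraction; it is not taken from the referenced analysis.

```python
# Minimal sketch: beta-factor common cause contribution to a 1oo2 group,
# using a simplified PFDavg expression and illustrative assumed values.

lambda_du = 1e-6   # dangerous undetected failure rate per hour (assumed)
ti = 8760.0        # proof test interval, hours (assumed: one year)
beta = 0.05        # assumed common cause fraction

pfd_independent = (((1 - beta) * lambda_du * ti) ** 2) / 3.0  # independent 1oo2 part
pfd_ccf = beta * lambda_du * ti / 2.0                         # common cause part
print(f"1oo2 PFDavg ~ {pfd_independent + pfd_ccf:.2e} "
      f"(CCF term {pfd_ccf:.2e} dominates here)")
```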

These six major steps require consideration of a) the occurrence frequency of fires, b) the physical effects of fires, and c) the response of the plant. The plant response is crucial, as it is affected by components damaged by fire and by components unavailable for other reasons (i.e., random failures, maintenance, and fire-fighting activities). [Pg.196]

The fault tree (Figure 7.4-1) has "Pressure Tank Rupture" as the top event (gate G1). This may result from random failure of the tank under load (BE1), OR gate G2, "Tank ruptures due to overpressure," which is made up of BE6, "Relief valve fails to open," AND G3, "Pump motor operates too long." G3 is made up of BE2, "Timer contacts fail to open," AND G4, "Negative feedback loop inactive," which is composed of BE3, "Pressure gauge stuck," OR BE4, "Operator fails to open switch," OR BE5, "Switch fails to open."... [Pg.304]
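
The gate logic described above can be evaluated with rare-event approximations (OR as a sum, AND as a product) once basic-event probabilities are available. The values below are purely illustrative assumptions, not data from the example.

```python
# Minimal sketch: top-event probability for the pressure tank fault tree,
# assuming independent basic events and rare-event approximations.

p = {"BE1": 1e-6, "BE2": 1e-3, "BE3": 1e-3, "BE4": 1e-2, "BE5": 1e-3, "BE6": 1e-2}

g4 = p["BE3"] + p["BE4"] + p["BE5"]   # negative feedback loop inactive (OR)
g3 = p["BE2"] * g4                    # pump motor operates too long (AND)
g2 = p["BE6"] * g3                    # tank ruptures due to overpressure (AND)
g1 = p["BE1"] + g2                    # pressure tank rupture (top event, OR)
print(f"P(top event G1) ~ {g1:.2e}")
```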

Answer It is unrealistic. No components are identical, but even if they were, the causes of random failure in one component are not correlated with those in the other component because they are defined to be random. However, components can fail at the same time from deterministic coupling such as fire, missiles, common utilities, etc. [Pg.498]

Answer You can treat them as one component in the component column and prepare the FMECA for the ways they can fail together. For random failure of both valves in a mission time, estimate the failure probability as one-half the failure rate of one times the probability of the other failing in the mission time. [Pg.498]
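
A minimal sketch of the approximation quoted in the answer, with assumed failure rates and an assumed mission time.

```python
# Minimal sketch: probability that both valves fail within a mission time T,
# approximated as (1/2) * (lambda1 * T) * (lambda2 * T). Values are assumed.

lambda1 = 2e-6       # failures per hour, valve 1 (assumed)
lambda2 = 2e-6       # failures per hour, valve 2 (assumed)
t_mission = 4380.0   # hours (assumed: roughly six months)

p_both = 0.5 * (lambda1 * t_mission) * (lambda2 * t_mission)
print(f"P(both valves fail in mission) ~ {p_both:.2e}")
```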

There are defect limits that are associated with random failure modes. For example, if there is a leak from a mechanical seal on a pump, where do we decide that the leakage is excessive and requires immediate maintenance? Vibration analysis severity levels are another typical example of deciding when conditions are severe enough to warrant equipment shutdown and overhaul. In such circumstances, the defect limit depends on individual subjective judgment. [Pg.1043]

The first limitation is set by the nature of random failure events. If failures occur with equal probability in time rather than periodically, the premise of PM, or periodic machinery maintenance, is incorrect. Therefore, the only alternative is continuous monitoring. [Pg.1044]

In terms of statistical methods, our NPPI Tool Library has several statistical analysis tools to capture innovation opportunities at processes that are likely to drift out of control or that execute with random failure. [Pg.184]

Direct evidence for the role of inflammatory processes in the progression of AD has been provided by the study of IL-6 mRNA expression in the hippocampus. Thus a positive correlation has been found between the progression of the symptoms from the moderate to the severe form of the disease and the increase in IL-6 mRNA expression. From these observations it can be concluded that the increase in inflammatory mediators in the brain plays an important, but not the only, part in the development and progression of AD. Thus the deposition of Aβ plaques in the cortex, an essential feature of the disease, may precede the changes in neurotransmitter function and the increase in inflammatory mediators. Whichever of these changes is eventually found to be of primary importance, it is now evident that AD is not due to a random failure of neuronal pathways but is the consequence of a systematic and progressive degeneration of different neuronal systems within the brain. [Pg.359]

The process we wish to consider is one in which a primary chemical species is modified by reaction with a number of secondary species A_1, ..., A_N in succession. These secondary species have to be prepared specially for the appropriate stage of the process and must be available at the right time; otherwise the whole process is rendered valueless. Such unstable preparations are not uncommon in biochemical engineering. If the preparation of A_n, n = 1, ..., N, is subject to random failure, the preparation of more than one batch of it will increase the probability of its being available at the right time. However, this must be carefully balanced against the increased cost of these extra preparations, and a problem of optimal specification arises. The system is illustrated in Fig. 8.1; r_n denotes the number of batches of A_n that are prepared. [Pg.160]
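
A minimal sketch of the trade-off described above, assuming a simple expected-cost criterion and illustrative values for the batch failure probability, batch cost, and loss if no batch is available. These are not the formulation or numbers used in the text.

```python
# Minimal sketch: choosing the number of batches r for one stage.
# With r independent batches each failing with probability q, the stage
# succeeds with probability 1 - q**r, while cost grows linearly in r.

q = 0.2                      # assumed probability that one batch preparation fails
batch_cost = 1.0             # assumed cost per batch
loss_if_unavailable = 50.0   # assumed loss if no batch is ready in time

def expected_cost(r: int) -> float:
    return r * batch_cost + (q ** r) * loss_if_unavailable

best_r = min(range(1, 10), key=expected_cost)
print(f"best r = {best_r}, expected cost = {expected_cost(best_r):.2f}")
```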

Rudd, D. F. 1960. A study of iterative optimization and on the design of processes subject to random failures. Ph.D. Thesis. Department of Chemical Engineering, University of Minnesota, Minneapolis, Minnesota. [Pg.187]

Method validations and drug substance or finished pharmaceutical product specifications are intimately linked. To ensure transferability of the method and to ensure the method will operate successfully in a QC site, the method variation (from the intermediate precision) should be known and monitored. The method variation is an estimate of the variation that will be experienced in routine use of the method. Greater method variation will create unacceptable random failure rates and provide no room for reasonable product variation or even minor stability changes. The method variation should be less than one-third of the interval from the mean or target value (typically the midpoint of the upper and lower specification limits) to the nearest specification limit, or one-sixth... [Pg.93]
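
A minimal sketch of the one-third criterion, interpreting the method variation as the intermediate-precision standard deviation (an assumption) and using hypothetical specification limits.

```python
# Minimal sketch: one-third rule check with hypothetical values.

lower_spec, upper_spec = 95.0, 105.0    # % of label claim (assumed)
target = (lower_spec + upper_spec) / 2  # midpoint, 100.0
method_sd = 1.2                         # intermediate precision, % (assumed)

interval_to_nearest_limit = min(target - lower_spec, upper_spec - target)
allowed_variation = interval_to_nearest_limit / 3.0

print(f"method SD {method_sd} vs allowed {allowed_variation:.2f}:",
      "acceptable" if method_sd <= allowed_variation else "too variable")
```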

I know. But we're certain that six of the lorries zapped so far contained a shift member. Their processors and ancillary circuits were suffering random failures. It matched the kind of interference which Adkinson's plane suffered."... [Pg.64]

There are no known or accepted rigorous or scientific ways to obtain probabilistic or even subjective likelihood information using historical data or analysis in the case of non-random failures and system design errors, including unsafe software behavior. When forced to come up with such evaluations, engineering judgment is usually used, which in most cases amounts to pulling numbers out of the air, often influenced by political and other nontechnical factors. Selecting a system architecture and making early architectural trade evaluations on such a basis is questionable, and this is perhaps one reason why risk usually does not play a primary role in the early architectural trade process. [Pg.320]

Generally, it is determined as the line fitted to the average values of the monitoring results at the measurement points. The evaluated transfer function, characteristic, or quantity is thereby marked as having the influence of random failures removed. However, it still includes systematic errors due to the product features. [Pg.102]

Compared with identical separation, which helps against random failures, diverse separation offers the additional benefit of reducing the probability of systematic faults and of reducing common cause failures. [Pg.36]

Two fundamentally different categories of failures exist: physical failures (often called random failures) and functional failures (often called systematic failures) (see Figure 3-1). Random failures are relatively well understood. A random failure is almost always permanent and attributable to some component or module. For example, a system that consists of a programmable electronic controller module fails. The controller output de-energizes and no longer supplies current to a solenoid valve. The controller diagnostics identify a bad output transistor component. [Pg.28]

It should be noted that lightning is considered a random event. Many failure classification schemes use the term random failure because stress events are generally random. Some failure classification schemes use the term physical failure for the same thing. [Pg.28]

If an engineer programming a safety function entered an incorrect logic block such that a safety instrumented function would not perform its protective function, that failure would also be considered a systematic failure. Again, the hardware is fully capable of executing the programmed logic, no random failure has occurred, but the safety instrumented function would not work. [Pg.29]

Current functional safety standards, IEC 61508 and ANSI/ISA-84.00.01-2004 (IEC 61511 Mod) (Ref. 1 and 2), state that probabilistic evaluation using failure rate data be done only for random failures. To reduce the chance of systematic failures, the standards include a series of "design rules" in the form of specific requirements. These requirements state that the safety instrumented system designer must check a wide range of things in order to detect and eliminate systematic failures. [Pg.29]

A software bug causes a logic solver to fail in an unpredictable and apparently random manner. Will this failure be considered a random failure or a systematic failure? ... [Pg.39]

Formulas for MTTF are derived and often used for products during the useful life period. This method excludes wear-out failures. This often results in a situation in which the MTTF is 300 years and the useful life is only 40 years. Note that instruments should be replaced before the end of their useful life. When this is done, the mean time to random failures will be similar to the number predicted. [Pg.46]
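
A minimal sketch of the arithmetic behind that observation, assuming a constant (random) failure rate during the useful life. The failure rate below is an assumed illustrative value.

```python
# Minimal sketch: under a constant failure rate, MTTF = 1 / lambda,
# which can greatly exceed the useful life before wear-out.

lambda_per_hour = 3.8e-7         # assumed constant failure rate (~380 FIT)
mttf_years = 1.0 / lambda_per_hour / 8760.0
useful_life_years = 40.0         # assumed wear-out limit

print(f"MTTF ~ {mttf_years:.0f} years, useful life {useful_life_years:.0f} years")
```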

Even with approximate data, the methods began to show how designers could achieve higher levels of safety while optimizing costs. The safety verification calculations required by the new functional safety standards have shown designers how to produce much more balanced designs. The calculations have shown many how to do a better job. But failure rate and failure mode data for random failures of the chosen equipment are required. [Pg.117]

The concept of random failures versus systematic failures was presented in Chapter 3. One must understand the differences in order to understand failure rate data. For safety instrumented function verification... [Pg.117]

Lack of distinction between random failures and wear out failures,... [Pg.118]

When total time in operation is not recorded, failures due to wear out cannot be distinguished from random failures during the useful life. If these failures are grouped together, the data analyst cannot distinguish between them and will typically assume that all failures are random. The resulting failure rate number is too high. In addition, the opportunity to establish the useful life period is also lost. [Pg.119]
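
A minimal sketch of the estimate that becomes possible when total operating time is recorded, so wear-out failures can first be screened out. All numbers are hypothetical.

```python
# Minimal sketch: random failure rate estimated from hypothetical field data.

failures_in_useful_life = 4          # assumed count, wear-out failures excluded
cumulative_operating_hours = 1.2e7   # assumed fleet hours within the useful life

lambda_est = failures_in_useful_life / cumulative_operating_hours
print(f"estimated random failure rate ~ {lambda_est:.1e} per hour")
```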

In safety instrumented function verification calculations, the task is to calculate the probability of failure on demand due to random failures. This is done assuming that a preventative maintenance program has been established per the requirements of IEC 61508 (Ref. 3) to replace instruments before the end of their useful life. [Pg.119]
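
A minimal sketch of the simplified 1oo1 relationship often used in such calculations, PFDavg ~ lambda_DU * TI / 2, which presumes that the assumption above holds and instruments are replaced before wear-out. The values are assumed for illustration.

```python
# Minimal sketch: simplified average probability of failure on demand for a
# single (1oo1) instrument with periodic proof testing. Values are assumed.

lambda_du = 5e-7                 # dangerous undetected failure rate per hour (assumed)
proof_test_interval_h = 8760.0   # assumed annual proof test

pfd_avg = lambda_du * proof_test_interval_h / 2.0
print(f"PFDavg ~ {pfd_avg:.2e}  (within the SIL 2 band of 1e-3 to 1e-2)")
```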

When details about failure cause are not collected, failures due to maintenance errors, calibration errors and other systematic faults cannot be distinguished from random failures. The result is a number that can be too high. [Pg.119]

In the opinion of committee members on functional safety standards, some of the above factors cannot be practically quantified, e.g., systematic faults like software bugs or procedural errors. Hence functional safety standards provide requirements for protection against systematic faults as well as requirements to do probabilistic calculations to protect against random failures. For the typical SIF solutions being reviewed in this chapter the results of probabilistic SIL verification calculations, including architecture limitations per IEC 61508 (Ref. 1), will be used to demonstrate whether the design satisfies the SIL requirements. [Pg.174]

In many ways modeling the repair process is difficult because the repair process is quite different from the failure process. Random failures are due to a stochastic process and most of our modeling techniques were created for these stochastic processes. Certain aspects of the repair process are deterministic. Other aspects of the repair process are stochastic. Fortunately, we can approximate the repair process more accurately with Markov models than most other techniques. [Pg.357]
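
A minimal sketch of the simplest such model: a two-state (up/down) Markov chain with a constant random failure rate and a constant repair rate, the latter being itself an approximation of a partly deterministic repair process. The rates are assumed.

```python
# Minimal sketch: two-state Markov availability model with assumed rates.

failure_rate = 1e-4      # per hour (assumed random failure rate)
repair_rate = 1.0 / 8.0  # per hour, i.e., an 8-hour mean time to repair (assumed)

steady_state_availability = repair_rate / (failure_rate + repair_rate)
print(f"steady-state availability ~ {steady_state_availability:.6f}")
```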

Field data may include both systematic and random failures... [Pg.373]

