Big Chemical Encyclopedia


Statistics of Failure

In the rest of this chapter, we briefly discuss the theoretical ideas and the models employed for the study of failure of disordered solids and other dynamical systems. In particular, we give a very brief summary of percolation theory and its models (both lattice and continuum). The various lattice statistical exponents and (fractal) dimensions are introduced here. We then give a brief introduction to the concept of stress concentration around a sharp edge of a void or impurity cluster in a stressed solid. The concept is then extended to derive the extreme statistics of failure of randomly disordered solids. Here, we also discuss the competition between percolation and extreme statistics in determining the breakdown statistics of disordered solids. Finally, we discuss self-organised criticality and some models showing such critical behaviour. [Pg.4]

Figure 6. Statistics of failure gradient distribution (143 from 16 projects).
Statistical analysis of failures of equipment shows a characteristic trend with time, often described as the bathtub curve ... [Pg.286]

Many distribution functions can be applied to strength data of ceramics, but the function that has been most widely applied is the Weibull function, which is based on the concept of failure at the weakest link in a body under simple tension. A normal distribution is inappropriate for ceramic strengths because extreme values of the flaw distribution, not the central tendency of the flaw distribution, determine the strength. One implication of Weibull statistics is that large bodies are weaker than small bodies because the number of flaws a body contains is proportional to its volume. [Pg.319]
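The weakest-link reasoning above can be made concrete with the two-parameter Weibull form P_f = 1 - exp[-(V/V0)(σ/σ0)^m]. The sketch below uses illustrative placeholder values for the characteristic strength, reference volume, and Weibull modulus, not data for any real ceramic:

```python
import math

def weibull_failure_prob(stress, volume, sigma0=300.0, v0=1.0, m=10.0):
    # Two-parameter weakest-link Weibull: P_f = 1 - exp(-(V/V0)*(stress/sigma0)**m).
    # sigma0 (MPa), v0, and m are illustrative placeholders only.
    return 1.0 - math.exp(-(volume / v0) * (stress / sigma0) ** m)

def median_strength(volume, sigma0=300.0, v0=1.0, m=10.0):
    # Stress at which P_f = 0.5; scales as V**(-1/m), so larger bodies are weaker.
    return sigma0 * (math.log(2.0) * v0 / volume) ** (1.0 / m)

# A body ten times larger fails, on median, at a lower stress:
print(median_strength(1.0), median_strength(10.0))
```

Because the flaw count scales with volume, increasing V at fixed stress increases the exponent's magnitude, which is exactly the "large bodies are weaker" statement in the text.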

There are a variety of ways to express absolute QRA results. Absolute frequency results are estimates of the statistical likelihood of an accident occurring. Table 3 contains examples of typical statements of absolute frequency estimates. These estimates for complex system failures are usually synthesized using basic equipment failure and operator error data. Depending upon the availability, specificity, and quality of failure data, the estimates may have considerable statistical uncertainty (e.g., factors of 10 or more because of uncertainties in the input data alone). When reporting single-point estimates or best estimates of the expected frequency of rare events (i.e., events not expected to occur within the operating life of a plant), analysts sometimes provide a measure of the sensitivity of the results arising from data uncertainties. [Pg.14]

The calculated loading stress, L, on a component is not only a function of the applied load, but also of the stress analysis technique used to find the stress, the geometry, and the failure theory used (Ullman, 1992). Using the variance equation, the parameters for the dimensional variation estimates and the applied load distribution, a statistical failure theory can then be formulated to determine the stress distribution, f(L). This is then used in the SSI analysis, together with the material strength distribution f(S), to determine the probability of failure. [Pg.191]
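As an illustration of the SSI (stress-strength interference) idea, if both the load distribution f(L) and the strength distribution f(S) are taken as normal — an assumption made purely for this sketch — the probability of failure P(L > S) has a closed form:

```python
import math

def ssi_failure_probability(mu_load, sd_load, mu_strength, sd_strength):
    # For independent normal L and S, L - S is normal with mean
    # mu_load - mu_strength and sd sqrt(sd_load**2 + sd_strength**2).
    z = (mu_load - mu_strength) / math.hypot(sd_load, sd_strength)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Illustrative numbers only (MPa): load 300 +/- 30, strength 400 +/- 40.
pf = ssi_failure_probability(300.0, 30.0, 400.0, 40.0)
print(pf)  # about 0.023 (z = -2)
```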

Corley, J. E., "Troubleshooting Turbomachinery Problems Using a Statistical Analysis of Failure Data," Proceedings of the 19th Turbomachinery Symposium, Texas A&M University, College Station, TX, 1990, pp. 149-158. [Pg.490]

In the introduction to this section, two differences between "classical" and Bayes statistics were mentioned. One of these was the Bayes treatment of failure rate and demand probability as random variables. This subsection provides a simple illustration of a Bayes treatment for calculating the confidence interval for demand probability. The direct approach taken here uses the binomial distribution (equation 2.4-7) for the probability density function (pdf). If p is the probability of failure on demand, then the confidence that p is less than a given value is given by equation 2.6-30. [Pg.55]
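A minimal numerical sketch of this kind of Bayes treatment, assuming a uniform prior on the demand probability p (so the posterior is Beta(k+1, n-k+1)); this is an illustration of the idea, not a reproduction of equations 2.4-7 or 2.6-30:

```python
def _beta_kernel_integral(upper, k, n, steps=20000):
    # Trapezoid-rule integral of p**k * (1-p)**(n-k) from 0 to `upper`.
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        p = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (p ** k) * ((1.0 - p) ** (n - k))
    return total * h

def bayes_confidence(n_demands, n_failures, p1):
    # P(p < p1 | data) under a binomial likelihood and uniform prior,
    # i.e. the Beta(n_failures + 1, n_demands - n_failures + 1) CDF at p1.
    return (_beta_kernel_integral(p1, n_failures, n_demands)
            / _beta_kernel_integral(1.0, n_failures, n_demands))

# Illustrative: 10 demands with no failures -> high confidence p is below 0.5.
print(bayes_confidence(10, 0, 0.5))
```

The numerical integration keeps the sketch dependency-free; with SciPy available, the same quantity is the incomplete beta function.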

Failure rates are computed by dividing the total number of failures for the equipment population under study by the equipment's total exposure hours (for time-related rates) or by the total demands upon the equipment (for demand-related rates). In plant operations, there are a large number of unmeasured and varying influences on both numerator and denominator throughout the study period or during data processing. Accordingly, a statistical approach is necessary to develop failure rates that represent the true values. [Pg.11]
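The two rate definitions above can be sketched directly (the numbers are illustrative only):

```python
def time_based_failure_rate(n_failures, exposure_hours):
    # Failures per operating hour across the equipment population.
    return n_failures / exposure_hours

def demand_based_failure_prob(n_failures, n_demands):
    # Failures per demand placed on the equipment population.
    return n_failures / n_demands

# Illustrative: 3 failures in 150,000 pump-hours; 2 failures in 1,000 starts.
print(time_based_failure_rate(3, 150_000))   # 2e-05 failures/hour
print(demand_based_failure_prob(2, 1_000))   # 0.002 failures/demand
```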

Three reports have been issued containing IPRDS failure data. Information on pumps, valves, and major components in NPP electrical distribution systems has been encoded and analyzed. All three reports provide introductions to the IPRDS, explain failure data collections, discuss the type of failure data in the data base, and summarize the findings. They all contain comprehensive breakdowns of failure rates by failure modes with the results compared with WASH-1400 and the corresponding LER summaries. Statistical tables and plant-specific data are found in the appendixes. Because the data base was developed from only four nuclear power stations, caution should be used for other than generic application. [Pg.78]

Statistical Methods for Nonelectronic Reliability, Reliability Specifications, Special Application Methods for Reliability Prediction, Part Failure Characteristics, and Reliability Demonstration Tests. Data is located in section 5.0 on Part Failure Characteristics. This section describes the results of the statistical analyses of failure data from more than 250 distinct nonelectronic parts collected from recent commercial and military projects. This data was collected in-house (from operations and maintenance reports) and from industry-wide sources. Tables, alphabetized by part class/part type, are presented for easy reference to part failure rates assuming that the part lives are exponentially distributed (as in previous editions of this notebook, the majority of data available included total operating time and total number of failures only). For parts for which the actual life times for each part under test were included in the database, further tables are presented which describe the results of testing the fit of the exponential and Weibull distributions. [Pg.87]

The limit of detection (LoD) has already been mentioned in Section 4.3.1. This is the minimum concentration of analyte that can be detected with statistical confidence, based on the concept of an adequately low risk of failure to detect a determinand. Only one value is indicated in Figure 4.9, but there are many ways of estimating the value of the LoD and the choice depends on how well the level needs to be defined. It is determined by repeat analysis of a blank test portion or a test portion containing a very small amount of analyte. A measured signal of three times the standard deviation of the blank signal (3s_bl) is unlikely to happen by chance and is commonly taken as an approximate estimation of the LoD. This approach is usually adequate if all of the analytical results are well above this value. The value of s_bl used should be the standard deviation of the results obtained from a large number of batches of blank or low-level spike solutions. In addition, the approximation only applies to results that are normally distributed and are quoted with a level of confidence of 95%. [Pg.87]
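A short sketch of the three-times-blank-standard-deviation estimate described above; the blank readings are invented for illustration, and converting a signal LoD into a concentration would additionally require the calibration slope:

```python
import statistics

def limit_of_detection(blank_signals, k=3.0):
    # LoD estimated as k * (sample standard deviation of blank measurements);
    # k = 3 is the rule of thumb from the text. Ideally s_bl comes from many
    # batches of blanks, not a single run.
    return k * statistics.stdev(blank_signals)

# Invented blank readings, for illustration only:
blanks = [0.11, 0.09, 0.10, 0.12, 0.08, 0.10, 0.11, 0.09]
print(limit_of_detection(blanks))
```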

The comparison of the safety of equipment is not straightforward. It depends on several features of both the process and the equipment themselves. It can be evaluated from quantitative accident and failure data and from engineering practice and recommendations. Experience has been used for layout recommendations and for the development of safety analysis methods such as the Dow F&E Index (Dow, 1987). Statistics contain details, causes and rates of failures of equipment and data on equipment involved in large losses. [Pg.55]

There is also a certain amount of statistical information available on the failures of process system components. Arulanantham and Lees (1981) have studied pressure vessel and fired heater failures in process plants such as olefins plants. They define failure as a condition in which a crack, leak or other defect has developed in the equipment to the extent that repair or replacement is required, a definition which includes some of the potentially dangerous as well as all catastrophic failures. The failure rates of equipment are related to some extent to the safety of process items. If a piece of equipment has a long history of failures, it may cause safety problems in the future; it may therefore be better to consider alternative equipment instead. It should be remembered that reliability or failure information does not express safety directly, since not all failures are dangerous and not all accidents are due to failures of equipment. [Pg.56]

The aims of the automation group at LGC were very clear and are shown in Table 1.3. Simplicity was considered to be the best approach, with the minimum number of processes being involved. A more complex approach has many more chances of failure. The total systems approach is defined in Chapter 3. Essentially, it sets out to cover all aspects of the analytical process as defined in Fig. 1.2. It provides for the quality checks at operator, supervisor and managerial levels, and reliable results transferred in a readily digestible format. The Tar and Nicotine Survey described by Stockwell and Copeland [15] is a good example of the approach. The total process, from the statistical sampling pattern through to quality-controlled data, leads in its final format to results tabulated for public information. [Pg.259]

An important consideration in all failure studies is the influence of material variability. Statistical distributions of failure incidence must be known and properly accounted for if reliability limits are to be set. Wiegand and co-workers (14, 113) have discussed propellant sample and batch variability, and its effect on failure behavior, in numerous reports. These studies point out the statistical nature of failure and the fact that knowledge of the distributions is required to set conservative design values for motor stress and strain capability. Statistical distributions permit the prediction of the probability of failure, but mission considerations dictate the allowable failure frequencies. [Pg.228]

Bills (7) has applied an adaptation of this law to solid propellants and propellant-liner bonds for discrete, constantly imposed stress levels, considering t_i to be the time at the ith stress level and t_fi the mean time to failure at the ith stress level. A probability distribution function P was included to account for the statistical distribution of failures. For cyclic stress tests the time is the number of cycles divided by the frequency, and the ith loading is the amplitude. The empirical relationship... [Pg.236]
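Read as a linear cumulative-damage rule (a Miner-style interpretation of the sum of t_i/t_fi over stress levels, used here as an illustrative assumption rather than Bills' exact relationship), the criterion can be sketched as:

```python
def cumulative_damage(times_at_level, mean_times_to_failure):
    # Linear damage sum D = sum_i t_i / t_fi; failure is predicted when D >= 1.
    # For cyclic tests, t_i = (number of cycles at level i) / frequency.
    return sum(t / tf for t, tf in zip(times_at_level, mean_times_to_failure))

# Illustrative: 100 h at a level with t_f = 400 h, then 50 h at t_f = 100 h.
D = cumulative_damage([100.0, 50.0], [400.0, 100.0])
print(D)  # 0.75 -> failure not yet predicted
```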

To demonstrate, statistically, low failure rates on EEDs requires that an enormously large number of destructive tests be performed. And separate tests, in large numbers, are required for each new type of EED appearing on the market. It is evident that the destructive method is expensive and time consuming, and for this reason the need for alternative or, at least, complementary techniques became recognized... [Pg.709]

Between 1985 and 1991, 1726 natural gas pipeline ruptures and leakages were reported in the United States. These incidents resulted in 634 injuries and 131 fatalities. Third-party damage was the most common cause of these incidents, followed by corrosion. The GAO believes that the corrosion-related incidents can be reduced with the use of smart pigs (46). U.S. DOT 1992 accident statistics showed that 52.5% of U.S. oil spills involving loss of at least 1590 m³ came from pipeline accidents, comparable to the worldwide statistic of 51.5%. The U.S. DOT regulated 344,575 km of liquids pipelines during the 10-yr study period and received reports on 1901 accidents during that time; thus the number of failures per year per 1000 miles was 0.888, of which 27% was due to corrosion and 31% to outside forces (48). [Pg.51]
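The quoted rate can be checked arithmetically from the figures in the paragraph; the only assumption added here is the standard kilometre-to-mile conversion:

```python
KM_PER_MILE = 1.609344  # international mile

km_regulated = 344_575
accidents = 1901
years = 10

thousand_miles = km_regulated / KM_PER_MILE / 1000.0
rate = accidents / years / thousand_miles
print(round(rate, 3))  # reproduces the quoted ~0.888 failures/yr per 1000 miles
```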

The statistics of expenditure and NDA approvals can mask a major source of R&D cost and frustration in the industry: late-stage development and postmarketing failures. These types of failures attract significant unwanted publicity and occur only after hundreds of millions of dollars have been spent. Well-publicized examples have included the recent late-stage failure of torcetrapib (Tall et al., 2007) and the postmarketing withdrawals of fenfluramine-phentermine (Fen-Phen) and Vioxx (Embi et al., 2006). [Pg.4]

