
Failure time, statistical approach

In addition, the pipe lifetime in brittle fracture for various hoop stresses can be predicted when the maximum inclusion size is controllable. A technique for lifetime prediction under variable hoop stress with a controllable inclusion size (in this case, the inclusion size is taken as 74.5 μm) is illustrated in Figure 8. As reported by many authors, the failure time in brittle fracture exhibits a large scatter, which is mainly associated with the random size and location of the inclusions. This fact is also well known for other materials, such as high-strength steel [10] and surface-hardened steel [11]. For a better prediction of the scatter in failure time, a statistical approach is needed. Thus, the lower and upper bounds of failure time for the applied hoop stress (see Figure 8) take the uncertainty of pipe lifetime into account. [Pg.2443]

Failure rates are computed by dividing the total number of failures for the equipment population under study by the equipment's total exposure hours (for time-related rates) or by the total demands upon the equipment (for demand-related rates). In plant operations, there are a large number of unmeasured and varying influences on both the numerator and the denominator throughout the study period or during data processing. Accordingly, a statistical approach is necessary to develop failure rates that represent the true values. [Pg.11]
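
As a minimal illustration of this calculation (the counts, exposure hours, and demand total below are hypothetical, and the chi-square interval is one common way to express the statistical uncertainty, not a method taken from the cited source):

    # Sketch: point estimates and a confidence interval for a failure rate.
    from scipy.stats import chi2

    failures = 7                 # observed failures in the population (hypothetical)
    exposure_hours = 1.2e6       # total accumulated operating hours (hypothetical)
    demands = 4500               # total demands on the equipment (hypothetical)

    time_rate = failures / exposure_hours     # failures per hour
    demand_rate = failures / demands          # failures per demand

    # Exact Poisson (chi-square) 90% interval for the time-related rate.
    alpha = 0.10
    rate_low = chi2.ppf(alpha / 2, 2 * failures) / (2 * exposure_hours)
    rate_high = chi2.ppf(1 - alpha / 2, 2 * failures + 2) / (2 * exposure_hours)

    print(f"time-related rate:   {time_rate:.2e}/h  (90% CI {rate_low:.2e}..{rate_high:.2e})")
    print(f"demand-related rate: {demand_rate:.2e} per demand")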

A more common method for medical devices is to run the life test until failure occurs. An exponential model can then be used to calculate the percentage survivability, and confidence limits on this calculation can be established using a chi-square distribution. These calculations assume that a failure is equally likely to occur at any time. If this assumption is unreasonable (e.g., if there are a number of early failures), it may be necessary to use a Weibull model to calculate the mean time to failure. This statistical model requires the determination of two parameters and is much more difficult to apply to a test in which some devices survived. In the heart-valve industry, lifetime prediction based on S-N (stress versus number of cycles) or damage-tolerant approaches is required. These methods require fatigue testing and the ability to predict crack growth. [Pg.336]
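
The following sketch shows the kind of calculation described above for a time-terminated exponential life test; the device-hours, failure count, mission time, and confidence level are hypothetical and are not taken from the cited source:

    # Sketch: exponential life-test analysis with chi-square confidence limits.
    import numpy as np
    from scipy.stats import chi2

    T = 50_000.0    # total accumulated device-hours on test (hypothetical)
    r = 4           # number of failures observed (hypothetical)
    alpha = 0.10    # for a 90% two-sided confidence interval

    mttf_hat = T / r                                        # point estimate of mean time to failure
    mttf_low = 2 * T / chi2.ppf(1 - alpha / 2, 2 * r + 2)   # lower limit (time-terminated test)
    mttf_high = 2 * T / chi2.ppf(alpha / 2, 2 * r)          # upper limit

    mission = 8_760.0                                       # one year of continuous use
    survivability = np.exp(-mission / mttf_hat)             # constant-hazard (exponential) assumption

    print(f"MTTF estimate: {mttf_hat:.0f} h (90% CI {mttf_low:.0f}..{mttf_high:.0f} h)")
    print(f"Estimated one-year survivability: {survivability:.1%}")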

The limit of detection (LoD) has already been mentioned in Section 4.3.1. This is the minimum concentration of analyte that can be detected with statistical confidence, based on the concept of an adequately low risk of failure to detect a determinand. Only one value is indicated in Figure 4.9, but there are many ways of estimating the value of the LoD, and the choice depends on how well the level needs to be defined. It is determined by repeat analysis of a blank test portion or a test portion containing a very small amount of analyte. A measured signal of three times the standard deviation of the blank signal (3s_bl) is unlikely to arise by chance and is commonly taken as an approximate estimate of the LoD. This approach is usually adequate if all of the analytical results are well above this value. The value of s_bl used should be the standard deviation of the results obtained from a large number of batches of blank or low-level spike solutions. In addition, the approximation only applies to results that are normally distributed and are quoted with a level of confidence of 95%. [Pg.87]
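
A minimal sketch of the 3s_bl estimate, using invented blank readings purely to show the arithmetic:

    # Sketch: approximate limit of detection from repeat blank measurements.
    import numpy as np

    blank_signals = np.array([0.012, 0.015, 0.011, 0.014, 0.013,
                              0.016, 0.012, 0.010, 0.015, 0.013])  # invented blank readings
    s_bl = blank_signals.std(ddof=1)               # standard deviation of the blank signal
    lod_signal = blank_signals.mean() + 3 * s_bl   # signal level taken as the approximate LoD

    print(f"s_bl = {s_bl:.4f}, approximate LoD signal = {lod_signal:.4f}")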

Bringing processes into or near a state of statistical control improves them by making them less variable and centering them closer to target, allowing the manufacturer to make a product that more consistently meets product specifications. This benefits both the manufacturer and the consumers who use the products. SPC methods for evaluating and improving processes can be applied not only to product characteristics such as tablet weight and tablet hardness, but also to product performance measures such as consumer complaints, line downtime, and industrial safety measurements. An SPC approach to process improvement can also lead to reductions in fill overages, waste, and batch failures. By eliminating special-cause variability, it becomes easier to monitor a process to ensure that new special causes do not find their way into it. [Pg.3508]

It is also possible to carry out a probabilistic analysis for subsea pipelines in relation to the requirements of fitness for service or life extension. This approach is supported by Hopkins et al. (2001), where the uncertainty values in relation to the inspection data, material strength, and so forth are accounted for. The statistics of the input parameters and the engineering models then determine, by probabilistic analysis, the failure probability for each failure mode or mechanism and the variation of the failure probability over time. Once the uncertainty of each input value has been described statistically, a Monte Carlo simulation can be used to predict the growth rate of known defects over time. [Pg.10]
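
A minimal sketch of such a Monte Carlo prediction follows; the linear growth model, the failure criterion, and every distribution are illustrative assumptions rather than values or methods from Hopkins et al. (2001):

    # Sketch: Monte Carlo estimate of how the failure probability of a known
    # defect grows over time as the defect deepens.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000                                      # Monte Carlo samples

    wall = rng.normal(14.0, 0.5, n)                  # wall thickness, mm (with measurement uncertainty)
    depth0 = rng.normal(3.0, 0.8, n).clip(min=0.0)   # inspected defect depth, mm
    growth = rng.lognormal(np.log(0.15), 0.4, n)     # corrosion growth rate, mm/year

    for years in (5, 10, 15, 20):
        depth = depth0 + growth * years              # predicted defect depth after 'years'
        pof = np.mean(depth > 0.8 * wall)            # illustrative criterion: 80% of wall penetrated
        print(f"{years:2d} years: estimated failure probability = {pof:.4f}")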

It is assumed in preventive maintenance that preventive repair or replacement of items would be appropriately timed if they were to occur just prior to their failure. In most preventive maintenance approaches, the estimation of the useful life of an item is replaced by the estimation of its reliability at any given point during its life, i.e., the ability of the item to perform its required function for a given time interval. In order to do so, statistical distributions are used to describe the stochastic failure behavior of the items. Among the most commonly used statistical distributions... [Pg.820]
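
The excerpt breaks off here. As an example of one distribution commonly used for this purpose, the sketch below evaluates a Weibull reliability function with hypothetical shape and scale parameters:

    # Sketch: reliability of an item over a given interval under a Weibull failure model.
    import numpy as np

    beta, eta = 2.5, 10_000.0      # shape (wear-out when > 1) and characteristic life in hours (hypothetical)

    def reliability(t):
        """Probability that the item survives to time t: R(t) = exp(-(t/eta)**beta)."""
        return np.exp(-(t / eta) ** beta)

    for t in (1_000, 5_000, 9_000):
        print(f"R({t} h) = {reliability(t):.3f}")

    # Reliability over the next 1,000 h given survival to 5,000 h (conditional reliability).
    print(f"R(5000 -> 6000 | survived 5000) = {reliability(6_000) / reliability(5_000):.3f}")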

Monte Carlo is often the technique of last resort. It is used when the probability distributions are too arbitrary or when the system is too large and complex to be modeled and computed by other approaches. The Monte Carlo approach is typically used when failure rates are time-varying, and it can also be used when maintenance is irregular or imperfect. The last subsection on statistics gave a short presentation of the Monte Carlo approach. [Pg.2274]

Turning to 90% coverage, the test sample would now need to exceed 20 failures (for reasonable statistical significance) and the FMEA would require a more detailed approach. In both cases the cost and time become more significant. An FMEA as illustrated in Appendix 4 is needed and might well involve three to four man-days. [Pg.65]

The Crow-AMSAA model is a statistical model that uses the Weibull failure rate function to describe the relationship between accumulated time to failure and test time; it is a non-homogeneous Poisson process model. This approach is applied to demonstrate the effect of corrective and preventive actions on reliability while a product is being developed, or for repairable systems during the operation phase (Crow, 2012). Thus, whenever an improvement is implemented during testing (test-fix-test) or during maintenance, the Crow-AMSAA model is appropriate for predicting reliability growth and the expected cumulative number of failures. The expected cumulative number of failures is mathematically represented by the following equation ... [Pg.227]
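
The equation itself is truncated in the excerpt. In the usual Crow-AMSAA (non-homogeneous Poisson process) formulation, the expected cumulative number of failures by test time t is N(t) = λ·t^β; the sketch below evaluates this form with hypothetical parameter values, not values fitted to data from the cited work:

    # Sketch: Crow-AMSAA expected cumulative failures and failure intensity.
    lam, beta = 0.45, 0.62          # scale and growth parameters (hypothetical; beta < 1 implies growth)

    def expected_failures(t):
        return lam * t ** beta      # N(t): expected cumulative failures by test time t

    def failure_intensity(t):
        return lam * beta * t ** (beta - 1.0)   # instantaneous failure intensity, dN/dt

    for t in (100.0, 500.0, 1_000.0):
        print(f"t = {t:6.0f} h   N(t) = {expected_failures(t):5.1f}   intensity = {failure_intensity(t):.4f}/h")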

In the approach proposed by the authors, a flow item is modeled as an object in which labels describing two statistical distributions are defined. The first distribution defines the MTBF (the time until the next failure occurs) and the second the MTTR (the time required to repair the damage). These labels are defined and set at the moment the flow item is created (i.e., when it enters the analyzed system). Because each object is created dynamically, and its MTBF and MTTR labels are assigned at creation time during the simulation experiment, the approach differs from the standard solutions used in traditional simulation programs. [Pg.2088]
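
A minimal sketch of the idea follows; the class design and the choice of exponential and lognormal distributions are assumptions for illustration, not the authors' implementation:

    # Sketch: a flow item that carries its own failure and repair behaviour,
    # with the distribution parameters fixed at the moment the item is created.
    import math
    import random

    class FlowItem:
        def __init__(self, mtbf_mean, mttr_median, mttr_sigma):
            # Labels are set once, when the item enters the analysed system.
            self.mtbf_mean = mtbf_mean
            self.mttr_median = mttr_median
            self.mttr_sigma = mttr_sigma

        def next_time_to_failure(self):
            return random.expovariate(1.0 / self.mtbf_mean)   # draw from the MTBF distribution

        def next_repair_time(self):
            # lognormvariate takes the mean and sigma of the underlying normal distribution
            return random.lognormvariate(math.log(self.mttr_median), self.mttr_sigma)

    item = FlowItem(mtbf_mean=120.0, mttr_median=1.5, mttr_sigma=0.4)
    print(item.next_time_to_failure(), item.next_repair_time())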

To circumvent the coverage problem, some approaches advocate continuously monitoring the used stack during normal operation of the system for a given period of time, aiming at a reliability metric based on the time spent on measurements. However, in contrast to hardware metrics, the results are inconclusive, since there is no indication of how often a specific execution path has been exercised during the observation period, or whether it has been exercised at all. As a consequence, for software-based systems no statistical failure rates are available that are comparable to those used for hardware... [Pg.204]

These examples demonstrate that for complicated products or processes, 3σ quality is no longer adequate, and there is no place for failure. These considerations and economic pressures have motivated the development of the six sigma approach (Pande et al., 2000). The statistical motivation for this approach is based on the properties of the normal distribution. Suppose that a product quality variable x is normally distributed, N(x̄, σ). As indicated on the left portion of Fig. 21.7, if the product specifications are x̄ ± 6σ, the product will meet the specifications 99.9999998% of the time. Thus, on average, there will only be two defective products for every billion produced. Now suppose that the process operation changes so that the mean value is shifted from x = x̄ to either x = x̄ + 1.5σ or x = x̄ - 1.5σ, as shown on the right side of Fig. 21.7. Then the product specifications will still be satisfied 99.99966% of the time, which corresponds to 3.4 defective products per million produced. [Pg.421]
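
The quoted percentages can be reproduced directly from the normal cumulative distribution function, as in the following sketch:

    # Sketch: reproduce the six-sigma defect rates from the normal CDF.
    from scipy.stats import norm

    # Centred process, specifications at +/- 6 sigma:
    p_centred = norm.cdf(6) - norm.cdf(-6)
    print(f"centred process: {p_centred:.10%} in spec "
          f"(~{(1 - p_centred) * 1e9:.1f} defects per billion)")

    # Mean shifted by 1.5 sigma; the nearer specification limit is now only 4.5 sigma away:
    p_shifted = norm.cdf(4.5) - norm.cdf(-7.5)
    print(f"1.5-sigma shift: {p_shifted:.5%} in spec "
          f"(~{(1 - p_shifted) * 1e6:.1f} defects per million)")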

