Big Chemical Encyclopedia


Hardware reliability

From a reliability engineering perspective, error can be defined by analogy with hardware reliability as "The likelihood that the human fails to provide a required system function when called upon to provide that function, within a required time period" (Meister, 1966). This definition does not contain any references to why the error occurred, but instead focuses on the consequences of the error for the system (loss or unavailability of a required function). The disadvantage of such a definition is that it fails to consider the wide range of other actions that the human might make, which may have other safety implications for the system, as well as not achieving the required function. [Pg.39]

It was concluded from the analysis that blowdown response time was affected more by the attitude of the platform personnel toward the system than by the reaction times of the system components. Therefore the implementation of semi- or fully automatic blowdown on the platforms would not necessarily enhance performance unless the workers had support in terms of philosophy, training, procedures, and hardware reliability. [Pg.342]

New Drop Test Fixture. A (patented) test tower is claimed to keep explosive hardware reliably in the five attitudes required by the MIL-STD-331 drop test (nose-up, nose-down, horizontal, nose 45° up and nose 45° down). The hardware is kept in position by having the fixture fall with it for 32 of the 40 feet. Refs: 1) Atlas/Aerospace, No 8, Dec 1969, Atlas Chemical Industries, Inc., Valley Forge, Pa, 1948 2) E P 3(2)(1970), p3... [Pg.449]

Before your purchase, take the time to call some other people who already have this model of integrator. Ask a few pertinent questions about hardware reliability, maintenance, and service. You may wish to reconsider your choice if you find that the printer mechanism is a constant maintenance problem, that you have to ship the unit cross-country paying insured postage rates both ways, or that local service is available but unreliable. [Pg.436]

As a general rule the initial provision was of UK-manufactured machines. The smaller ones were comparable to the IBM 709 and the larger ones were of roughly IBM 7090 or 7094 power. It would probably be agreed now that most of the machines provided were not wonderfully satisfactory, either in terms of hardware reliability or of software provision. But by the late 1960s most of them had been made to work satisfactorily. But, alas, this was just as their manufacturers were going out of business. After 1968, whatever UK computer a university had, it had to deal with ICL, which had taken over all the other UK computer manufacturers. Thus, to add to the users' anguish over the machines themselves, there was the difficulty of dealing with an essentially monopoly supplier. [Pg.289]

The second situation is the simplest one, and it is similar to basic hardware reliability theory. In this case, the failure intensity can be modeled as constant for every release of the software. In the first case, however, the failure intensity cannot be modeled as constant; it is a function of how many failures have been removed. This is a major difference from basic hardware reliability theory, where components are not improved every time they are replaced. [Pg.326]
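The contrast between the two cases can be sketched numerically. The Jelinski-Moranda growth form used for the software case, and all numeric values, are illustrative assumptions, not taken from the excerpt:

```python
# Two failure-intensity models for the situations described above.

def constant_intensity(lam, failures_removed):
    """Hardware-like case: replacement does not improve the component,
    so the failure intensity is constant within a release."""
    return lam

def growth_intensity(phi, n_initial, failures_removed):
    """Software case (Jelinski-Moranda form): each removed fault lowers
    the intensity, lambda = phi * (faults remaining)."""
    remaining = max(n_initial - failures_removed, 0)
    return phi * remaining

# Intensity after 0, 5 and 10 fault removals (assumed rates):
for removed in (0, 5, 10):
    print(removed,
          constant_intensity(0.5, removed),
          growth_intensity(0.05, 10, removed))
```

The hardware-like intensity stays at 0.5 regardless of how many failures occurred, while the software intensity falls toward zero as faults are removed, which is the difference the excerpt emphasizes.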

An easy-to-use functional model was developed in order to analyze both systems in the same way; it is linked to the hardware reliability. The hardware reliability is predicted with a reliability prognosis model developed by the University of Wuppertal and a major automotive supplier (Refs. 1 to 5). [Pg.1469]

It is not easy to determine the functional failure behaviour directly, because the loss of a function is usually not recorded. In this article, an easy-to-handle functional analysis is presented. Using a V-model and the component reliabilities, the functions can be modeled and the functional failure probability can be determined from the hardware reliability. Afterwards, a lifetime distribution is fitted to the functional failure distribution so that the function can be described and compared by means of this distribution. [Pg.1469]

At the beginning, regular hardware reliability progress meetings were organised between the Customer and the Main Supplier to discuss and resolve reliability weaknesses that had occurred in the field. Later, these meetings were extended to software problems, and finally they were defined as PPMs covering all relevant operational, functional and RAM performances. [Pg.2182]

A great deal of progress has been made toward improving and evaluating the reliability of hardware systems; however, the place where systems most frequently fail is at the interface between humans and the system. Human reliability is generally much lower and more difficult to control than hardware reliability. [Pg.135]

Explain why human reliability is generally much lower and much more difficult to predict than hardware reliability. [Pg.145]

The sweeping transformation of the British ESI that began in 1988 for the purpose of privatizing the electricity sector and increasing market efficiency had substantial impacts on hardware reliability, human factors, and safety regulation in the U.K. nuclear power industry. Those impacts are each briefly discussed below. [Pg.166]

A final consideration in risk projection is the important role of the human in risk evaluation and projections. Overall system reliability is totally dependent on the humans who design, install, operate, and maintain the systems. The high degree of variability in both human performance and the conditions under which human reliability data have been obtained in the past means that sophisticated analysis is necessary to obtain valid human reliability data. Such validity is necessary because these data represent as important an input to the risk projection models as the hardware reliability data. [Pg.610]

Analysis, in this context, covers any proof of requirements satisfaction that is obtained from the design or other representation of the product, including models, prototypes, software source code etc. It includes, for example, simulation, formal proof, hardware reliability prediction, inspection, and software static and dynamic code analysis. [Pg.119]

Human error quantification is one of the most bitterly disputed areas of risk analysis. Risk analysts starting from the reasonably successful experience of quantifying hardware reliability try to treat human reliability in the same way, so that it can be integrated into their fault tree and event tree analyses. Psychologists doubt the possibility of doing this because ... [Pg.263]

A major issue in software is the independence of different versions of the same program. Initially, software reliability was treated similarly to hardware reliability, but software is now recognized as having different characteristics. The main difference is that hardware has components that are used over and over (tried and true), whereas most software is custom. [Pg.2271]

The definition of software reliability is deliberately similar to that of hardware reliability in order to compare and assess the reliability of systems composed of hardware and software components. One should note, however, that in some environments and for some applications, such as scientific applications, time (and more precisely time between failures) is an irrelevant concept and should be replaced with a specified number of runs. The concept of software reliability is, for some, difficult to understand. For software engineers and developers, software has deterministic behavior, whereas hardware behavior is partly stochastic. Indeed, once a set of inputs to the software has been selected, and once the computer and operating system with which the software will run are known, the software will either fail or execute correctly. However, our knowledge of the inputs selected, of the computer, of the operating system, and of the nature and position of the fault is uncertain. We can translate this uncertainty into probabilities, hence the notion of software reliability as a probability. [Pg.2296]
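Where a number of runs replaces time, per-run reliability can be estimated as a simple proportion of failure-free runs. The function names and figures below are illustrative assumptions, not from the excerpt:

```python
def reliability_per_run(successful_runs, total_runs):
    """Point estimate of per-run reliability: the fraction of observed
    runs that completed without failure."""
    if total_runs == 0:
        raise ValueError("no runs observed")
    return successful_runs / total_runs

def prob_n_runs_without_failure(r, n):
    """Assuming independent runs with per-run reliability r, the
    probability that n consecutive runs all succeed is r**n."""
    return r ** n

# Example: 98 failure-free runs out of 100 observed (assumed data).
r = reliability_per_run(98, 100)
print(r, prob_n_runs_without_failure(r, 10))
```

This is the run-based analogue of time-based reliability: `r**n` plays the role that the time-between-failures distribution plays when time is the relevant exposure measure.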

Alternatively, for demand mode safety functions, the hardware safety requirement can be expressed in terms of PFDavg. This allows credit to be taken for periodic proof testing, thereby relaxing the hardware reliability requirement. However, to take advantage of this approach it is necessary to use more elaborate models to estimate PFDavg(achieved) than are used in the low demand region. Such models are described elsewhere (Sato, 1999). [Pg.127]
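The benefit of proof testing can be seen even in the simplest low-demand approximation for a single (1oo1) channel, PFDavg ≈ λ_DU·T/2. This is a textbook simplification, not the more elaborate models the excerpt refers to, and the failure rate and intervals below are assumed values:

```python
def pfd_avg_1oo1(lambda_du_per_hour, proof_test_interval_hours):
    """Simplified single-channel approximation: PFDavg ~ lambda_DU * T / 2.
    Halving the proof-test interval halves the average PFD, which is how
    periodic proof testing relaxes the hardware reliability requirement."""
    return lambda_du_per_hour * proof_test_interval_hours / 2.0

# Assumed dangerous-undetected rate of 1e-6 per hour:
annual = pfd_avg_1oo1(1e-6, 8760)      # yearly proof test
quarterly = pfd_avg_1oo1(1e-6, 2190)   # quarterly proof test
print(annual, quarterly)
```

With a yearly proof test the average PFD is about 4.4e-3; testing quarterly brings it near 1.1e-3, i.e. the same hardware can satisfy a tighter PFDavg target purely through more frequent proof testing.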

Reliability is defined as "the probability of performing the intended purpose adequately for the period of time intended under the operating conditions encountered". Neglecting any human contribution, the overall system reliability can be considered as the product of the hardware reliability and the software reliability. Hardware reliability is determined statistically and is related to random failure of individual components. To ensure that any... [Pg.250]

A standard method of increasing hardware reliability is to introduce redundancy. This gives the possibility of creating a more reliable system from less reliable components. Consider a very simple example where two components are connected in series as shown in Figure 10.1. Intuitively we can see that if either component 1 or component 2 fails then the whole system will fail. Mathematically, if the reliabilities of the two components are given by R1 and R2, then the total reliability is given by the product R = R1 × R2.
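The series case above, and the redundant (parallel) case that motivates it, can be sketched in a few lines; the 0.9 reliabilities are assumed values for illustration:

```python
from functools import reduce

def series_reliability(*component_reliabilities):
    """A series system fails if any component fails: R = R1 * R2 * ..."""
    return reduce(lambda acc, r: acc * r, component_reliabilities, 1.0)

def parallel_reliability(*component_reliabilities):
    """A redundant (parallel) system fails only if every component fails:
    R = 1 - (1 - R1)(1 - R2)..."""
    q_all_fail = reduce(lambda acc, r: acc * (1 - r),
                        component_reliabilities, 1.0)
    return 1 - q_all_fail

print(series_reliability(0.9, 0.9))    # ~0.81: series is worse than either part
print(parallel_reliability(0.9, 0.9))  # ~0.99: redundancy is better than either part
```

This is the arithmetic behind the excerpt's point: putting two 0.9 components in series drags reliability down to about 0.81, while duplicating them in parallel lifts it to about 0.99, a more reliable system from less reliable components.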

Regardless of the hardware reliability calculated for the design, the Standard specifies minimum levels of redundancy coupled with given levels of fault tolerance (described by the SFF). This can be estimated as shown in Appendix 4. [Pg.63]
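The safe failure fraction (SFF) mentioned above is defined in IEC 61508 as the fraction of the total failure rate that is either safe or dangerous-but-detected. A minimal sketch, with assumed failure rates purely for illustration:

```python
def safe_failure_fraction(lambda_safe, lambda_dd, lambda_du):
    """SFF = (lambda_S + lambda_DD) / (lambda_S + lambda_DD + lambda_DU):
    the share of all failures that are either safe or dangerous-but-detected
    by diagnostics. Only dangerous-undetected failures count against it."""
    total = lambda_safe + lambda_dd + lambda_du
    return (lambda_safe + lambda_dd) / total

# Assumed per-hour rates: safe 5e-7, dangerous-detected 3e-7,
# dangerous-undetected 2e-7.
print(safe_failure_fraction(5e-7, 3e-7, 2e-7))
```

Here the SFF comes out near 0.8; in the standard's architectural-constraint tables a higher SFF permits a given SIL to be claimed with less hardware fault tolerance.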

This example provides a Safety Integrity Level (SIL) assessment of the proposed flood gate control system (FGCS) at a hydro-electric dam, demonstrating that it meets the identified hardware reliability and minimum configuration requirements in accordance with IEC 61508. [Pg.253]

The results of the assessment (Table 16.5) demonstrate that, based on the assumptions, the specified SIFs meet the hardware reliability and architectural requirements of the targets identified by the LOPA. [Pg.255]

Faults in the software executed by the FACTS devices can lead to failures that can affect the performance of the power grid. Here, we focus on failures in software, rather than hardware, since hardware reliability is a well studied area and hardware failures can be mitigated by redundancy. This section extends the analysis of the previous section to three non-trivial failure modes of FACTS devices. A system reliability model is developed for each failure mode. [Pg.263]


See other pages where Hardware reliability is mentioned: [Pg.168]    [Pg.327]    [Pg.63]    [Pg.301]    [Pg.167]    [Pg.215]    [Pg.2257]    [Pg.2281]    [Pg.132]    [Pg.133]    [Pg.28]    [Pg.12]    [Pg.30]    [Pg.61]    [Pg.158]    [Pg.160]    [Pg.426]    [Pg.283]    [Pg.260]    [Pg.260]    [Pg.264]   
See also in source #XX -- [Pg.251]



