Big Chemical Encyclopedia


Error probability

The distribution of the t-statistic (x̄ − μ)/s is symmetrical about zero and is a function of the degrees of freedom. Limits assigned to the distance on either side of μ are called confidence limits. The percentage probability that μ lies within this interval is called the confidence level. The level of significance or error probability (100 − confidence level) is the percent probability that μ will lie outside the confidence interval, and represents the chances of being incorrect in stating that μ lies within the confidence interval. Values of t are given in Table 2.27 for any desired degrees of freedom and various confidence levels. [Pg.198]
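The relationship above can be sketched numerically. The following Python fragment (with made-up replicate measurements, purely illustrative) uses SciPy's Student-t distribution to compute the confidence limits for μ at a chosen confidence level:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (illustrative values only).
data = np.array([10.2, 10.5, 9.8, 10.1, 10.4])
n = data.size
mean = data.mean()
s = data.std(ddof=1)            # sample standard deviation
sem = s / np.sqrt(n)            # standard error of the mean

confidence = 0.95               # 95% confidence level -> 5% error probability
alpha = 1 - confidence
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # two-sided critical t value

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"mu lies in [{lower:.3f}, {upper:.3f}] at the {confidence:.0%} level")
```

The error probability (here 5 percent) is the chance that μ falls outside the printed interval.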

Cullinan presented an extension of Cussler's cluster diffusion theory. His method accurately accounts for composition and temperature dependence of diffusivity. It is novel in that it contains no adjustable constants, and it relates transport properties and solution thermodynamics. This equation has been tested for six very different mixtures by Rollins and Knaebel, and it was found to agree remarkably well with data for most conditions, considering the absence of adjustable parameters. In the dilute region (of either A or B), there are systematic errors probably caused by the breakdown of certain implicit assumptions (that nevertheless appear to be generally valid at higher concentrations). [Pg.599]

High Fins To calculate heat-transfer coefficients for cross-flow to a transversely finned surface, it is best to use a correlation based on experimental data for that surface. Such data are not often available, and a more general correlation must be used, making allowance for the possible error. Probably the best general correlation for bundles of finned tubes is given by Schmidt [Kältetechnik, 15, 98-102, 370-378 (1963)] ... [Pg.1052]

Human error probabilities can also be estimated using methodologies and techniques originally developed in the nuclear industry. A number of different models are available (Swain, Comparative Evaluation of Methods for Human Reliability Analysis, GRS Project RS 688, 1988). This estimation process should be done with great care, as many factors can affect the reliability of the estimates. Methodologies using expert opinion to obtain failure rate and probability estimates have also been used where there is sparse or inappropriate data. [Pg.2277]

Random Measurement Error Third, the measurements contain significant random errors. These errors may be due to sampling technique, instrument calibrations, and/or analysis methods. The error-probability-distribution functions are masked by fluctuations in the plant and cost of the measurements. Consequently, it is difficult to know whether, during reconciliation, 5 percent, 10 percent, or even 20 percent adjustments are acceptable to close the constraints. [Pg.2550]

Phase three of a typical HRA begins with developing human error probabilities that can be applied to the selected model. In some cases, a set of nominal human errors can be derived from plant data; however, due to the sparseness and low confidence of these data, industry generic information may be used. Chapter 20 of NUREG/CR-1278 includes a typical set of such data. [Pg.175]

The human error probabilities estimated for a given task can now be modified to reflect the actual performance situation. For example, if the labeling scheme at a particular plant is very poor, the probability should be increased towards an upper bound. If the tagging control system at a plant is particularly good, the probability for certain errors should be decreased toward a lower bound. [Pg.175]

In any given situation, there may be different levels of dependence between an operator's performance on one task and on another because of the characteristics of the tasks themselves, or because of the manner in which the operator was cued to perform the tasks. Dependence levels between the performances of two (or more) operators also may differ. The analyses should account for dependency in human-error probabilities. In addition, each sequence may have a set of human recovery actions that, if successfully performed, will terminate or reduce the consequences of the sequence. This information, coupled with a knowledge of the system success criteria, leads to the development of human success and failure probabilities which are input to the quantification of the fault trees or event trees. With this last step, the HRA is integrated into the PSA, and Phase 4 is complete. [Pg.175]

THERP (NUREG/CR-1278) is used to estimate HEPs for a risk assessment. It provides error probabilities for generic tasks and describes the process used to modify these rates depending on the specific performance shaping factors (PSFs) involved in the task... [Pg.178]

List of the generic human error probabilities used to determine a base error rate for each human error considered, and... [Pg.178]

Table 4.5-11 Sample of NUCLARR Human Error Probability Data (NUREG/CR-4639)...
The development of the HRA event tree is one of the most critical parts of the quantification of human error probabilities. If the task analysis lists the possible human error events in the order of their potential occurrence, the transfer of this information to the HRA event tree is facilitated. Each potential error and success is represented as a binary branch on the HRA event tree, with subsequent errors and successes following directly from the immediately preceding ones. Care should be taken not to omit the errors that are not included in the task analysis table but might affect the probabilities listed in the table. For example, administrative control errors that affect a task being performed may not appear in the task analysis table but must be included in the HRA event tree. [Pg.181]

Human reliability data: NUREG/CR-1278 was supplemented by the judgment of system analysts and plant personnel. Human error probabilities were developed from NUREG/CR-1278, human action time windows from system analysis, and some recovery times from analysis of plant-specific experience. Data sources were WASH-1400 HEPs, the Fullwood and Gilbert assessment of US power reactor experience, NUREG/CR-1278, and selected aerospace data... [Pg.182]

Operator error probabilities were estimated using NUREG/CR-4910 normalized to errors determined in the internal events analysis. This allowed for varying number of personnel, amount of time available, and stress level. When more pessimistic values were substituted for best estimate values, the calculated core melt frequency increased by a factor of at least three. [Pg.419]

Operator error probability under stressful conditions depends upon the time to complete a sequence of events. The total time available for limiting sequences was 7.9 hours, which is the time to completely drain the cooling water basin. Sequence (6) required the most operator actions (11) was... [Pg.419]

Voska, K. J. and J. N. O'Brien, Human Error Probability Estimation Using Licensee Event Reports, BNL, July 1984. [Pg.470]

Reece, W. J. et al., Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR), Part 2: Human Error Probability Data (HEP), 1994. [Pg.470]

Wreathall, J., 1982, Operator Action Trees, An Approach to Quantifying Operator Error Probability during Accident Sequences, NUS Report 4655, July. [Pg.492]

Chapter 5, Quantitative and Qualitative Prediction of Human Error in Safety Assessments, describes a systematic process for identifying and assessing the risks from human error, together with techniques for quantifying human error probabilities. [Pg.2]

Chapter 4 focuses on techniques which are applied to a new or existing system to optimize human performance or qualitatively predict errors. Chapter 5 shows how these techniques are applied to risk assessment, and also describes other techniques for the quantification of human error probabilities. Chapters 6 and 7 provide an overview of techniques for analyzing the underlying causes of incidents and accidents that have already occurred. [Pg.3]

Quantify error probabilities for these error modes using methods described in Chapter 5. [Pg.84]

While OAETs are best used for the qualitative insights that are gained, they can also be used as a basis for the quantitative assessment of human reliability. By assigning error probabilities to each node of the event tree and then multiplying these probabilities, the probability of each event state can be evaluated (see Chapter 5). [Pg.169]
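As a minimal sketch of this multiplication rule (the two operator actions and their probabilities below are invented for illustration, not taken from any data source), each end state of the event tree is quantified by multiplying the branch probabilities along its path:

```python
# Hypothetical two-node operator-action event tree.
hep_detect = 0.01     # operator fails to detect the alarm (invented HEP)
hep_diagnose = 0.05   # operator misdiagnoses, given successful detection

# Multiply branch probabilities along each path through the tree.
p_success       = (1 - hep_detect) * (1 - hep_diagnose)  # both actions succeed
p_fail_detect   = hep_detect                             # path ends at first failure
p_fail_diagnose = (1 - hep_detect) * hep_diagnose

# In a well-formed event tree the end-state probabilities sum to 1.
total = p_success + p_fail_detect + p_fail_diagnose
print(p_success, p_fail_detect, p_fail_diagnose, total)
```

The same arithmetic extends to trees of any depth: every node contributes either its HEP or its complement (the success probability) to the product for a given end state.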

These PIFs represent the major factors deemed by the analyst to influence error probability for the operations (coupling hoses, opening and closing valves) and planning activities being carried out within the tasks analyzed at... [Pg.217]

In numerical terms, the probability of each failure state is given by the following expressions (where SP is the success probability and HEP the human error probability at each node) ... [Pg.222]

Because most research effort in the human reliability domain has focused on the quantification of error probabilities, a large number of techniques exist. However, a relatively small number of these techniques have actually been applied in practical risk assessments, and even fewer have been used in the CPI. For this reason, only three techniques will be described in detail in this section. More extensive reviews are available from other sources (e.g., Kirwan et al., 1988; Kirwan, 1990; Meister, 1984). Following a brief description of each technique, a case study will be provided to illustrate the application of the technique in practice. As emphasized in the early part of this chapter, quantification has to be preceded by a rigorous qualitative analysis in order to ensure that all errors with significant consequences are identified. If the qualitative analysis is incomplete, then quantification will be inaccurate. It is also important to be aware of the limitations of the accuracy of the data generally available... [Pg.222]

When the decomposition approach is used, it is necessary to represent the way in which the various task elements and other possible failures are combined to give the failure probability of the task as a whole. Generally, the most common form of representation is the event tree (see Section 5.7). This is the basis for THERP, which will be described in the next section. Fault trees are only used when discrete human error probabilities are combined with hardware failure probabilities in applications such as CPQRA (see Figure 5.2). [Pg.226]

Error probabilities that are used in decomposition approaches are all derived in basically the same manner. Some explicit or implicit form of task classification is used to derive categories of tasks in the domain addressed by the technique. For example, typical THERP categories are selections of switches from control panels, walk-around inspections, responding to alarms and operating valves. [Pg.226]

Modify the value obtained from the previous stage to reflect possible dependencies among error probabilities assigned to individual steps in the task being evaluated. A dependence model is provided which allows for levels of dependence from complete dependence to independence to be modeled. Dependence could occur if one error affected the probability of subsequent errors, for example if the total time available to perform the task was reduced. [Pg.229]
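The THERP dependence model referred to here maps a nominal HEP to a conditional HEP given failure on the immediately preceding task. The standard NUREG/CR-1278 equations for the five dependence levels can be sketched as:

```python
def conditional_hep(nominal_hep: float, level: str) -> float:
    """Conditional HEP given failure on the preceding task, using the
    THERP dependence equations (NUREG/CR-1278) for the five levels."""
    p = nominal_hep
    equations = {
        "zero":     p,                    # full independence
        "low":      (1 + 19 * p) / 20,
        "moderate": (1 + 6 * p) / 7,
        "high":     (1 + p) / 2,
        "complete": 1.0,                  # failure on task A guarantees failure on B
    }
    return equations[level]

# Even low dependence raises a small nominal HEP substantially:
print(conditional_hep(0.01, "low"))   # 0.0595 vs. a nominal 0.01
```

Note how the model compresses toward 1.0 as dependence increases: under high dependence even a near-zero nominal HEP yields a conditional HEP of at least 0.5.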

Combine the modified probabilities to give the overall error probabilities for the task. The combination rules for obtaining the overall error probabilities follow the same addition and multiplication processes as for standard event trees (see last section). [Pg.229]

INTEGRATION WITH HARDWARE ANALYSIS. The error probabilities obtained from the quantification procedure are incorporated in the overall system fault trees and event trees. [Pg.229]

ERROR REDUCTION STRATEGIES. If the error probability calculated by the above procedures leads to an unacceptable overall system failure probability, then the analyst will reexamine the event trees to determine if any PIFs can be modified or task structures changed to reduce the error probabilities to an acceptable level. [Pg.229]

The first stage is to group together operations that are likely to be influenced by the same PIFs. The four operations in the above set all involve physical actions for which there is no immediate feedback when incorrectly performed. Two of the operations, 4.1.3 and 4.4.2, are noted in Figure 5.8 as having significant consequences if they occur. It is legitimate to assume, therefore, that the error probability will be determined by the same set of PIFs for all the operations in this set. [Pg.235]


