Big Chemical Encyclopedia


Probability assessment systems

The human health and environmental factors are then multiplied by the exposure potential, which includes parameters expressing biological oxygen demand half-life, hydrolysis half-life, and an aquatic bioconcentration factor. It is felt that this system is probably one of the better impact assessment systems available today because it assigns impact values based on quantitative scientific data rather than on subjective concern over a chemical, which is often rooted in perception rather than evidence. On the other hand, the bioaccumulation and persistence factors have already been shown to be of limited relevance to metals per se. In the future, alternative evaluation criteria, such as the solubility and transformation characteristics of metals and metal compounds, and models such as the biotic ligand model, will probably prove much more appropriate for evaluating the human health and environmental impacts of battery metals. [Pg.29]
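
A rough sketch of this multiplicative scoring logic is given below; all factor names, scales, and weights are invented for illustration and are not the published scheme's actual parameters:

    # Hypothetical sketch of the impact-scoring scheme described above.
    # Factor names, 1-5 scales, and the aggregation rule are illustrative
    # assumptions, not the actual published weighting system.

    def exposure_potential(bod_half_life_days, hydrolysis_half_life_days, bcf):
        """Combine persistence and bioconcentration into one multiplier."""
        persistence = min(5.0, 1 + bod_half_life_days / 30)   # longer half-life -> higher score
        hydrolysis = min(5.0, 1 + hydrolysis_half_life_days / 30)
        bioconc = min(5.0, 1 + bcf / 1000)                    # aquatic bioconcentration factor
        return (persistence + hydrolysis + bioconc) / 3

    def impact_score(health_factor, environmental_factor, exposure):
        """Health and environmental factors are multiplied by exposure potential."""
        return (health_factor + environmental_factor) * exposure

    score = impact_score(health_factor=3.0, environmental_factor=2.0,
                         exposure=exposure_potential(60, 15, 500))
    print(f"impact score: {score:.2f}")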

One of the most important safety-related systems is the reactor trip system (RTS). RTS malfunction probability assessment is based on knowledge of malfunctions of its components and on reliability analysis of its functions. The solution of this task is described in Fuchs et al. (2007). Its input parameters are component failure data (see Table 1), and its output parameters are the malfunction probabilities of the RTS functions (see Table 2). [Pg.1110]
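
A minimal sketch of how a trip-function malfunction probability can be derived from component failure data, assuming a typical 2-out-of-3 voting architecture with identical, independent channels (the architecture and the per-channel failure probability are assumptions, not values from Fuchs et al. 2007):

    from math import comb

    def k_out_of_n_failure(p_fail, k, n):
        """Probability that a k-out-of-n voted function fails, assuming
        n identical, independent channels with failure probability p_fail."""
        p_ok = 1 - p_fail
        # The function succeeds if at least k channels work.
        p_success = sum(comb(n, m) * p_ok**m * p_fail**(n - m)
                        for m in range(k, n + 1))
        return 1 - p_success

    p_channel = 1e-3   # assumed per-channel failure probability per demand
    print(f"2-out-of-3 trip function failure: {k_out_of_n_failure(p_channel, 2, 3):.2e}")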

Figure 3.4 presents a fast learner who reaches the final performance level earlier than average (dotted, highest curve), a so-called slow starter who needs more time and shows an intermediate learning plateau (continuous, middle curve), and a learner who never reaches the required level and probably fails (striped, lowest curve). Ideally, the assessment system would reflect the kinds of learning curves depicted here, but our assessment system cannot produce these learning curves in the same way. [Pg.32]
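
A minimal sketch of the three curve shapes, using a saturating exponential with invented parameters (a real slow starter's intermediate plateau would need a richer model than this):

    import math

    def learning_curve(t, rate, plateau=1.0, delay=0.0):
        """Saturating learning curve: performance approaches `plateau`."""
        return plateau * (1 - math.exp(-rate * max(0.0, t - delay)))

    for t in range(0, 11, 2):
        fast = learning_curve(t, rate=0.8)                # fast learner
        slow = learning_curve(t, rate=0.25, delay=1.5)    # slow starter
        failing = learning_curve(t, rate=0.3, plateau=0.6)  # never reaches required level
        print(f"t={t:2d}  fast={fast:.2f}  slow={slow:.2f}  failing={failing:.2f}")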

Some risk assessment systems include numerical categories for probability and severity levels, and computations are made to arrive at a number that determines the risk level. Arriving at those numerical categories is entirely judgmental. Some of those numerical risk assessment systems are discussed in Chapter 10, Three- and Four-Dimensional Numerical Risk-Scoring Systems. ... [Pg.101]
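
A sketch of such a numerical scheme follows; as the text notes, the category values and thresholds below are purely judgmental placeholders, not a standard:

    # Illustrative numeric risk scoring; values and thresholds are
    # judgmental placeholders.
    SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
    PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

    def risk_score(severity, probability):
        return SEVERITY[severity] * PROBABILITY[probability]

    def risk_level(score):
        if score >= 9:
            return "high"
        if score >= 4:
            return "moderate"
        return "low"

    s = risk_score("critical", "occasional")
    print(s, risk_level(s))   # 9 high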

For this primer, two-dimensional risk assessment matrices are discussed. They are displays of variations for two categories of terms: the severity of harm or damage that could result from a hazards-related incident or exposure, and the probability that the incident or exposure could occur. They also show the risk levels that derive from the various combinations of severity and probability. A review of three- and four-dimensional risk assessment systems is given in Chapter 10, Three- and Four-Dimensional Numerical Risk-Scoring Systems. ... [Pg.117]
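
A minimal sketch of a two-dimensional matrix lookup; the severity and probability categories and the cell assignments are illustrative, not drawn from any particular standard:

    SEVERITIES = ["negligible", "marginal", "critical", "catastrophic"]
    PROBABILITIES = ["improbable", "remote", "occasional", "frequent"]

    # Illustrative cell assignments (rows: severity; columns: probability).
    MATRIX = [
        # improbable remote      occasional  frequent
        ["low",      "low",      "low",      "medium"],   # negligible
        ["low",      "low",      "medium",   "serious"],  # marginal
        ["low",      "medium",   "serious",  "high"],     # critical
        ["medium",   "serious",  "high",     "high"],     # catastrophic
    ]

    def risk_level(severity, probability):
        return MATRIX[SEVERITIES.index(severity)][PROBABILITIES.index(probability)]

    print(risk_level("critical", "occasional"))  # serious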

For non-carcinogens, MCLGs are determined by a three-step process. The first step is calculating the reference dose (RfD) for each specific contaminant. The RfD is an estimate of the amount of a chemical that a person can be exposed to on a daily basis that is not anticipated to cause adverse systemic health effects over the person's lifetime. A different assessment system is used for chemicals that are potential carcinogens. If toxicological evidence leads to the classification of the contaminant as a human or probable human carcinogen, the MCLG is set at zero (Boyce, 1997). [Pg.194]
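
The RfD arithmetic follows common EPA practice, RfD = NOAEL / (UF x MF); the sketch below uses a hypothetical NOAEL and conventional default adult exposure assumptions (70 kg body weight, 2 L/day water intake, 20% relative source contribution):

    def reference_dose(noael, uncertainty_factor, modifying_factor=1.0):
        """RfD (mg/kg/day) = NOAEL / (UF * MF), per common EPA practice."""
        return noael / (uncertainty_factor * modifying_factor)

    def mclg_non_carcinogen(rfd, body_weight_kg=70.0, water_intake_l=2.0, rsc=0.2):
        """Non-carcinogen MCLG (mg/L): the drinking-water equivalent level
        times the relative source contribution (RSC)."""
        dwel = rfd * body_weight_kg / water_intake_l
        return dwel * rsc

    rfd = reference_dose(noael=10.0, uncertainty_factor=100)   # hypothetical NOAEL
    print(f"RfD = {rfd} mg/kg/day, MCLG = {mclg_non_carcinogen(rfd):.3f} mg/L")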

Deciding the likelihood of a hazard causing injury or damage is difficult. Whatever system is chosen, it should be remembered that human beings make errors, whether they are the risk estimators or those at risk, so any probability assessment carries with it a degree of uncertainty. [Pg.183]

Step 3: Having defined the sublanguage for each state of the considered system, we can obtain first the probability for a sublanguage and second the probabilities for the event sequences belonging to this sublanguage. To illustrate the assessment of sequence probabilities, the results obtained for two sublanguages (L1 and L2) are presented in this paper. [Pg.222]
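
One hedged way to make the sequence-probability computation concrete is to model event generation as a Markov chain and sum sequence probabilities over the sublanguage; the transition structure below is invented, and the cited paper's actual method may differ:

    # Illustrative only: event sequences scored with an assumed Markov
    # model; the actual method in the cited work may differ.
    TRANSITIONS = {
        ("s0", "a"): ("s1", 0.7),
        ("s0", "b"): ("s2", 0.3),
        ("s1", "b"): ("s2", 0.9),
        ("s1", "a"): ("s1", 0.1),
    }

    def sequence_probability(events, state="s0"):
        """Probability of one event sequence as a product of transitions."""
        p = 1.0
        for e in events:
            state, pe = TRANSITIONS[(state, e)]
            p *= pe
        return p

    sublanguage = [("a", "b"), ("b",), ("a", "a", "b")]
    p_sub = sum(sequence_probability(seq) for seq in sublanguage)
    print(f"P(sublanguage) = {p_sub:.3f}")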

We have presented a new technique for quantitative analysis of the failure behaviour of systems, based on architectural models. The proposed technique enables the assessment of failure behaviour from the analysis of the components of the system, and can assess the probability of system-level failures based on component failures. The approach has been connected to a probabilistic model checker, which allows verification of the failure models and also helps to calculate input and output token sets and to explore the model. Importantly, the approach is compositional and can be applied to individual components and to collections of them. [Pg.227]
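
A simplified sketch of compositional probability assessment, assuming independent component failures (a strong assumption that real architectural analyses must justify):

    from functools import reduce

    def series_failure(probs):
        """System fails if any independent component fails (OR composition)."""
        return 1 - reduce(lambda acc, p: acc * (1 - p), probs, 1.0)

    def parallel_failure(probs):
        """System fails only if all redundant components fail (AND composition)."""
        return reduce(lambda acc, p: acc * p, probs, 1.0)

    # Hypothetical component failure probabilities.
    sensor_chain = series_failure([1e-3, 5e-4])    # sensor OR filter fails
    redundant = parallel_failure([1e-2, 1e-2])     # both actuators fail
    print(f"system failure ~ {series_failure([sensor_chain, redundant]):.2e}")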

Accident modelling, consequence evaluation and assessment of probabilities. Here platform design and operational modes serve as important input. Probability assessments are based on generic accident statistics, averaged over different types of platform design, barrier types and safety management systems. [Pg.323]

Using the idea of possible perfection has two ramifications for a safety case. One is that the upper-level assessment of the probability of system failure will employ probabilities of software perfection; the other is that the subcase concerned with software must consider the possibility (and probabilities) of its own imperfections. These are likely to be smaller when parts of the case, particularly any verifications and analyses, are formalized and subject to mechanical checking. I suggest considerations for the assessment of these probabilities in a recent paper (Rushby 2009b). [Pg.15]
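
The quantitative idea, in the spirit of Rushby's formulation, is that the probability of failure on demand is bounded by the probability that the software is imperfect times the probability that it fails given imperfection; the numbers below are illustrative only:

    # Illustrative numbers only: bound the probability of failure on
    # demand using an assessed probability of software imperfection.
    p_imperfect = 1e-4            # assessed probability the software is not perfect
    p_fail_given_imperfect = 0.1  # conservative assumption; using 1.0 always holds

    p_failure_bound = p_imperfect * p_fail_given_imperfect
    print(f"P(failure on demand) <= {p_failure_bound:.0e}")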

The reason that many fault-tolerant systems fail is that their components fail in ways different from those assumed in the design of the mechanism for fault tolerance. When the fault-tolerance aspects of the safety case are informal, the failure assumptions may be imprecise and their probabilities assessed optimistically (Johnson and Holloway 2006). Formal verification forces precision in the statement of failure-mode assumptions and, thereby, explicit recognition of the cases not tolerated, and realistic assessment of their probability. The latter should drive the design of fault-tolerant mechanisms toward those that make minimal assumptions and are uniformly effective (e.g., Byzantine-resilient algorithms) and away from the special-case treatments that are prevalent in homespun designs. [Pg.15]

The Reactor Safety Study attempted to make a realistic estimate of the potential effects of LWR accidents on the public health and safety. One BWR, Peach Bottom Unit 2, and one PWR, Surry Unit 1, were analyzed in detail. The Reactor Safety Study team used previous information from the Department of Defense and NASA to predict the effects of failures of small components in large, complex systems. Events that could potentially initiate core melt accidents were first identified. Event trees were then used to delineate possible sequences of successes or failures of systems provided to prevent core meltdown and/or the release of radionuclides. Fault trees were used to estimate the probabilities of system failures from available data on the reliability of system components. Using these techniques, thousands of possible core melt accident sequences were assessed for their occurrence probabilities. The consequences of such accident sequences were then estimated to complete the risk assessment. [Pg.52]
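
A minimal fault-tree evaluation in the spirit of this method; the gate structure and basic-event probabilities below are invented for illustration:

    # Minimal fault-tree evaluation: AND/OR gates over independent basic
    # events.  Structure and probabilities are invented for illustration.
    def or_gate(*probs):
        """Top event occurs if any input event occurs."""
        p = 1.0
        for q in probs:
            p *= (1 - q)
        return 1 - p

    def and_gate(*probs):
        """Top event occurs only if all input events occur."""
        p = 1.0
        for q in probs:
            p *= q
        return p

    pump_fails = or_gate(1e-3, 2e-4)                # motor fault OR control fault
    both_trains = and_gate(pump_fails, pump_fails)  # redundant train also fails
    core_damage = or_gate(both_trains, 1e-6)        # OR an unmitigated initiator
    print(f"P(core damage sequence) ~ {core_damage:.2e}")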

The difference between then and now is that, due to the increase in system complexity, an instinctive feel for probability has been replaced by quantitative probability assessment. The use of probability evaluations is particularly appropriate in the following instances (refer to Lloyd and Tye, 1995, p. 105, and Cherry, 1995) ... [Pg.148]

It is recommended that flammability always be assessed first, since it is inherently safer to avoid flammable atmospheres than to avoid sources of ignition such as static electricity. If a flammable atmosphere cannot be avoided at all times, the system should be designed to minimize both the probability and the consequences of ignition. In this chapter it is assumed that static electricity is the only source of ignition; however, in practical situations all sources of ignition, such as those described in [157], should be evaluated. [Pg.47]

Human reliability data in NUREG/CR-1278 was supplemented by the judgment of system analysts and plant personnel. Human error probabilities were developed from NUREG/CR-1278, human action time windows from system analysis, and some recovery times from analysis of plant-specific experience. Data sources were WASH-1400 HEPs, the Fullwood and Gilbert assessment of US power reactor experience, NUREG/CR-1278, and selected aerospace data. [Pg.182]

Chapter 4 focuses on techniques which are applied to a new or existing system to optimize human performance or qualitatively predict errors. Chapter 5 shows how these techniques are applied to risk assessment, and also describes other techniques for the quantification of human error probabilities. Chapters 6 and 7 provide an overview of techniques for analyzing the underlying causes of incidents and accidents that have already occurred. [Pg.3]

Quantitative human reliability data collection systems for generating human error probabilities for use in quantitative risk assessment. [Pg.248]

Once the system components and their failure modes have been identified, the acceptability of risks taken as a result of such failures must be determined. The risk assessment process yields more comprehensive and better results when reliable statistical and probability data are available. In the absence of such data, the results are a strong function of the engineering judgment of the design team. The important issue is that both the severity and probability (frequency) of the accident must be taken into account. [Pg.519]

Table 18.4.1 summarizes another method of risk assessment that can be applied to an accident system failure. Both probability and consequence have been ranked on a scale of 0 to 1, with table entries being the sum of probability and consequence. The acceptability of risk is a major decision and can be described by dividing the situations presented in Table 18.4.1 into unacceptable, marginally acceptable, and acceptable regions. Figure 18.4.2 graphically represents this risk data. ... [Pg.519]
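
A sketch of the sum-based ranking the table describes; the region boundaries below are assumed for illustration:

    def risk_entry(probability, consequence):
        """Both inputs ranked on a 0-to-1 scale; the entry is their sum."""
        assert 0 <= probability <= 1 and 0 <= consequence <= 1
        return probability + consequence

    def acceptability(entry, marginal=0.8, unacceptable=1.2):
        """Region boundaries are assumed for illustration."""
        if entry >= unacceptable:
            return "unacceptable"
        if entry >= marginal:
            return "marginally acceptable"
        return "acceptable"

    e = risk_entry(0.7, 0.6)
    print(f"{e:.1f} {acceptability(e)}")   # 1.3 unacceptable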

Neutralizing capacity is not the only measure of a required amine feed rate. Once all acidic characteristics have been neutralized, amine basicity becomes the important issue because this raises the pH above the neutralization point, to a more stable and sustainable level. Consequently, in practice we are concerned with the level of amine necessary to raise the condensate pH to a noncorrosive level. This practical amine requirement is difficult to obtain from theoretical calculations because it must take account of the amine volatility, DR, and the boiler system amine recycling factor (as well as temperature). As noted earlier, the basicity of an amine has little or no relationship to its volatility or DR, so reliable field results are probably a more important guide in assessing the suitability of an amine product than suppliers' tables. [Pg.523]


See other pages where Probability assessment systems is mentioned: [Pg.249]    [Pg.266]    [Pg.4788]    [Pg.1474]    [Pg.1480]    [Pg.226]    [Pg.3]    [Pg.466]    [Pg.1445]    [Pg.247]    [Pg.14]    [Pg.1012]    [Pg.127]    [Pg.361]    [Pg.2502]    [Pg.418]    [Pg.65]    [Pg.96]    [Pg.207]    [Pg.245]    [Pg.107]    [Pg.101]    [Pg.285]    [Pg.268]    [Pg.512]    [Pg.261]    [Pg.384]    [Pg.222]    [Pg.412]   





Assessment system

Probability systems
