
Probability estimation

Human error probabilities can also be estimated using methodologies and techniques originally developed in the nuclear industry. A number of different models are available (Swain, Comparative Evaluation of Methods for Human Reliability Analysis, GRS Project RS 688, 1988). This estimation process should be done with great care, as many factors can affect the reliability of the estimates. Methodologies using expert opinion to obtain failure rate and probability estimates have also been used where there is sparse or inappropriate data. [Pg.2277]
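Where only sparse or inappropriate data exist and expert judgment must be used, the individual expert estimates still have to be combined into a single value. The following is a minimal sketch of one common pooling approach (a geometric mean of expert-elicited probabilities); the numbers and the function name are illustrative assumptions, not part of any specific published HRA method.

```python
import math

def pooled_hep(expert_estimates):
    """Pool expert-elicited human error probabilities with a geometric mean.

    Geometric-mean pooling is one common way to aggregate sparse expert
    judgments of rare-event probabilities; the estimates passed in here
    are purely illustrative.
    """
    log_sum = sum(math.log(p) for p in expert_estimates)
    return math.exp(log_sum / len(expert_estimates))

# Three hypothetical expert estimates of the same task error probability
print(pooled_hep([1e-3, 3e-3, 5e-4]))  # ~1.1e-3
```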

The Reverend Thomas Bayes, in a posthumously published paper (1763), provided a systematic framework for the introduction of prior knowledge into probability estimates (Crellin, 1972). Indeed, Bayesian methods may be viewed as nothing more than convolving two distributions. If it were this simple, why the controversy ... [Pg.50]
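As an illustration of that framework, the sketch below updates a prior estimate of a failure probability with observed data using the conjugate Beta-binomial pair; the prior parameters and observed counts are invented for the example and do not come from the cited text.

```python
# Hedged sketch of a Bayesian update of a failure probability: a Beta prior
# encoding prior knowledge is combined with binomial evidence. All numbers
# below are illustrative placeholders.
alpha_prior, beta_prior = 0.5, 49.5   # prior belief: mean failure probability ~0.01
failures, demands = 2, 100            # hypothetical observed performance data

alpha_post = alpha_prior + failures
beta_post = beta_prior + (demands - failures)

posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)  # ~0.017: the prior is pulled toward the observed data
```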

The human error probabilities estimated for a given task can now be modified to reflect the actual performance situation. For example, if the labeling scheme at a particular plant is very poor, the probability should be increased towards an upper bound. If the tagging control system at a plant is particularly good, the probability for certain errors should be decreased toward a lower bound. [Pg.175]
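One simple way to carry out such an adjustment is to interpolate, on a log scale, between the nominal probability and the relevant bound according to how good or poor the performance-shaping conditions are. The rule below is an illustrative assumption, not a prescribed HRA procedure, and all numbers are hypothetical.

```python
import math

def adjusted_hep(nominal, lower_bound, upper_bound, quality):
    """Shift a nominal human error probability toward its bounds.

    quality runs from -1 (very poor conditions: move toward the upper bound)
    to +1 (very good conditions: move toward the lower bound). Interpolation
    is done on a log scale because HEPs span orders of magnitude.
    """
    target = upper_bound if quality < 0 else lower_bound
    frac = abs(quality)
    return math.exp((1 - frac) * math.log(nominal) + frac * math.log(target))

# Poor labeling (quality = -0.5) raises a nominal 3e-3 HEP toward its 1e-2 upper bound
print(adjusted_hep(3e-3, 1e-3, 1e-2, -0.5))  # ~5.5e-3
```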

Voska, K. J., and J. N. O'Brien, Human Error Probability Estimation Using Licensee Event Reports, BNL, July 1984. [Pg.470]

In April 1982, a data workshop was held to evaluate, discuss, and critique data in order to establish a consensus generic data set for the USNRC-RES National Reliability Evaluation Program (NREP). The data set contains component failure rates and probability estimates for loss of coolant accidents, transients, loss of offsite power events, and human errors that could be applied consistently across the nuclear power industry as screening values for initial identification of dominant accident sequences in PRAs. This data set was used in the development of guidance documents for the performance of PRAs. [Pg.82]

Combinations of weather conditions, wind speed, and wind direction, along with the boiling point, vapor density, diffusivity, and heat of vaporization of the chemical released, vary the health impact of the released chemical on the nearby population. To model a runaway reaction, the release of 10,000 gallons was assumed to occur over a 15-minute period. The concentration of the chemical released was estimated, using procedures described in Part III (Chapter 12), for each combination of weather condition, wind speed, and wind direction. The results, combined with population data for the area adjacent to the plant, led to probability estimates of the number of people affected. Table 21.5.3 summarizes the findings. [Pg.623]
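The final step of such an assessment combines the probability of each weather/wind scenario with the estimated consequence for that scenario. The sketch below shows the probability-weighted calculation in its simplest form; the scenario probabilities and affected-population counts are hypothetical placeholders, not the values of Table 21.5.3.

```python
# Combine scenario probabilities with consequence estimates to obtain the
# probability-weighted number of people affected. Values are illustrative.
scenarios = [
    # (probability of weather/wind combination, people affected if it occurs)
    (0.40, 0),
    (0.30, 150),
    (0.20, 600),
    (0.10, 2000),
]

expected_affected = sum(p * n for p, n in scenarios)
print(expected_affected)  # 365 people on a probability-weighted basis
```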

The conditional probability estimates show that one should expect almost exclusively good y values while operating inside these zones of the decision space, in contrast to the current operating conditions, which yield just 40% good y values. [Pg.117]
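Such conditional probability estimates can be obtained directly from historical operating data by counting outcomes inside and outside the candidate zone. The sketch below uses an invented toy data set and zone indicator purely to show the counting.

```python
# Estimate P(good y | operating inside the zone) from historical batches.
# The data are a toy example, not the case study's actual records.
batches = [
    # (inside_zone, good_y)
    (True, True), (True, True), (True, True), (True, True),
    (False, True), (False, False), (False, False), (False, True), (False, False),
]

in_zone = [good for inside, good in batches if inside]
out_zone = [good for inside, good in batches if not inside]

print(sum(in_zone) / len(in_zone))    # 1.0 inside the candidate zone
print(sum(out_zone) / len(out_zone))  # 0.4 under the current operating conditions
```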

There are various severity-of-illness scoring systems for sepsis and trauma (R11). Severity scoring can be used, in conjunction with other risk factors, to anticipate and evaluate outcomes such as hospital mortality rate. The most widely used system is the Acute Physiology, Age, Chronic Health Evaluation II (APACHE II) classification system (K12). APACHE III was developed to predict hospital mortality more accurately for critically ill hospitalized adults (K13). It provides objective probability estimates for critically ill patients treated in intensive care units (ICUs). For critically ill posttrauma patients with sepsis or SIRS, another system for quantitative physiologic classification and severity stratification of the host defense response was described recently (R11). However, this Physiologic State Severity Classification (PSSC) has not yet been applied routinely in the ICU setting. [Pg.57]
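Scoring systems of this kind typically convert a physiology-based score into a probability estimate through a logistic model. The sketch below shows only the general form of such a transform; the intercept and slope are invented for illustration and are not the published APACHE coefficients.

```python
import math

def mortality_probability(score, intercept=-5.0, slope=0.05):
    """Map a severity score to a mortality probability via a logistic transform.

    The coefficients are illustrative placeholders, not APACHE II/III values.
    """
    logit = intercept + slope * score
    return 1.0 / (1.0 + math.exp(-logit))

print(mortality_probability(60))   # ~0.12 with these illustrative coefficients
print(mortality_probability(120))  # ~0.73
```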

After repair, the component is returned to the working state. Minimal cut set analysis is a mathematical technique for developing and providing probability estimates for the combinations of basic component failures and/or human error probabilities, which are necessary and sufficient to result in the occurrence of the top event. [Pg.50]
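Under the usual rare-event approximation, the top-event probability is estimated by summing, over the minimal cut sets, the product of the basic-event probabilities in each set. The sketch below illustrates that calculation with invented event names and values.

```python
import math

# Rare-event approximation: P(top) is approximately the sum over minimal cut
# sets of the product of the basic-event probabilities in each set.
# All event names and values are illustrative.
basic_events = {"pump_fails": 1e-3, "valve_stuck": 5e-4, "operator_error": 1e-2}

minimal_cut_sets = [
    {"pump_fails", "operator_error"},  # both failures needed on this path
    {"valve_stuck"},                   # a single failure is sufficient here
]

p_top = sum(math.prod(basic_events[e] for e in cs) for cs in minimal_cut_sets)
print(p_top)  # 1e-5 + 5e-4 = 5.1e-4
```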

Summation of Risk - The combination of the severity and probability estimates of an incident occurring. [Pg.89]
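A common way to make that combination concrete is to assign ordinal ratings to severity and probability and multiply them into a single ranking number. The category names and scale below are an illustrative assumption, not taken from any particular standard.

```python
# Combine severity and probability ratings into a single risk score.
severity = {"minor": 1, "serious": 2, "major": 3, "catastrophic": 4}
probability = {"remote": 1, "unlikely": 2, "likely": 3, "frequent": 4}

def risk_score(sev, prob):
    """Return the product of the two ordinal ratings (1-16 scale here)."""
    return severity[sev] * probability[prob]

print(risk_score("major", "unlikely"))  # 6: mid-range risk on this toy scale
```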

The 13C NMR study utilized cross polarization-magic angle spinning (CP-MAS) with spin counting. The elemental and functional group analyses provided input for a series of analytical constraints calculations that yield an absolute upper limit for the amount of aromatic carbon and most probable estimates of both aromatic and non-carboxyl aliphatic carbon in each sample. Spin counting experiments demonstrate that less than 50% of the... [Pg.282]

In zinc-deficient rats, the zinc content of the epididymis was only about half the normal value,28,87 and the level of α-D-mannosidase activity was also a little over half the value usually observed.26 However, the zinc concentration of the tissue was still in vast excess over that required for stoichiometric combination with the enzyme protein, calculated on any probable estimate of its specific activity and molecular weight (see Section III,5, p. 433). [Pg.436]

An obvious objection to these claims is that, in the face of unemployment, stock market crashes and the like, it is wildly implausible to say that people are making correct guesses about what will happen. Surely, these consequences cannot have been fully foreseen. Rational-expectation theorists respond by saying that anticipations are more complex. People do not anticipate future events as if they were certain to happen. Rather, they form probability estimates over the many future events that can happen. These estimates are rational, in the sense of taking account of all available information and not being subject to systematic... [Pg.117]

In designing and analyzing the various aspects of a trial, the statistician uses inferential reasoning and methodology. The formation of a testable hypothesis, the creation of a protocol for hypothesis testing, probability estimation, and decision-making all involve inferential reasoning. [Pg.292]

Zheng M, Liu Z, Xue C, Zhu W, Chen K, Luo X, Jiang H. Mutagenic probability estimation of chemical compounds by a novel molecular electrophilicity vector and support vector machine. Bioinformatics 2006;22:2099-2106. [Pg.279]

From guidelines in Section 19.10, a 50% probability estimate based on Phase 0 information would need 22% contingency. [Pg.452]

Equation (5.1-7) allows very detailed estimation. Maximizing p(θ|y) gives a modal (most probable) estimate θ̂ of the parameter vector. Integration of p(θ|y) over any region of θ gives the posterior probability content of that region. Thus, one can make very direct probability statements about the parameters of a postulated model using all the available information. [Pg.79]
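For a one-parameter illustration, the sketch below carries out both operations for a Beta(3, 98) posterior (an assumed example, not Equation (5.1-7) itself): the mode gives the modal estimate, and integrating the density over an interval gives that interval's posterior probability content.

```python
from scipy.stats import beta

a, b = 3, 98                       # assumed posterior parameters, for illustration only

theta_map = (a - 1) / (a + b - 2)  # mode of a Beta(a, b) density: the modal estimate
p_region = beta.cdf(0.05, a, b) - beta.cdf(0.01, a, b)  # P(0.01 < theta < 0.05 | y)

print(theta_map)  # ~0.0202
print(p_region)   # posterior probability that theta lies in (0.01, 0.05)
```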

