
Review of Probability Theory

Equipment failures or faults in a process occur as a result of a complex interaction of the individual components. The overall probability of a failure in a process depends highly on the nature of this interaction. In this section we define the various types of interactions and describe how to perform failure probability computations. [Pg.472]

Data are collected on the failure rate of a particular hardware component. With adequate data it can be shown that, on average, the component fails after a certain period of time. This is called the average failure rate and is represented by μ with units of faults/time. The probability that the component will not fail during the time interval (0, t) is given by a Poisson distribution: [Pg.472]

The failure density function is defined as the derivative of the failure probability  [Pg.472]

The area under the complete failure density function is 1. [Pg.472]

The failure density function is used to determine the probability P of at least one failure in the time period t0 to t1: [Pg.472]
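The relationships above can be sketched numerically. For a constant average failure rate μ, the reliability is R(t) = exp(−μt), the failure density is f(t) = μ exp(−μt), and the probability of at least one failure between t0 and t1 is the integral of f over that interval, which reduces to R(t0) − R(t1). A minimal Python sketch, with illustrative function names and parameter values:

```python
import math

def reliability(mu, t):
    # Probability of no failure in (0, t) for constant failure rate mu
    return math.exp(-mu * t)

def failure_density(mu, t):
    # f(t) = dP/dt, where P(t) = 1 - R(t) is the failure probability
    return mu * math.exp(-mu * t)

def prob_failure_between(mu, t0, t1):
    # Integral of f(t) from t0 to t1, which equals R(t0) - R(t1)
    return reliability(mu, t0) - reliability(mu, t1)

mu = 0.5  # faults per year (illustrative value)
print(reliability(mu, 1.0))                # chance of no failure in one year
print(prob_failure_between(mu, 0.0, 2.0))  # at least one failure within two years
```

Note that the area under f(t) over (0, ∞) is 1, consistent with the statement above that the complete failure density function integrates to unity.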


The appendix gives a brief review of probability theory, applicable here. [Pg.128]

Wu, Ruff and Faeth [249] made an extensive review of previous theories and correlations for droplet size after primary breakup, and performed an experimental study of primary breakup in the near-nozzle region for various relative velocities and liquid properties. Their experimental measurements revealed that the droplet size distribution after primary breakup, and prior to any secondary breakup, satisfies Simmons' universal root-normal distribution [264]. In this distribution, a straight line can be generated by plotting (D/MMD)^0.5 vs. cumulative volume of droplets on a normal-probability scale, where MMD is the mass median diameter of the droplets. The slope of the straight line is specified by the ratio... [Pg.161]
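The root-normal construction described above can be sketched as follows: sort the droplet diameters, build the cumulative volume fraction, take (D/MMD)^0.5 as one coordinate and the probit (inverse normal CDF) of the cumulative volume fraction as the other, and fit a straight line. This is an illustrative sketch, not the authors' procedure; the function name and the D^3 volume weighting are assumptions.

```python
from statistics import NormalDist

def root_normal_fit(diameters):
    """Fit droplet diameters to a root-normal form (illustrative sketch).

    Returns the mass median diameter (MMD) and the least-squares slope of
    probit(cumulative volume) vs. sqrt(D/MMD).
    """
    d = sorted(diameters)
    vols = [x ** 3 for x in d]            # droplet volume taken to scale as D^3
    total = sum(vols)
    cum, running = [], 0.0
    for v in vols:
        running += v
        cum.append(running / total)
    # Mass median diameter: diameter at 50% cumulative volume
    mmd = next(x for x, c in zip(d, cum) if c >= 0.5)
    # Root-normal coordinates: sqrt(D/MMD) vs. probit of cumulative volume
    nd = NormalDist()
    xs = [(x / mmd) ** 0.5 for x in d]
    ys = [nd.inv_cdf(min(max(c, 1e-6), 1.0 - 1e-6)) for c in cum]
    # Least-squares slope of the straight line on the probability scale
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return mmd, sxy / sxx
```

If the data truly follow the root-normal distribution, the transformed points lie close to a straight line, and the fitted slope characterizes the spread of the distribution.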

Most physical chemistry laboratory experiments are concerned with measurements. There is a basic difference, however, between experiments performed at the undergraduate level and those performed in physical chemistry research laboratories. At the undergraduate level, students perform experiments that have a known outcome. Generally, these experiments have been performed many times over a number of years by numerous students. In research laboratories, on the other hand, scientists usually perform experiments on unknowns. There are no laboratory instructors from whom a research scientist can obtain the correct answer to an experimental measurement to see if he or she has performed the measurement correctly. Thus, it is important for students of physical chemistry, who hope someday to become proficient researchers, to learn how to determine the reliability of their experimental data. One common way to help determine the reliability of experimental data is to perform the experiment more than once. It is known that when a measurement is made more than once, the results scatter around some average value. We shall see in the next few sections that this experimental scatter can be used to help determine the probability that the average value is the true value. Before going into this, however, let us first review simple probability theory. [Pg.210]
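The idea of repeating a measurement and characterizing the scatter around the average can be illustrated with a few lines of Python (the measurement values are invented for illustration):

```python
import statistics

# Five repeated measurements of the same quantity (illustrative values)
measurements = [4.98, 5.03, 5.01, 4.97, 5.05]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)     # sample standard deviation (scatter)
sem = stdev / len(measurements) ** 0.5     # standard error of the mean

print(f"mean = {mean:.3f}, s = {stdev:.3f}, SEM = {sem:.3f}")
```

The standard error of the mean shrinks as the number of repeated measurements grows, which is why repetition helps establish how probable it is that the average value is close to the true value.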

Waller (NUREG/CR-4314) provides a concise review of USC with 143 references and relevant papers. He quotes Evans (1975) that no new theory is needed to analyze system dependencies; the problem persists because the conditional probabilities are not known. Except for the bounding method used in WASH-1400, the other two methods presented below can be shown to be derivable from the theory of Marshall and Olkin (1967). Waller reviews methods other than the three presented here; they are omitted because they offer no physical insight into the problem. [Pg.125]

A successful method to obtain dynamical information from computer simulations of quantum systems has recently been proposed by Gubernatis and coworkers [167-169]. It uses concepts from probability theory and Bayesian logic to solve the analytic continuation problem in order to obtain real-time dynamical information from imaginary-time computer simulation data. The method has become known under the name maximum entropy (MaxEnt), and has a wide range of applications in other fields apart from physics. Here we review some of the main ideas of this method and an application [175] to the model fluid described in the previous section. [Pg.102]

This nonequilibrium Second Law provides a basis for a theory for nonequilibrium thermodynamics. The physical identification of the second entropy in terms of molecular configurations allows the development of the nonequilibrium probability distribution, which in turn is the centerpiece for nonequilibrium statistical mechanics. The two theories span the very large and the very small. The aim of this chapter is to present a coherent and self-contained account of these theories, which have been developed by the author and presented in a series of papers [1-7]. The theory up to the fifth paper has been reviewed previously [8], and the present chapter consolidates some of this material and adds the more recent developments. [Pg.3]

The history of the observation of anomalous voltammetry is reviewed and an experimental consensus on the relation between the anomalous behavior and the conditions of measurement (e.g., surface preparation, electrolyte composition) is presented. The behavior is anomalous in the sense that features appear in the voltammetry of well-ordered Pt(111) surfaces that had never before been observed on any other type of Pt surface, and these features are not easily understood in terms of current theory of electrode processes. A number of possible interpretations for the anomalous features are discussed. A new model for the processes is presented which is based on the observation of long-period icelike structures in the low temperature states of water on metals, including Pt(111). It is shown that this model can account for the extreme structure sensitivity of the anomalous behavior, and shows that the most probable explanation of the anomalous behavior is based on capacitive processes involving ordered phases in the double-layer, i.e., no new chemistry is required. [Pg.37]

Finitely additive invariant measures on non-compact groups were studied by Birkhoff (1936) (see also the book of Hewitt and Ross, 1963, Chapter 4). The frequency-based Mises approach to probability theory foundations (von Mises, 1964), as well as the logical foundations of probability by Carnap (1950), do not need σ-additivity. Non-Kolmogorov probability theories are discussed now in the context of quantum physics (Khrennikov, 2002), nonstandard analysis (Loeb, 1975) and many other problems (and we do not pretend to provide a full review of related works here). [Pg.109]

Fifty years have elapsed since the first major surge occurred in the development of the Athabasca oil sands. The main effort has been devoted to the development of the hot water extraction process; 13 significant projects utilizing this process are reviewed in this paper. However, many other techniques have also been extensively tested. These are classified into several basic concepts, and the mechanism underlying each is briefly described. A critical review of K. A. Clark's theories concerning the flotation of bitumen is presented, and his theories are updated to accommodate the different mechanisms of the primary and secondary oil recovery processes. The relative merits of the mining and in situ approaches are discussed, and an estimate is made of the probable extent of the oil sand development toward the end of this century. [Pg.88]

Land (1987) has reviewed and discussed theories for the formation of saline brines in sedimentary basins. We will summarize his major relevant conclusions here. He points out that theories for deriving most brines from connate seawater, by processes such as shale membrane filtration, or connate evaporitic brines are usually inadequate to explain their composition, volume and distribution, and that most brines must be related, at least in part, to the interaction of subsurface waters with evaporite beds (primarily halite). The commonly observed increase in dissolved solids with depth is probably largely the result of simple "thermo-haline" circulation and density stratification. Also many basins have basal sequences of evaporites in them. Cation concentrations are largely controlled by mineral solubilities, with carbonate and feldspar minerals dominating so that Ca2+ must exceed Mg2+, and Na+ must exceed K+ (Figures 8.8 and 8.9). Land (1987) hypothesizes that in deep basins devolatilization reactions associated with basement metamorphism may also provide an important source of dissolved components. [Pg.382]

Cohen (C55) reviewed theories of the expansion mechanism. Probably a majority of workers (e.g. Refs I9,I10,O16,O24,BI33) have attributed expansion to forces exerted by the growth of the ettringite crystals. Of other theories, the most significant is that proposed by Mehta (M100), who attributed it to imbibition of water by the gelatinous layer of colloidal ettringite. On this hypothesis, expansion does not occur with the larger crystals formed at low CaO concentrations because these do not form... [Pg.338]

