
The conditional probability

1 MARKOV CHAINS DISCRETE IN TIME AND SPACE. 2.1-1 The conditional probability [Pg.11]

The concept of conditional probability plays a fundamental role in Markov chains and is presented first. [Pg.11]

In observing the outcomes of random experiments, one is often interested in how the outcome of one event Sk is influenced by that of a previous event Sj. For example, in one extreme case the relation between Sk and Sj may be such that Sk always occurs if Sj does, while in the other extreme case, Sk never occurs if Sj does. The first extreme may be demonstrated by the amazing lithograph Waterfall by Escher [10, p.323] depicted in Fig.2-0. [Pg.12]

To characterize the relation between events Sk and Sj, the conditional probability of Sk occurring under the condition that Sj is known to have occurred is introduced. This quantity is defined [5, p.25] by the Bayes rule, which reads prob{Sk | Sj} = prob{SkSj} / prob{Sj} (2-1). [Pg.13]

For the Escher example, Eq. (2-1) reads prob{S2 | S1} = prob{S2S1} / prob{S1}. prob{Sk | Sj} is the probability of observing an event Sk under the condition that event Sj has already been observed or occurred; prob{Sj} is the probability of observing event Sj. SkSj is the intersection of events Sk and Sj, i.e., that both Sk and Sj may be observed, but not simultaneously. prob{SkSj} is the probability for the intersection of events Sk and Sj, or the probability of observing, not... [Pg.13]
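As a minimal numerical sketch of Eq. (2-1): the conditional probability can be estimated from event counts. The event names and counts below are invented purely for illustration.

```python
# Estimate prob{S2 | S1} = prob{S2 S1} / prob{S1} from counts.
# All counts here are hypothetical, chosen only to illustrate Eq. (2-1).
n_trials = 1000      # total number of observations
n_s1 = 400           # trials in which S1 occurred
n_s1_and_s2 = 100    # trials in which both S1 and S2 occurred

p_s1 = n_s1 / n_trials                # prob{S1}
p_s1_and_s2 = n_s1_and_s2 / n_trials  # prob{S2 S1}, the intersection

p_s2_given_s1 = p_s1_and_s2 / p_s1    # Eq. (2-1)
print(f"prob{{S2 | S1}} = {p_s2_given_s1:.3f}")  # 0.250
```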


The main point of our elaboration is that the Gibbs measure (4) of the potential lattice of interest can be considered as a nontrivial prior in the Bayes formula for the conditional probability, applied to the problem of image restoration ... [Pg.114]

In the Maximum Entropy Method (MEM), which proceeds by maximizing the conditional probability P(f | p) (6) to yield the most probable solution, the probability P(p) introducing the a priori knowledge is, in many image restoration applications, derived from so-called ergodic situations [1]. This means that the a priori probabilities of all microscopic configurations p are all the same. This leads to the well-known form of the functional S(p) [9] ... [Pg.115]
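The functional S(p) referred to here is presumably the familiar Shannon entropy form used in MEM image restoration; under that assumption, with p_i denoting the normalized configuration probabilities (our notation), it reads:

```latex
S(p) \;=\; -\sum_{i} p_i \ln p_i .
```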

Here g(r) = G(r) + 1 is called a radial distribution function, since n g(r) is the conditional probability that a particle will be found at r if there is another at the origin. For strongly interacting systems, one can also introduce the potential of mean force w(r) through the relation g(r) = exp(-βw(r)). Both g(r) and w(r) are also functions of temperature T and density n... [Pg.422]
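A minimal sketch of how g(r) is typically estimated in practice: histogram the pair distances in a configuration and normalize by the ideal-gas expectation. The box size, particle count, and bin width below are invented for illustration, and the configuration is random, so g(r) should come out near 1.

```python
import numpy as np

# Estimate the radial distribution function g(r) by histogramming pair
# distances in a cubic box with periodic boundaries. The configuration
# is uncorrelated (ideal gas), so g(r) ~ 1 up to statistical noise.
rng = np.random.default_rng(0)
L, N = 10.0, 500                      # box edge and particle count (illustrative)
pos = rng.uniform(0.0, L, size=(N, 3))

# Minimum-image pair distances
d = pos[:, None, :] - pos[None, :, :]
d -= L * np.round(d / L)
r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(N, k=1)]

bins = np.linspace(0.0, L / 2, 51)
hist, edges = np.histogram(r, bins=bins)

# Normalize by the expected pair count per spherical shell for an ideal gas
n = N / L**3                                   # number density
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
ideal = 0.5 * N * n * shell_vol
g = hist / ideal
print(g[20:30].round(2))  # approximately 1 in every bin, up to noise
```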

The effective field is determined by assuming that the conditional probabilities are the same, i.e. [Pg.518]

By expressing the mean-field interaction of an electron at r with the N - 1 other electrons in terms of a probability density ρ(r') that is independent of the fact that another electron resides at r, the mean-field models ignore spatial correlations among the electrons. In reality, as shown in figure B3.1.5, the conditional probability density for finding one of the N - 1 electrons at r', given that one electron is at r, depends on r. The absence of a spatial correlation is a direct consequence of the spin-orbital product nature of the mean-field wavefunctions... [Pg.2163]
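In standard reduced-density notation (ours, not the source's, with ρ2 the pair density and ρ the one-electron density), the statement is that

```latex
P(\mathbf{r}' \mid \mathbf{r}) \;=\; \frac{\rho_2(\mathbf{r}, \mathbf{r}')}{\rho(\mathbf{r})},
\qquad
\text{mean field:}\quad P(\mathbf{r}' \mid \mathbf{r}) \approx \rho(\mathbf{r}'),
```

i.e., the mean-field approximation replaces the conditional density, which depends on r, by an unconditional one that does not.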

To exemplify both aspects of the formalism, and for illustration purposes, we divide the present manuscript into two major parts. We start with calculations of trajectories using approximate solutions of the atomically detailed equations (approach B). We then proceed to derive the equations for the conditional probability from which a rate constant can be extracted. We end with a simple numerical example of trajectory optimization. More complex problems are (and will be) discussed elsewhere [7]. [Pg.264]

A related algorithm can be written also for the Brownian trajectory [10]. However, the essential difference between an algorithm for a Brownian trajectory and equation (4) is that the Brownian algorithm is not deterministic. Due to the existence of the random force, we cannot be satisfied with a single trajectory, even with pre-specified coordinates (and velocities, if relevant). It is necessary to generate an ensemble of trajectories (sampled with different values of the random force) to obtain a complete picture. Instead of working with an ensemble of trajectories, we prefer to work with the conditional probability, i.e., we ask what is the probability that a trajectory, being at... [Pg.266]
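A minimal sketch of the point about ensembles: overdamped Brownian dynamics in a one-dimensional double well, integrated with the Euler-Maruyama scheme. The potential, friction, and temperature values are invented for illustration and are not the authors' model.

```python
import numpy as np

# Overdamped Brownian dynamics in the double well U(x) = (x^2 - 1)^2,
# integrated with the Euler-Maruyama scheme. Because of the random force,
# a single trajectory is not informative on its own: we propagate an
# ensemble, each member with an independent noise history.
rng = np.random.default_rng(1)

def force(x):
    return -4.0 * x * (x**2 - 1.0)   # -dU/dx

n_traj, n_steps = 1000, 5000         # ensemble size and step count (illustrative)
dt, kT, gamma = 1e-3, 0.5, 1.0       # time step, temperature, friction (illustrative)

x = np.full(n_traj, -1.0)            # all trajectories start in the left well (state A)
noise_amp = np.sqrt(2.0 * kT * dt / gamma)
for _ in range(n_steps):
    x += force(x) * dt / gamma + noise_amp * rng.standard_normal(n_traj)

# Fraction of the ensemble found in the right well (state B) at time t:
print("fraction with x > 0 at t:", np.mean(x > 0.0))
```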

The definition of the above conditional probability for the case of Brownian trajectories can be found in textbooks [12]. However, the definition of the conditional probability for Newton's equations of motion is subtler than that. [Pg.268]

Consider a numerical solution of Newton's differential equation with a finite time step Δt. In principle, since Newton's equations of motion are deterministic, the conditional probability should be a delta function... [Pg.268]
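A plausible formal rendering of that statement (the notation, with X(t) the phase-space point and Φ the deterministic propagator over one step, is ours):

```latex
P\!\left(X(t+\Delta t) = Y \,\middle|\, X(t) = X\right)
\;=\; \delta\!\left(Y - \Phi_{\Delta t}(X)\right).
```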

This is the conditional probability that the system which was in state A at time zero will be in state B at time t. Note that we use the normalized conditional probability since the trajectory must end either at A or at B. [Pg.275]
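In counting terms (our notation), the normalized probability is simply

```latex
P(B, t \mid A, 0) \;=\; \frac{N_B(t)}{N_A(t) + N_B(t)},
```

where N_A(t) and N_B(t) are the numbers of trajectories, all started in A, that are found in A or B at time t; the denominator enforces the normalization noted above.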

If the above assumption is reasonable, then the modeling of most probable trajectories and of ensembles of trajectories is possible. We further discussed the calculations of the state conditional probability and the connection of the conditional probability to rate constants and phenomenological models. [Pg.279]

If the events are not independent, provision must be made for this, so we define a quantity called the conditional probability. For the probability of a head given the prior event of a head, this is written Ph/h, where the first quantity in the subscript is the event under consideration and that following the slash mark is the prior condition. Thus Pt/h is the probability of a tail following... [Pg.454]
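A quick simulation of the coin-flip example: for a fair coin the flips are independent, so the estimate of Ph/h should come out equal to the unconditional probability of a head, 0.5.

```python
import numpy as np

# Simulate coin flips and estimate Ph/h: the probability of a head given
# that the previous flip was a head. For a fair, independent coin this
# equals the unconditional P_h = 0.5.
rng = np.random.default_rng(2)
flips = rng.integers(0, 2, size=100_000)    # 1 = head, 0 = tail

prev, curr = flips[:-1], flips[1:]
p_h_given_h = np.mean(curr[prev == 1])      # heads following a head
p_t_given_h = 1.0 - p_h_given_h             # tails following a head
print(f"Ph/h ~ {p_h_given_h:.3f}, Pt/h ~ {p_t_given_h:.3f}")
```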

The probability Pij as given by Eq. (7.32) is replaced by the conditional probability Pij/i, which is defined as... [Pg.456]

The first rule states that the probability of A plus the probability of not-A (Ā) is equal to 1. The second rule states that the probability for the occurrence of two events is related to the probability of one of the events occurring multiplied by the conditional probability of the other event given the occurrence of the first event. We can drop the notation of conditioning on I as long as it is understood implicitly that all probabilities are conditional on the information we possess about the system. Dropping the I, we have the usual expression of Bayes rule. [Pg.315]
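Written out in standard notation (our rendering, since the source's equations are not reproduced in this excerpt), the two rules and the Bayes rule that follows from them are:

```latex
P(A) + P(\bar{A}) = 1, \qquad
P(A, B) = P(A)\, P(B \mid A) = P(B)\, P(A \mid B)
\;\;\Longrightarrow\;\;
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.
```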

Consider an experiment in which, in N trials, events A and B occur together N(A∩B) times, and event B occurs N(B) times. The conditional probability of A given B is equation 2.4-1 or 2.4-2. Rearranging results in equation 2.4-3. [Pg.41]
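The numbered equations are not reproduced in this excerpt; from the definitions just given, they are presumably the frequency form of the conditional probability and its rearrangement:

```latex
P(A \mid B) = \frac{N(A \cap B)}{N(B)} = \frac{P(A \cap B)}{P(B)}
\quad\Longrightarrow\quad
P(A \cap B) = P(A \mid B)\, P(B).
```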

If the failure distribution of a component is exponential, the conditional probability of observing exactly M failures in test time t, given a true (but unknown) failure rate λ and a Poisson distribution, is equation 2.6-9. The continuous form of Bayes's equation is equation... [Pg.52]
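Equation 2.6-9 is not reproduced in this excerpt; for a Poisson process with rate λ it is presumably the standard form:

```latex
P(M \mid \lambda, t) = \frac{(\lambda t)^{M}\, e^{-\lambda t}}{M!}.
```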

Waller (NUREG/CR-4314) provides a concise review of USC with 143 references and relevant papers. He quotes Evans (1975) that no new theory is needed to analyze system dependencies; the problem persists because the conditional probabilities are not known. Except for the bounding method used in WASH-1400, the other two methods presented below can be shown to be derivable from the theory of Marshall and Olkin (1967). Waller reviews methods other than the three presented here; they are omitted because they offer no physical insight into the problem. [Pg.125]

Fleming et al. (1985) define λ as the independent failure rate and denote higher-order effects in order of the Greek alphabet (skipping α). The conditional probability that a CCF is shared by one... [Pg.127]

Since the numerical estimate of G(r) is necessarily incomplete and inaccurate, the inversion is not possible without ambiguity. Gubernatis and coworkers suggested resolving the ambiguity by choosing the most probable A consistent with the data G, i.e., they chose the A that maximizes the conditional probability P[A|G]. This is justified since A has the... [Pg.105]

The conditional probability of event B given A is denoted by P(B|A) and defined as... [Pg.548]

Figure 5. The solid line shows the probability for finding the charge on a Cu site between q and q+dq in an fcc disordered alloy with any concentration. The dotted lines show the conditional probabilities corresponding to sites with a concentration of Cu atoms on the nearest-neighbor shell of 100%, 75%, 50%, 25%, and 0%.
Fig. 5. The one that has non-zero values only near q=0.0 corresponds to twelve Cu atoms on the nn-shell, c1=100%. It is known that an atom on a site surrounded by like atoms behaves somewhat like an atom in a pure crystal, and would have little net charge. The conditional probability centered near q=0.2 corresponds to c1=0%, with all the neighboring atoms Zn. Such an atom behaves like a Cu impurity in a Zn crystal. The probabilities PCu(c1,q) for c1=25%, 50%, and 75% have their centers between these limits. The conditional probabilities have a structure themselves. Extrapolating, it should be possible to write PCu(c1,q) as a sum of the conditional probabilities PCu(c1,c2,q), where c2 is the concentration of Cu atoms on the second nn-shell. That probability could, in turn, be written as the sum of probabilities PCu(c1,c2,c3,q), where c3 is the concentration of Cu atoms on the third nn-shell.
PCu(c1,q) is clearly not a δ-function as has been suggested. Many more LSMS calculations would have to be done in order to determine the structure of PCu(c1,q) for fcc alloys in detail, but it is easier to see the structure in the conditional probability for bcc alloys. The probability PCu(q) for finding a charge between q and q+dq on a Cu site in a bcc Cu-Zn alloy and three conditional probabilities PCu(c1,q) are shown in Fig. 6. These functions were obtained, as for the fcc case, by averaging the LSMS data for the bcc alloys with five concentrations. The probability function is not a uniform function of q, but the structure is not as clear-cut as for the fcc case. The conditional probabilities PCu(c1,q) are non-zero over a wider range than they are for the fcc alloys, and it can be seen clearly that they have fine structure as well. Presumably, each PCu(c1,q) can be expressed as a sum of probabilities with two conditions PCu(c1,c2,q), but there is no reason to expect even those probabilities to be δ-functions. [Pg.8]
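The hierarchical decomposition described above can be written compactly as (the weights w are our notation; the excerpt does not give them explicitly):

```latex
P_{\mathrm{Cu}}(q) \;=\; \sum_{c_1} w(c_1)\, P_{\mathrm{Cu}}(c_1, q),
\qquad
P_{\mathrm{Cu}}(c_1, q) \;=\; \sum_{c_2} w(c_2 \mid c_1)\, P_{\mathrm{Cu}}(c_1, c_2, q),
```

where w(c1) is the probability of finding a nearest-neighbor shell of concentration c1, and w(c2|c1) conditions the second shell on the first.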

Conditional probabilities of failure can be used to predict how many of the unfailed units will fail within a specified period. For each unit, the estimate of the conditional probability of failure within a specified period of time (8000 hours here) must be calculated. If there is a large number of units and the conditional probabilities are small, then the number of failures in that period will be approximately Poisson distributed, with mean equal to the sum of the conditional probabilities, which must be expressed as decimals rather than percentages. The Poisson distribution allows us to make probability statements about the number of failures that will occur within a given period of time. [Pg.1050]
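A minimal sketch of this prediction; the per-unit conditional probabilities below are hypothetical, purely for illustration.

```python
import math

# Predict the number of failures among surviving units over the next
# 8000 hours. Each entry is one unit's conditional probability of failing
# in that period, expressed as a decimal (values are illustrative).
p_fail = [0.02, 0.05, 0.01, 0.03, 0.04, 0.02, 0.06, 0.01]

mu = sum(p_fail)  # mean of the approximating Poisson distribution

def poisson_pmf(k, mu):
    return mu**k * math.exp(-mu) / math.factorial(k)

# Probability statements about the number of failures in the period:
for k in range(4):
    print(f"P({k} failures) = {poisson_pmf(k, mu):.3f}")
print("P(>=1 failure) =", round(1.0 - poisson_pmf(0, mu), 3))
```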

The notion of a conditional probability can be extended to more general events than the simple intervals discussed above, as follows. Let ξ1, ..., ξn, ξn+1, ..., ξn+m denote any n + m random variables, and let An and Bm denote arbitrary sets of points in n- and m-dimensional space, respectively. We define the conditional probability of the event [ξ1, ..., ξn] in An given that the event [ξn+1, ..., ξn+m] is in Bm ... [Pg.150]
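In standard notation (our rendering), the definition presumably reads:

```latex
P\!\left([\xi_1,\dots,\xi_n] \in A_n \,\middle|\, [\xi_{n+1},\dots,\xi_{n+m}] \in B_m\right)
\;=\;
\frac{P\!\left([\xi_1,\dots,\xi_n] \in A_n,\; [\xi_{n+1},\dots,\xi_{n+m}] \in B_m\right)}
     {P\!\left([\xi_{n+1},\dots,\xi_{n+m}] \in B_m\right)}.
```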

