Big Chemical Encyclopedia

Bayes rule

One important rule that will be used often in the book is Bayes rule. This is a very useful way of manipulating probabilities. [Pg.545]


The first rule states that the probability of A plus the probability of not-A (Ā) is equal to 1. The second rule states that the probability of the occurrence of two events is the probability of one of the events multiplied by the conditional probability of the other event given the occurrence of the first. We can drop the notation of conditioning on I as long as it is understood implicitly that all probabilities are conditional on the information we possess about the system. Dropping the I, we have the usual expression of Bayes rule. [Pg.315]

For Bayesian inference, we are seeking the probability of a hypothesis H given the data D. This probability is denoted p(H|D). It is very likely that we will want to compare different hypotheses, so we may want to compare p(H1|D) with p(H2|D). Because it is difficult to write down an expression for p(H|D) directly, we use Bayes rule to invert p(D|H) to obtain an expression for p(H|D) ... [Pg.315]
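The inversion above can be sketched numerically. The hypotheses, likelihoods, and priors below are illustrative assumptions, not values from the text:

```python
# Minimal sketch of Bayes rule for comparing two hypotheses H1 and H2
# given data D: p(H|D) = p(D|H) p(H) / p(D), with p(D) obtained by
# summing over the hypotheses (total probability).

def posterior(likelihoods, priors):
    """Return p(H_i | D) for each hypothesis via Bayes rule."""
    evidence = sum(l * p for l, p in zip(likelihoods, priors))  # p(D)
    return [l * p / evidence for l, p in zip(likelihoods, priors)]

# Assumed values: p(D|H1) = 0.8, p(D|H2) = 0.3; equal priors of 0.5.
post = posterior([0.8, 0.3], [0.5, 0.5])
print(post)  # H1 is favoured, p(H1|D) = 8/11 ~ 0.727
```

The normalization by the evidence p(D) is what lets the two posteriors be compared directly on the same scale.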

There is some confusion in using Bayes rule on what are sometimes called explanatory variables. As an example, we can try to use Bayesian statistics to derive the probabilities of each secondary structure type for each amino acid type, that is p(ξ|r), where ξ is α, β, or γ (for coil) secondary structures and r is one of the 20 amino acids. It is tempting to write p(ξ|r) = p(r|ξ)p(ξ)/p(r) using Bayes rule. This expression is, of course, correct and can be used on PDB data to relate these probabilities. But this is not Bayesian statistics, which relates parameters that represent underlying properties to (limited) data that are manifestations of those parameters in some way. In this case, the parameters we are after are θξ(r) = p(ξ|r). The data from the PDB are in the form of counts yξ(r), the number of amino acids of type r in the PDB that have secondary structure ξ. There are 60 such numbers (20 amino acid types × 3 secondary structure types). We then have for each amino acid type a Bayesian expression for the posterior distribution for the values of θξ(r): [Pg.329]
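The parameter estimate described above can be sketched as a conjugate Dirichlet update on the three proportions for one amino acid type. The counts and the uniform prior below are illustrative assumptions, not values taken from the PDB:

```python
# For one amino acid type r, the counts y_xi(r) of alpha/beta/coil
# observations update a Dirichlet prior on theta_xi(r) = p(xi | r).
# With a Dirichlet(1,1,1) prior, the posterior is Dirichlet(y + 1),
# whose mean gives a point estimate of each theta_xi(r).

counts = {"alpha": 40, "beta": 25, "coil": 35}  # hypothetical y_xi(r)
prior = {s: 1.0 for s in counts}                # uniform Dirichlet(1,1,1)

total = sum(counts.values()) + sum(prior.values())
theta = {s: (counts[s] + prior[s]) / total for s in counts}
print(theta)  # posterior means; the three values sum to 1
```

This is the sense in which the 60 PDB counts are data and the θξ(r) are parameters: the counts enter only through the posterior, not through a direct application of Bayes rule between observed variables.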

Before setting up priors and likelihoods, we can factor the joint probability of the core structure choice and the alignment t by using Bayes rule ... [Pg.336]

Unfortunately, some authors describing their work as Bayesian inference or Bayesian statistics have not, in fact, used Bayesian statistics; rather, they used Bayes rule to calculate various probabilities of one observed variable conditional upon another. Their work turns out to comprise derivations of informative prior distributions, usually of the form p(θ1, θ2, ..., θn) = ..., which is interpreted as the posterior distribution... [Pg.338]

For example, Stolorz et al. [88] derived a Bayesian formalism for secondary structure prediction, although their method does not use Bayesian statistics. They attempt to find an expression for p(ξ|seq) = p(seq|ξ)p(ξ)/p(seq), where ξ is the secondary structure at the middle position of seq, a sequence window of prescribed length. As described earlier in Section II, this is a use of Bayes rule but is not Bayesian statistics, which depends on the equation p(θ|y) = p(y|θ)p(θ)/p(y), where y is data that connect the parameters in some way to observables. The data are not sequences alone but the combination of sequence and secondary structure that can be culled from the PDB. The parameters we are after are the probabilities of each secondary structure type as a function of the sequence in the sequence window, based on PDB data. The sequence can be thought of as an explanatory variable. That is, we are looking for... [Pg.338]

Thompson and Goldstein [89] improve on the calculations of Stolorz et al. by including the secondary structure of the entire window rather than just the central position, and then summing over all secondary structure segment types with a particular secondary structure at the central position to achieve a prediction for that position. They also use information from multiple sequence alignments of proteins to improve secondary structure prediction. They use Bayes rule to formulate expressions for the probability of secondary structures, given a multiple alignment. Their work describes what is essentially a sophisticated prior distribution for θξ(X), where X is a matrix of residue counts in a multiple alignment in a window about a central position. The PDB data are used to form this prior, which is used as the predictive distribution; no posterior is calculated with posterior ∝ prior × likelihood. [Pg.339]

A very useful relationship, often called Bayes rule, exists between the two conditional probabilities P(ξn ∈ An | ξm ∈ Bm) and P(ξm ∈ Bm | ξn ∈ An). The definition of conditional probability implies the relationships,... [Pg.150]

Bayes rule, Eq. (3-164), finds many applications in problems of statistical inference86 and signal detection theory,36 where the conditional probability on the right can be calculated directly in terms of the physical parameters of the problem, but where the quantity of real interest is the conditional probability on the left. [Pg.151]

Finally, we note that for conditional densities Bayes rule ... [Pg.153]

Use an acceptance-rejection procedure (e.g., Bayes rule) to evaluate the consistency of predictions with the site-specific model output data [Pg.59]

Conjugate pair In Bayesian estimation, when the observation of new data changes only the parameters of the prior distribution and not its statistical shape (i.e., whether it is normal, beta, etc.), the prior distribution on the estimated parameter and the distribution of the quantity from which observations are drawn are said to form a conjugate pair. When the likelihood and prior form a conjugate pair, the computational burden of Bayes rule is greatly reduced. [Pg.178]
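The beta-binomial pair is the standard example of the definition above; the prior parameters and trial counts below are illustrative:

```python
# Beta-binomial conjugate pair: observing k successes in n Bernoulli
# trials updates a Beta(a, b) prior to Beta(a + k, b + n - k) -- only
# the parameters change, never the distribution's shape, which is what
# makes the update computationally trivial.

def update_beta(a, b, k, n):
    """Conjugate update of a Beta prior after n trials with k successes."""
    return a + k, b + (n - k)

a, b = update_beta(1, 1, k=7, n=10)  # flat Beta(1,1) prior, 7/10 successes
print(a, b)                          # posterior is Beta(8, 4)
print(a / (a + b))                   # posterior mean = 2/3
```

Without conjugacy, the same update would require numerical integration over the parameter; here it is two additions.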

Robust Bayes A school of thought among Bayesian analysts in which epistemic uncertainty about prior distributions or likelihood functions is quantified and projected through Bayes rule to obtain a class of posterior distributions. [Pg.182]

Note that p(xi|rj) denotes the posterior probability calculated using Bayes rule, and the above equations clearly convey the centroid aspect of the solution. [Pg.77]

This is often termed the Bayes rule for minimum error. An associated concept, the likelihood ratio lr, used to segment an observed profile into two classes (Fig. 8.4), is defined as follows ... [Pg.192]

The Bayes rule simply states that a sample or object should be assigned to that group having the highest conditional probability and application of this rule to parametric classification schemes provides optimum discriminating capability. An explanation of the term conditional probability is perhaps in order here. [Pg.127]
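The assignment rule above can be sketched for two groups with one-dimensional Gaussian class densities; the means, standard deviations, and priors are illustrative assumptions:

```python
# Bayes rule for classification: assign a sample x to the group with
# the highest posterior, p(G|x) proportional to p(x|G) p(G). The shared
# denominator p(x) never affects the argmax, so it is omitted.

import math

def gauss(x, mu, sigma):
    """One-dimensional Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

groups = {"A": (0.0, 1.0, 0.5), "B": (2.0, 1.0, 0.5)}  # mean, sd, prior

def classify(x):
    scores = {g: gauss(x, mu, sd) * prior for g, (mu, sd, prior) in groups.items()}
    return max(scores, key=scores.get)

print(classify(0.3))  # A
print(classify(1.8))  # B
```

With equal priors the rule reduces to picking the group with the larger likelihood; unequal priors shift the decision boundary toward the rarer group.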

Expressed mathematically, therefore, and applying Bayes rule, a sample is assigned to Group A, G(A), on the condition that ... [Pg.129]

As an approximation to the Bayes rule, the linear discriminant function provides the basis for the most common of the statistical classification schemes. [Pg.142]

First, Bayes rule is applied which yields... [Pg.34]

To characterize the relation between events Sk and Sj, the conditional probability of Sk occurring under the condition that Sj is known to have occurred is introduced. This quantity is defined [5, p.25] by the Bayes rule, which reads ... [Pg.13]
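The definition invoked above is just the ratio of a joint to a marginal probability; the numbers below are illustrative:

```python
# Conditional probability: P(Sk | Sj) = P(Sk and Sj) / P(Sj).
# Illustrative joint and marginal probabilities.

p_joint = 0.12   # assumed P(Sk and Sj)
p_sj = 0.30      # assumed P(Sj)

p_sk_given_sj = p_joint / p_sj
print(round(p_sk_given_sj, 3))  # 0.4
```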

The classification rule in conjunction with Bayes rule is used [126, 36] so that the posterior probability (Eq. 3.38), assuming P(πk|x) = 1, indicates that the class membership of the observation x0 is πk. This assumption may lead to a situation where the observation is classified wrongly as one of the fault cases used to develop the FDA discriminant when an unknown fault occurs. Chiang et al. [36] proposed several screening procedures to detect unknown faults. One of them involves an FDA-related T²-statistic computed before applying Eq. 3.59, as [Pg.57]

Bayesian statistics Bayesian inference is a variant of statistics where prior information is allowed to influence the posterior probability of an event via application of Bayes rule. Complex problems of cheminformatics and bioinformatics often benefit from Bayesian models. A schism divides statisticians from Bayesians. [Pg.748]

As an approximation to the Bayes rule, the linear discriminant function provides the basis for the most common of the statistical classification schemes, but there has been much work devoted to the development of simpler linear classification rules. One such method, which has featured extensively in spectroscopic pattern recognition studies, is the perceptron algorithm. [Pg.148]
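The perceptron algorithm named above can be sketched on toy two-dimensional data; the samples, labels, and learning rate are illustrative assumptions:

```python
# Perceptron algorithm: a linear classifier trained by nudging the
# weight vector toward each misclassified sample until all training
# points fall on the correct side of the hyperplane (possible only
# when the classes are linearly separable, as in this toy data).

def perceptron(samples, labels, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):                 # y in {-1, +1}
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:  # misclassified
                w = [w[0] + lr * y * x[0], w[1] + lr * y * x[1]]
                b += lr * y
    return w, b

X = [(2, 1), (3, 2), (-1, -1), (-2, -2)]
y = [1, 1, -1, -1]
w, b = perceptron(X, y)
print(all((w[0] * u + w[1] * v + b) * t > 0 for (u, v), t in zip(X, y)))  # True
```

Unlike the Bayes rule, the perceptron makes no use of class densities or priors; it is this simplicity that made it attractive in early pattern recognition work.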

Bayesian inference is a well-defined procedure for inferring the probability P(Hi) that a hypothesis Hi is true, from evidence Ej linking the hypothesis to other observed states of the world. The approach makes use of Bayes rule to combine the various sources of evidence (Savage 1954). Bayes rule... [Pg.2184]

As discussed further in Section 4.1, people often fail to combine evidence consistently with the above predictions of Bayes rule. A common finding is that people fail to adequately consider the base rate of the hypothesis. In the above example, this would correspond to focusing on p(positive... [Pg.2184]
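The base-rate effect described above can be made concrete with a short calculation; all probabilities below are illustrative assumptions:

```python
# Base-rate effect: even a fairly accurate test yields a modest
# posterior when the hypothesis has a low base rate, because the
# false positives from the large "not H" population dominate.

p_h = 0.01          # assumed base rate P(H)
p_pos_h = 0.95      # assumed P(positive | H), the hit rate
p_pos_not_h = 0.05  # assumed P(positive | not H), false-positive rate

p_pos = p_pos_h * p_h + p_pos_not_h * (1 - p_h)  # total probability
p_h_pos = p_pos_h * p_h / p_pos                  # Bayes rule
print(round(p_h_pos, 3))  # ~0.161, far below the 0.95 hit rate
```

Focusing on the 0.95 hit rate while ignoring the 0.01 base rate is exactly the error the text describes.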

When the evidence Ej consists of multiple states E1, ..., En, each of which is conditionally independent, Bayes rule can be expanded into the expression ... [Pg.2185]
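Under the conditional-independence assumption the likelihoods simply multiply; the values below are illustrative:

```python
# Bayes rule with conditionally independent evidence E1..En:
# p(H | E1..En) is proportional to p(H) * product of p(Ei | H),
# normalized against the corresponding product for not-H.

def posterior_naive(prior, lik_h, lik_not_h):
    """Combine independent evidence likelihoods for H and not-H."""
    num = prior
    den = 1 - prior
    for lh, ln in zip(lik_h, lik_not_h):
        num *= lh
        den *= ln
    return num / (num + den)

# Three independent pieces of evidence, each moderately favouring H.
p = posterior_naive(0.5, [0.7, 0.8, 0.6], [0.3, 0.2, 0.4])
print(round(p, 3))  # several weak pieces of evidence compound strongly
```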

Calculating P(Ej) can be somewhat difficult, because each piece of evidence must be dependent on the hypothesis (or else it would not be related to it). The odds form of Bayes rule provides a convenient way of weighing the evidence for and against a hypothesis that doesn't require P(Ej) to be calculated. This results in the expression ... [Pg.2185]
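The odds form can be sketched in a few lines; the likelihoods and prior below are illustrative assumptions:

```python
# Odds form of Bayes rule: posterior odds = prior odds * likelihood
# ratio. P(Ej) cancels out of the ratio, so it never needs computing.

prior_odds = 0.5 / 0.5  # assumed P(H) / P(not H)
lr = 0.9 / 0.3          # assumed P(E | H) / P(E | not H)

posterior_odds = prior_odds * lr
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, posterior_prob)  # 3.0 0.75
```

Converting odds back to a probability at the end is the only step where normalization reappears.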


See other pages where Bayes rule is mentioned: [Pg.330]    [Pg.318]    [Pg.769]    [Pg.137]    [Pg.83]    [Pg.10]    [Pg.45]    [Pg.46]    [Pg.130]    [Pg.196]    [Pg.284]    [Pg.377]    [Pg.509]    [Pg.140]    [Pg.209]    [Pg.134]    [Pg.145]    [Pg.162]    [Pg.424]    [Pg.2182]   



