Distribution prior

The dashed line gives the posterior distribution of θ given the data, the solid line gives the prior distribution, and the dotted line gives the likelihood of the data. The figure shows that the posterior is a compromise between the prior and the likelihood. We can integrate the posterior distribution to decide the amount of the bet from... [Pg.317]

Setting up the probability model for the data and parameters of the system under study. This entails defining prior distributions for all relevant parameters and a likelihood function for the data given the parameters. [Pg.322]

An informative conjugate prior distribution can be formulated in terms of a beta distribution ... [Pg.323]
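
As a minimal illustration of this conjugacy (the hyperparameters and data below are invented, not taken from the text), a beta prior updated with binomial data:

```python
# Sketch of a beta-binomial conjugate update; the Beta(2, 8) prior and the
# 7-successes-in-20-trials data are assumed values for illustration only.
from scipy import stats

alpha_prior, beta_prior = 2.0, 8.0      # informative beta prior (assumed)
successes, trials = 7, 20               # hypothetical binomial data

# Conjugacy: Beta(a, b) prior + k successes in n trials -> Beta(a + k, b + n - k)
alpha_post = alpha_prior + successes
beta_post = beta_prior + (trials - successes)

posterior = stats.beta(alpha_post, beta_post)
print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```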

Again, the use of the conjugate prior distribution results in an analytical form for the posterior distribution and therefore also simple expressions for the expectation values of the θ_i, their variances, covariances, and modes ... [Pg.324]

A noninformative prior distribution could be formed by setting each α to 1. [Pg.324]

The normal model can take a variety of forms depending on the choice of noninformative or informative prior distributions and on whether the variance is assumed to be a constant or is given its own prior distribution. And of course, the data could represent a single variable or could be multidimensional. Rather than describing each of the possible combinations, I give only the univariate normal case with informative priors on both the mean and variance. In this case, the likelihood for data y given the values of the parameters that comprise θ, namely μ (the mean) and σ² (the variance), is given by the familiar exponential... [Pg.325]

This expression is valid for a single observation y. For multiple observations, we derive p(y|θ) from the fact that p(y|μ, σ²) = ∏ᵢ p(yᵢ|μ, σ²). The result is that the likelihood is also normal, with the average value of y, ȳ, substituted for y and σ²/n substituted for σ² in Eq. (14). The conjugate prior distribution for Eq. (14) is... [Pg.325]
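
A sketch of the corresponding conjugate update for the mean when the variance is treated as known (a simplification of the full mean-and-variance model described above; all numerical values are assumed):

```python
# Sketch: conjugate normal update of the mean with the data variance treated as
# known, using ybar and sigma^2/n as in the text; all numbers are assumed.
import numpy as np

y = np.array([4.8, 5.1, 5.3, 4.9, 5.0])   # hypothetical observations
sigma2 = 0.04                             # known data variance (assumed)
mu0, tau0_sq = 4.5, 0.25                  # prior mean and prior variance (assumed)

n = len(y)
ybar = y.mean()

# The likelihood for the mean is normal with mean ybar and variance sigma2 / n,
# so posterior precision = prior precision + data precision.
post_var = 1.0 / (1.0 / tau0_sq + n / sigma2)
post_mean = post_var * (mu0 / tau0_sq + n * ybar / sigma2)
print("posterior mean:", post_mean, "posterior variance:", post_var)
```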

Because we are dealing with count data and proportions for the values qᵢ, the appropriate conjugate prior distribution for the qᵢ is the Dirichlet distribution, ... [Pg.328]
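
A minimal sketch of the Dirichlet-multinomial conjugate update for such count data (prior counts and observations invented):

```python
# Sketch of a Dirichlet-multinomial conjugate update; prior counts and data invented.
import numpy as np

alpha_prior = np.array([1.0, 1.0, 1.0, 1.0])   # Dirichlet prior counts (noninformative)
counts = np.array([12, 3, 0, 5])               # hypothetical observed category counts

alpha_post = alpha_prior + counts              # posterior is Dirichlet(alpha + counts)
post_mean = alpha_post / alpha_post.sum()      # E[q_i | data]
print("posterior mean proportions:", post_mean)
```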

Step 1. From a histogram of the data, partition the data into N components, each roughly corresponding to a mode of the data distribution. This defines the c_j. Set the parameters for prior distributions on the θ parameters that are conjugate to the likelihoods. For the normal distribution the priors are defined in Eq. (15), so the full prior for the N components is... [Pg.328]

A prior distribution for sequence profiles can be derived from mixtures of Dirichlet distributions [16,51-54]. The idea is simple: each position in a multiple alignment represents one of a limited number of possible distributions that reflect the important physical forces that determine protein structure and function. In certain core positions, we expect to get a distribution restricted to Val, Ile, Met, and Leu. Other core positions may include these amino acids plus the large hydrophobic aromatic amino acids Phe and Trp. There will also be positions that are completely conserved, including catalytic residues (often Lys, Glu, Asp, Arg, Ser, and other polar amino acids) and Gly and Pro residues that are important in achieving certain backbone conformations in coil regions. Cys residues that form disulfide bonds or coordinate metal ions are also usually well conserved. [Pg.330]

A prior distribution of the probabilities of the 20 amino acids at a particular position in a multiple alignment can be represented by a Dirichlet distribution, described in Section II.E. That is, it is an expression of the values of the probabilities θ_r of each residue type r, where r ranges from 1 to 20 and Σ_r θ_r = 1 ... [Pg.330]

α₀ = Σ_r α_r represents the total number of counts that the prior distribution represents, and the α_r are the counts for each type of amino acid (not necessarily integers). Because different distributions will occur in multiple sequence alignments, the prior distribution for any position should be represented as a mixture of N Dirichlet distributions ... [Pg.331]
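
A rough sketch of how such a Dirichlet mixture combines with observed residue counts (a toy four-letter alphabet and invented component weights and hyperparameters, not fitted profile components):

```python
# Sketch: posterior residue probabilities under a mixture-of-Dirichlets prior.
# The two components, their weights, and the counts are invented placeholders.
import numpy as np
from scipy.special import gammaln

def log_dirichlet_multinomial(counts, alpha):
    """Log marginal likelihood of the counts under a Dirichlet(alpha) prior."""
    return (gammaln(alpha.sum()) - gammaln(alpha.sum() + counts.sum())
            + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

weights = np.array([0.6, 0.4])               # prior mixture weights (assumed)
alphas = np.array([[4.0, 3.0, 0.2, 0.2],     # "hydrophobic-like" component (assumed)
                   [0.2, 0.2, 3.0, 3.0]])    # "polar-like" component (assumed)
counts = np.array([6.0, 2.0, 0.0, 1.0])      # observed counts at one alignment position

# Posterior weight of each component given the counts.
log_w = np.log(weights) + np.array([log_dirichlet_multinomial(counts, a) for a in alphas])
w_post = np.exp(log_w - log_w.max())
w_post /= w_post.sum()

# Posterior mean residue probabilities: mixture of the per-component posterior means.
theta = sum(w * (a + counts) / (a.sum() + counts.sum()) for w, a in zip(w_post, alphas))
print("component weights:", w_post, "posterior residue probabilities:", theta)
```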

The Bayesian alternative to fixed parameters is to define a probability distribution for the parameters and simulate the joint posterior distribution of the sequence alignment and the parameters with a suitable prior distribution. How can varying the similarity matrix... [Pg.332]

Unfortunately, some authors describing their work as Bayesian inference or Bayesian statistics have not, in fact, used Bayesian statistics; rather, they used Bayes rule to calculate various probabilities of one observed variable conditional upon another. Their work turns out to comprise derivations of informative prior distributions, usually of the form p(θ₁, θ₂, ..., θ_n), which is interpreted as the posterior distribution... [Pg.338]

Thompson and Goldstein [89] improve on the calculations of Stolorz et al. by including the secondary structure of the entire window rather than just a central position and then summing over all secondary structure segment types with a particular secondary structure at the central position to achieve a prediction for this position. They also use information from multiple sequence alignments of proteins to improve secondary structure prediction. They use Bayes rule to formulate expressions for the probability of secondary structures, given a multiple alignment. Their work describes what is essentially a sophisticated prior distribution for θ_i(X), where X is a matrix of residue counts in a multiple alignment in a window about a central position. The PDB data are used to form this prior, which is used as the predictive distribution. No posterior is calculated with posterior = prior × likelihood. [Pg.339]

A similar formalism is used by Thompson and Goldstein [90] to predict residue accessibilities. What they derive would be a very useful prior distribution based on multiplying out independent probabilities to which data could be added to form a Bayesian posterior distribution. The work of Arnold et al. [87] is also not Bayesian statistics but rather the calculation of conditional distributions based on the simple counting argument that p(a|r) = p(a, r)/p(r), where a is some property of interest (secondary structure, accessibility) and r is the amino acid type or some property of the amino acid type (hydrophobicity) or of an amino acid segment (helical moment, etc.). [Pg.339]
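
A minimal numerical sketch of that counting argument (the count table is invented):

```python
# Sketch of the counting argument p(a | r) = p(a, r) / p(r), with invented counts.
import numpy as np

# Rows: property a (e.g. helix, sheet, coil); columns: residue type r (toy alphabet).
joint_counts = np.array([[30.0, 10.0,  5.0],
                         [ 8.0, 20.0,  7.0],
                         [12.0, 15.0, 25.0]])

p_joint = joint_counts / joint_counts.sum()   # p(a, r)
p_r = p_joint.sum(axis=0)                     # marginal p(r)
p_a_given_r = p_joint / p_r                   # p(a | r) = p(a, r) / p(r)
print(p_a_given_r)
```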

The variance for any set of data can be calculated without reference to the prior distribution, as discussed in Appendix I. It follows that the variance equation is also independent of a prior distribution. Here it is assumed that in all cases the output function is adequately represented by the Normal distribution when the random variables involved are all represented by the Normal distribution. The assumption that the output function is robustly Normal in all cases does not strictly apply, particularly when variables appear in certain combinations or when the Lognormal distribution is used. See Haugen (1980), Shigley and Mischke (1996) and Siddall (1983) for guidance on using the variance equation. [Pg.152]

There are two special cases for which equations 2.6-7 and 2.6-8 are easily solved to fold a prior distribution with the update distribution to obtain a posterior distribution with the same form as the prior distribution. These distributions are the Bayes conjugates shown in Table 2.6-1. [Pg.51]

The Bayes conjugate is the gamma prior distribution (equation 2.6-11). When equations 2.6-9 and... [Pg.52]
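
A short sketch of the gamma-Poisson conjugate pair for a failure rate (the hyperparameters and exposure time below are assumed, not taken from the cited equations):

```python
# Sketch: gamma prior conjugate to a Poisson failure-count likelihood.
# Shape, rate, and exposure time are assumed values for illustration.
from scipy import stats

alpha_prior, beta_prior = 1.5, 2.0e5     # gamma shape and rate (per hour), assumed
failures, exposure_hours = 0, 10 * 8760  # e.g. no failures in 10 years of operation

# Conjugacy: Gamma(a, b) prior + k failures in time T -> Gamma(a + k, b + T) posterior
alpha_post = alpha_prior + failures
beta_post = beta_prior + exposure_hours

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean failure rate (per hour):", posterior.mean())
```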

Mosleh, Kazarians, and Gekler obtained a Bayesian estimate of the failure rate, Z, of a coolant recycle pump in the hazard/risk study of a chemical plant. The estimate was based on evidence of no failures in 10 years of operation. Nuclear industry experience with pumps of similar types was used to establish the prior distribution of Z. This experience indicated that the 5th and 95th percentiles of the failure rate distribution developed for this category were 2.0 × 10⁻⁶ per hour (about one failure per 57 years of operation) and 98.3 × 10⁻⁶ per hour (about one failure per year). Extensive experience in other industries suggested the use of a lognormal distribution with the 5th and 95th percentile values as the prior distribution of Z, the failure rate of the coolant recycle pump. [Pg.614]

Substituting the values of μ and σ for the parameters in the lognormal pdf in Eq. (20.5.13) produces the following pdf of the prior distribution of Z... [Pg.615]

Comparison of the 5th and 95th percentiles of the posterior distribution of Z with the 5th and 95th percentiles of the prior distribution of Z indicates that the posterior pdf lies to the left of the prior pdf. Therefore, the posterior pdf assigns higher probability to intervals in the lower part of the range of Z than the prior pdf does. This reflects the influence of the observed occurrence of no failures in 10 years. [Pg.616]
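
A rough numerical sketch of this example (grid integration rather than the textbook's treatment; only the two percentile values and the 10-year exposure are taken from the excerpt):

```python
# Numerical sketch of the recycle-pump update: lognormal prior fitted to the stated
# 5th/95th percentiles, multiplied by a Poisson likelihood of zero failures in 10 years.
import numpy as np
from scipy import stats

z05, z95 = 2.0e-6, 98.3e-6                 # 5th/95th percentile failure rates (per hour)
zscore = stats.norm.ppf(0.95)              # ~1.645
sigma = (np.log(z95) - np.log(z05)) / (2 * zscore)
mu = 0.5 * (np.log(z95) + np.log(z05))     # lognormal parameters from the two percentiles

T = 10 * 8760                              # exposure: 10 years in hours
z = np.logspace(-8, -2, 4000)              # grid of candidate failure rates

prior = stats.lognorm.pdf(z, s=sigma, scale=np.exp(mu))
likelihood = np.exp(-z * T)                # Poisson probability of observing zero failures
posterior = prior * likelihood

cdf = np.cumsum(posterior * np.gradient(z))
cdf /= cdf[-1]                             # normalize the numerical posterior
print("posterior 5th/95th percentiles (per hour):",
      z[np.searchsorted(cdf, 0.05)], z[np.searchsorted(cdf, 0.95)])
```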

The number of defective items in a sample of size n produced by a certain machine has a binomial distribution with parameters n and p, where p is the probability that an item produced is defective. For the case of 2 observed defectives in a sample of size 20, obtain the Bayesian estimate of p if the prior distribution of p is specified by the pdf... [Pg.636]

Suppose that the prior distribution of θ has the pdf... [Pg.637]
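
Since the excerpt does not reproduce the specific prior pdf, here is a sketch under the assumption of a conjugate beta prior on p for the defective-items problem above:

```python
# Sketch of the defective-items estimate assuming a Beta(1, 1) (uniform) prior on p;
# the excerpt's actual prior pdf is not reproduced, so this choice is an assumption.
from scipy import stats

a, b = 1.0, 1.0            # assumed beta prior hyperparameters
defectives, n = 2, 20      # observed data from the example

posterior = stats.beta(a + defectives, b + n - defectives)
print("posterior (Bayes) estimate of p:", posterior.mean())   # (a + 2) / (a + b + 20)
```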

It is worthwhile noting that Box and Draper (1965) arrived at the same determinant criterion following a Bayesian argument and assuming that the error covariance matrix is unknown and that the prior distribution of the parameters is noninformative. [Pg.19]

If the measurement noise is normally distributed and independent across the data set, and if we use a flat prior distribution, problem (9.60) becomes a nonlinear least squares problem and can be solved using the techniques discussed in Section 9.3. [Pg.198]
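
A small sketch of why the flat-prior case reduces to least squares: with independent normal noise, the negative log posterior is (up to constants) a sum of squared residuals. The exponential model and data below are invented for illustration:

```python
# Sketch: with a flat prior and i.i.d. normal noise, the MAP estimate is the
# nonlinear least-squares solution; the exponential model and data are invented.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 20)
rng = np.random.default_rng(0)
y = 3.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.05, t.size)   # synthetic observations

def residuals(theta):
    # The negative log posterior is, up to constants, 0.5 * sum(residuals ** 2).
    a, k = theta
    return a * np.exp(-k * t) - y

fit = least_squares(residuals, x0=[1.0, 1.0])
print("MAP / least-squares estimate:", fit.x)
```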

To relax these assumptions, the maximum likelihood rectification (MLR) technique was proposed by Johnston and Kramer. This approach incorporates the prior distribution of the plant states, P(x), into the data reconciliation process to obtain the most likely rectified states given the measurements. Mathematically the problem can be stated (Johnston and Kramer, 1995) as... [Pg.219]

One problem encountered in solving Eq. (11.12) is the modeling of the prior distribution P(x). It is assumed that this distribution is not known in advance and must be calculated from historical data. Several methods for estimating the density function of a set of variables are presented in the literature. Among these methods are histograms, orthogonal estimators, kernel estimators, and elliptical basis function (EBF) estimators (see Silverman, 1986; Scott, 1992; Johnston and Kramer, 1994; Chen et al., 1996). A wavelet-based density estimation technique has been developed by Safavi et al. (1997) as an alternative and superior method to other common density estimation techniques. Johnston and Kramer (1998) have proposed the recursive state... [Pg.221]
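
As one illustration of the density-estimation options listed above, a sketch of a kernel (Gaussian KDE) estimate of P(x) from historical data (the data are invented):

```python
# Sketch: estimating the prior P(x) from historical plant states with a Gaussian
# kernel density estimator (one of the options mentioned above); data are invented.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
historical_states = rng.normal(loc=[50.0, 120.0], scale=[2.0, 5.0], size=(500, 2))

kde = gaussian_kde(historical_states.T)    # gaussian_kde expects shape (n_dims, n_samples)
print("estimated prior density at a candidate state:", kde.evaluate([[51.0], [118.0]]))
```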


Binomial distribution conjugate prior

Exponential distribution conjugate prior

Gamma distribution conjugate prior

Normal distribution conjugate prior

Prior

Prior Distributions - Revisited

Prior distribution internal states

Prior distribution thermal-like

Prior distribution vibrational energy disposal

Prior distributions Information-theoretic analysis

Prior distributions Surprisal

Product energy distribution prior

Rotational product distribution prior
