Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Bayesian posterior distribution

A similar formalism is used by Thompson and Goldstein [90] to predict residue accessibilities. What they derive would be a very useful prior distribution, based on multiplying out independent probabilities, to which data could be added to form a Bayesian posterior distribution. The work of Arnold et al. [87] is also not Bayesian statistics but rather the calculation of conditional distributions based on the simple counting argument that p(a | r) = p(a, r)/p(r), where a is some property of interest (secondary structure, accessibility) and r is the amino acid type or some property of the amino acid type (hydrophobicity) or of an amino acid segment (helical moment, etc.). [Pg.339]

Another aspect in which Bayesian methods perform better than frequentist methods is in the treatment of nuisance parameters. Quite often there will be more than one parameter in the model but only one of the parameters is of interest. The other parameter is a nuisance parameter. If the parameter of interest is θ and the nuisance parameter is φ, then Bayesian inference on θ alone can be achieved by integrating the posterior distribution over φ. The marginal probability of θ is therefore... [Pg.322]
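The excerpt breaks off before the integral; the standard marginalization it is leading to (with y denoting the data, a notational assumption here) is

$$ p(\theta \mid y) = \int p(\theta, \phi \mid y)\, d\phi $$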

In the next subsection, I describe how the basic elements of Bayesian analysis are formulated mathematically. I also describe the methods for deriving posterior distributions from the model, either in terms of conjugate prior-likelihood forms or in terms of simulation using Markov chain Monte Carlo (MCMC) methods. The utility of Bayesian methods has expanded greatly in recent years because of the development of MCMC methods and fast computers. I also describe the basics of hierarchical and mixture models. [Pg.322]
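As a minimal sketch of the MCMC idea mentioned here, the following random-walk Metropolis sampler draws from the posterior of a normal mean; the model, synthetic data, prior, and tuning constants are all illustrative assumptions, not taken from the text.

```python
import numpy as np

# Random-walk Metropolis: sample the posterior of a normal mean theta
# (known sigma = 1) under a N(0, 10^2) prior, given observed data y.
rng = np.random.default_rng(0)
y = rng.normal(1.5, 1.0, size=20)  # synthetic data (assumed example)

def log_posterior(theta):
    log_prior = -0.5 * (theta / 10.0) ** 2     # N(0, 100) prior
    log_lik = -0.5 * np.sum((y - theta) ** 2)  # N(theta, 1) likelihood
    return log_prior + log_lik

theta, samples = 0.0, []
for _ in range(10_000):
    proposal = theta + rng.normal(0.0, 0.5)    # symmetric proposal
    # Accept with probability min(1, posterior ratio):
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

print(np.mean(samples[2000:]))  # posterior mean estimate after burn-in
```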

There is some confusion in using Bayes rule on what are sometimes called explanatory variables. As an example, we can try to use Bayesian statistics to derive the probabilities of each secondary structure type for each amino acid type, that is p(μ | r), where μ is α, β, or γ (for coil) secondary structures and r is one of the 20 amino acids. It is tempting to write p(μ | r) = p(r | μ)p(μ)/p(r) using Bayes rule. This expression is, of course, correct and can be used on PDB data to relate these probabilities. But this is not Bayesian statistics, which relates parameters that represent underlying properties with (limited) data that are manifestations of those parameters in some way. In this case, the parameters we are after are θ_μ(r) = p(μ | r). The data from the PDB are in the form of counts for y_μ(r), the number of amino acids of type r in the PDB that have secondary structure μ. There are 60 such numbers (20 amino acid types × 3 secondary structure types). We then have for each amino acid type a Bayesian expression for the posterior distribution for the values of θ_μ(r): [Pg.329]
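As a minimal illustration of the distinction the author draws, the 60 parameters decompose into 20 independent three-component multinomials, and a conjugate Dirichlet prior gives each posterior in closed form. The counts and the uniform Dirichlet(1,1,1) prior below are hypothetical choices, not values from the text.

```python
import numpy as np

# Hypothetical counts y_mu(r) for one amino acid type r: number of residues
# observed in alpha, beta, and coil secondary structure (illustrative only).
counts = np.array([120, 45, 210])

# The Dirichlet is the conjugate prior for multinomial counts, so the
# posterior is Dirichlet(alpha_prior + counts).
alpha_prior = np.ones(3)           # Dirichlet(1,1,1): uniform over the simplex
alpha_post = alpha_prior + counts

# Posterior mean of theta_mu(r) = p(mu | r) for mu in (alpha, beta, coil):
print(alpha_post / alpha_post.sum())
```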

The Bayesian alternative to fixed parameters is to define a probability distribution for the parameters and simulate the joint posterior distribution of the sequence alignment and the parameters with a suitable prior distribution. How can varying the similarity matrix... [Pg.332]

Zhu et al. [15] and Liu and Lawrence [61] formalized this argument with a Bayesian analysis. They are seeking a joint posterior probability for an alignment A, a choice of distance matrix Θ, and a vector of gap parameters Λ, given the data, i.e., the sequences to be aligned: p(A, Θ, Λ | R1, R2). The Bayesian likelihood and prior for this posterior distribution is... [Pg.335]
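The expression is truncated; the structure it is leading to is presumably Bayes rule applied jointly, i.e., the posterior is proportional to a likelihood times a prior over the alignment and parameters (the exact factorization of the prior is not recoverable from the excerpt):

$$ p(A, \Theta, \Lambda \mid R_1, R_2) \propto p(R_1, R_2 \mid A, \Theta, \Lambda)\, p(A, \Theta, \Lambda) $$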

Unfortunately, some authors describing their work as Bayesian inference or Bayesian statistics have not, in fact, used Bayesian statistics; rather, they used Bayes rule to calculate various probabilities of one observed variable conditional upon another. Their work turns out to comprise derivations of informative prior distributions, usually of the form p(θ1, θ2, ..., θn | ...) = ..., which is interpreted as the posterior distribution... [Pg.338]

As an example of analysis of side-chain dihedral angles, the Bayesian analysis of methionine side-chain dihedrals is given in Table 3 for the r1 = ... rotamers. In cases where there are a large number of data—for example, the (3, 3, 3) rotamer—the data and posterior distributions are essentially identical. These are normal distributions with the averages and standard deviations given in the table. But in cases where there are few data... [Pg.341]

A number of issues arise in using the available data to estimate the rates of location-dependent fire occurrence. These include the possible reduction in the frequency of fires due to increased awareness. Apostolakis and Kazarians (1980) use the data of Table 5.2-1 and Bayesian analysis to obtain the results in Table 5.2-2 using conjugate priors (Section 2.6.2). Since the data of Table 5.2-1 are Poisson distributed, a gamma prior is used, with α and β being the parameters of the gamma prior as presented in Section 2.6.3.2. For example, in the cable-spreading room from Table 5.2-2, the values of α and β (0.182 and 0.96) yield a mean frequency of 0.21, while the posterior distribution α and β (2.182 and 302.26) yields a mean frequency of 0.0072. [Pg.198]
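A sketch of the conjugate gamma update behind these numbers, assuming the standard rate parametrization in which a Gamma(α, β) prior on a Poisson rate has mean α/β. Note that the prior mean computed this way (≈0.19) differs slightly from the quoted 0.21, which may reflect rounding or a different parametrization in the source; the posterior mean matches.

```python
# Gamma-Poisson conjugacy: with a Gamma(alpha, beta) prior on an event rate
# (beta a rate parameter), observing k events over exposure time t gives a
# Gamma(alpha + k, beta + t) posterior; the mean in either case is alpha / beta.
def gamma_mean(alpha: float, beta: float) -> float:
    return alpha / beta

# Parameter values quoted in the text for the cable-spreading room:
print(gamma_mean(0.182, 0.96))    # prior mean ~0.19 (text quotes 0.21)
print(gamma_mean(2.182, 302.26))  # posterior mean ~0.0072, matching the text
```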

The mean of the posterior distribution of Z is the Bayesian estimate of the failure rate per year. If E(Z | B) is the mean of the posterior distribution, then... [Pg.616]

The method for estimating parameters from Monte Carlo simulation, described in mathematical detail by Reilly and Duever (in preparation), uses a Bayesian approach to establish the posterior distribution for the parameters based on a Monte Carlo model. The numerical nature of the solution requires that the posterior distribution be handled in discretised form as an array in computer storage, using the method of Reilly (2). The stochastic nature of Monte Carlo methods implies that output responses are predicted by the model with some amount of uncertainty, for which the term "shimmer", as suggested by Andres (D.B. Chambers, SENES Consultants Limited, personal communication, 1985), has been adopted. The model for the uth of n experiments can be expressed by... [Pg.283]

Posterior distribution, in Bayesian inference, 26 1017 [Pg.750]

Confidence intervals using frequentist and Bayesian approaches have been compared for the normal distribution with mean μ and standard deviation σ (Aldenberg and Jaworska 2000). In particular, data on species sensitivity to a toxicant were fitted to a normal distribution to form the species sensitivity distribution (SSD). Fraction affected (FA) and the hazardous concentration (HC), i.e., percentiles and their confidence intervals, were analyzed. Lower and upper confidence limits were developed from t statistics to form 90% 2-sided classical confidence intervals. Bayesian treatment of the uncertainty of μ and σ of a presupposed normal distribution followed the approach of Box and Tiao (1973, chapter 2, section 2.4). Noninformative prior distributions for the parameters μ and σ specify the initial state of knowledge. These were constant c and 1/σ, respectively. Bayes theorem transforms the prior into the posterior distribution by the multiplication of the classic likelihood function of the data and the joint prior distribution of the parameters, in this case μ and σ (Figure 5.4). [Pg.83]

The Bayesian equivalent to the frequentist 90% confidence interval is delineated by the 5th and 95th percentiles of the posterior distribution. Bayesian confidence intervals for the SSD (Figures 5.4 to 5.5), the 5th percentile, i.e., HC5, and the fraction affected (Figures 5.4 to 5.6) were calculated from the posterior distribution. Thus, the uncertainties of both HC and FA are established in 1 consistent mathematical framework: FA estimates at the log10 HC lead to the intended protection percentage, i.e., FA(log10 HCp) = p, where p is a protection level. Further, the full distribution of HC and FA uncertainty can be very easily extracted from the posterior distribution for any level of protection and visualized (Figures 5.5 to 5.7). [Pg.83]
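A sketch of how such posterior-based HC5 credible intervals can be computed under the noninformative prior described above, p(μ, σ) ∝ 1/σ, for which the posterior has a well-known closed form. The log10 NOEC values below are hypothetical stand-ins, not the cadmium data of Figure 5.4.

```python
import numpy as np
from scipy import stats

# Under the prior p(mu, sigma) ∝ 1/sigma for normal data, the posterior is
#   sigma^2 | y ~ (n - 1) s^2 / chi2(n - 1),   mu | sigma, y ~ N(ybar, sigma^2 / n).
rng = np.random.default_rng(1)
y = np.array([0.3, 0.8, 1.1, 1.4, 1.9, 2.2, 2.6])  # hypothetical log10 NOECs
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

# Draw joint posterior samples of (mu, sigma):
sigma = np.sqrt((n - 1) * s2 / rng.chisquare(n - 1, size=50_000))
mu = rng.normal(ybar, sigma / np.sqrt(n))

# HC5 is the 5th percentile of the SSD for each posterior draw:
log_hc5 = mu + stats.norm.ppf(0.05) * sigma
lo, med, hi = np.percentile(log_hc5, [5, 50, 95])  # 90% credible interval
print(med, (lo, hi))
```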

FIGURE 5.4 Bayesian normal density spaghetti plot: random sample of 100 normal probability density functions (pdfs) drawn from the posterior distribution of μ and σ, given 7 cadmium NOEC toxicity data (dots) from Aldenberg and Jaworska (2000). [Pg.84]

Credible interval In a Bayesian analysis, the area under the posterior distribution. Represents the degree of belief, including all past and current information,... [Pg.178]

Robust Bayes A school of thought among Bayesian analysts in which epistemic uncertainty about prior distributions or likelihood functions is quantified and projected through Bayes rule to obtain a class of posterior distributions. [Pg.182]

The cornerstone of Bayesian methods is Bayes Theorem, which was first published in 1763 (Box and Tiao, 1973). Bayes Theorem provides a method for statistical inference in which a prior distribution, based upon subjective judgement, can be updated with empirical data, to create a posterior distribution that combines both judgement and data. As the sample size of the data becomes large, the posterior distribution will tend to converge to the same result that would be obtained with frequentist methods. In situations in which there are no relevant sample data, the analysis can be conducted based upon the prior distribution, without any updating. [Pg.57]
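A small beta-binomial sketch of the convergence claim: the posterior mean under a deliberately informative Beta(4, 2) prior approaches the frequentist estimate k/n as the sample grows. All numbers here are illustrative, not from the text.

```python
import numpy as np

# Conjugate beta-binomial update: prior Beta(a0, b0), data k successes in n
# trials, posterior Beta(a0 + k, b0 + n - k) with mean (a0 + k) / (a0 + b0 + n).
rng = np.random.default_rng(2)
a0, b0, p_true = 4.0, 2.0, 0.3

for n in (10, 100, 10_000):
    k = rng.binomial(n, p_true)
    post_mean = (a0 + k) / (a0 + b0 + n)
    # As n grows, the posterior mean tracks the frequentist estimate k/n:
    print(n, k / n, round(post_mean, 4))
```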

The Bayesian approach to subset selection is outlined in Sections 2 to 4. Section 2 gives the mathematical ingredients of the analysis: a probability model for the data, prior distributions for the parameters (β, σ, δ) of the model, and the resultant posterior distribution. [Pg.241]

This model is augmented by an unobserved indicator vector δ. Each element δj (j = 1, ..., h) of δ takes the value 0 or 1, indicating whether the corresponding βj belongs to an inactive or an active effect, respectively. Because the intercept β0 is always present in the model, it has no corresponding δ0 element. An inactive effect has βj close to 0 and an active effect has βj far from 0. The precise definition of active and inactive may vary according to the form of prior distribution specified. Under this formulation, the Bayesian subset selection problem becomes one of identifying a posterior distribution on δ. [Pg.242]
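The excerpt does not specify the prior; one common concrete choice for this indicator formulation (for example, George and McCulloch's stochastic search variable selection, named here as context rather than as the chapter's method) is a two-component normal mixture:

$$ \beta_j \mid \delta_j \;\sim\; (1 - \delta_j)\, N(0, \tau_j^2) \;+\; \delta_j\, N(0, c_j^2 \tau_j^2), \qquad c_j \gg 1, $$

so that δj = 0 shrinks βj toward 0 (inactive) and δj = 1 permits it to be far from 0 (active); subset selection then amounts to examining the posterior over δ.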

Bayesian techniques A method of training and evaluating neural networks that is based on a stochastic (probabilistic) approach. The basic idea is that weights have a distribution before training (a prior distribution) and another (posterior) distribution after training. Bayesian techniques have been applied successfully to multilayer perceptron networks. [Pg.163]

Sensitivity analysis is about asking how sensitive your model is to perturbations of assumptions in the underlying variables and structure. Models developed under any platform should be subject to some form of sensitivity analysis. Those constructed under a Bayesian framework may be subject to further sensitivity analysis associated with assumptions that may be made in the specification of the prior information. In general, therefore, a sensitivity analysis will involve some form of perturbation of the priors. There are two general scenarios where this may be important. First, the choice of a noninformative prior could lead to an improper posterior distribution that may be more informative than desired (see Gelman (18) for some discussion on this). Second, the use of informative priors for PK/PD analysis raises the issue of introduction of bias to the posterior parameter estimates for a specified subject group; that is, the prior information may not have been exchangeable with the current data. [Pg.152]
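A minimal sketch of such a prior perturbation in a conjugate normal-mean setting, comparing the posterior under a near-noninformative prior and an informative (possibly biasing) one; the summary statistics and both priors are invented for illustration only.

```python
# Conjugate normal-normal update: with prior N(m0, v0) on the mean and the
# sample mean ybar ~ N(theta, s2 / n), the posterior is N(m1, v1) with
#   v1 = 1 / (1/v0 + n/s2),   m1 = v1 * (m0/v0 + n*ybar/s2).
def posterior(m0: float, v0: float, ybar: float, s2: float, n: int):
    v1 = 1.0 / (1.0 / v0 + n / s2)
    m1 = v1 * (m0 / v0 + n * ybar / s2)
    return m1, v1

ybar, s2, n = 2.0, 4.0, 12                 # hypothetical summary statistics
print(posterior(0.0, 100.0, ybar, s2, n))  # near-noninformative prior
print(posterior(5.0, 1.0, ybar, s2, n))    # informative prior: note the shift
```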

A point estimate of -2 LL (also termed the deviance, denoted D(θ)) at its maximum is suggested to make the model fit appear better than it should in reality, and in a Bayesian sense averaging over the deviance values (for all values of the posterior distribution of the parameters) would provide a more appropriate choice, and so the BIC, now an averaged BIC, can be written... [Pg.155]
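For reference, the deviance the excerpt refers to is presumably the standard definition (with y denoting the data, a notational assumption here):

$$ D(\theta) = -2 \log p(y \mid \theta), $$

and "averaging over the deviance values" then corresponds to the posterior expectation $\bar{D} = E_{\theta \mid y}[D(\theta)]$.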


See other pages where Bayesian posterior distribution is mentioned: [Pg.323] [Pg.198] [Pg.1616] [Pg.345] [Pg.235] [Pg.320] [Pg.341] [Pg.413] [Pg.97] [Pg.133] [Pg.137] [Pg.177] [Pg.122] [Pg.265] [Pg.20] [Pg.510] [Pg.47] [Pg.191] [Pg.138] [Pg.140]