
MCMC

In the next subsection, I describe how the basic elements of Bayesian analysis are formulated mathematically. I also describe the methods for deriving posterior distributions from the model, either in closed form for conjugate prior-likelihood pairs or by simulation using Markov chain Monte Carlo (MCMC) methods. The utility of Bayesian methods has expanded greatly in recent years because of the development of MCMC methods and fast computers. I also describe the basics of hierarchical and mixture models. [Pg.322]

If draws can be made from the posterior distribution for each component conditional on values for the others, i.e., from p(θ_i | y, θ_{-i}), then this conditional posterior distribution can be used as the proposal distribution. In this case, the acceptance probability in Eq. (23) is always 1, and all draws are accepted. This is referred to as Gibbs sampling and is the most common form of MCMC used in statistical analysis. [Pg.327]
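To make the mechanics concrete, here is a minimal illustrative Gibbs sampler (not from the excerpted source) for a zero-mean bivariate normal target, where each full conditional p(θ_i | y, θ_{-i}) is itself normal and every draw is accepted; the correlation rho, chain length, and seed are arbitrary choices for the sketch.

    import numpy as np

    def gibbs_bivariate_normal(n_draws, rho=0.8, seed=0):
        """Gibbs sampler for a zero-mean, unit-variance bivariate normal
        with correlation rho; each full conditional is normal, so every
        draw is accepted with probability 1."""
        rng = np.random.default_rng(seed)
        theta = np.zeros(2)
        draws = np.empty((n_draws, 2))
        cond_sd = np.sqrt(1.0 - rho**2)          # sd of each conditional
        for k in range(n_draws):
            # theta_1 | theta_2 ~ N(rho * theta_2, 1 - rho^2)
            theta[0] = rng.normal(rho * theta[1], cond_sd)
            # theta_2 | theta_1 ~ N(rho * theta_1, 1 - rho^2)
            theta[1] = rng.normal(rho * theta[0], cond_sd)
            draws[k] = theta
        return draws

    samples = gibbs_bivariate_normal(5000)
    print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])

The sample means should be near 0 and the sample correlation near rho, up to Monte Carlo error.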

Mark-Houwink-Sakurada relationship, 1:309, 310t; 20:439-440. Markov chain, 26:1006, 1018, 1024, 1025; HSTA algorithm and, 26:1030-1031. Markov chain Monte Carlo (MCMC) sampling method, 26:1017-1018. Markovnikov addition, in silicone network preparation, 22:563... [Pg.551]

The approximation is applicable for n₁ and n₂ greater than 5, and the comparisons we have conducted against inference based on MCMC methods have shown that this approximation works well for samples of size 6 or more. [Pg.132]

For a particular data set, subsets with high posterior probability must be identified. This can be a computational challenge: with h possible effects, there are 2^h different subsets. In order to identify promising subsets, MCMC methods for simulating from the posterior distribution on subsets may be used as a stochastic search, as in the sketch below. Section 3 outlines efficient techniques for exploring the subset space using MCMC methods. [Pg.241]
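As an illustrative sketch of such a stochastic search (the flip-one-coordinate proposal and the name log_post are assumptions for the sketch, not the chapter's algorithm), a Metropolis walk over inclusion vectors θ in {0,1}^h can be written as:

    import numpy as np

    def subset_search(log_post, h, n_iter, seed=0):
        """Metropolis search over the 2^h inclusion vectors theta.
        log_post(theta) returns the log unnormalized posterior g(theta);
        the proposal flips one randomly chosen coordinate."""
        rng = np.random.default_rng(seed)
        theta = np.zeros(h, dtype=int)
        current = log_post(theta)
        visited = {}                              # visit counts per subset
        for _ in range(n_iter):
            proposal = theta.copy()
            j = rng.integers(h)
            proposal[j] ^= 1                      # toggle inclusion of effect j
            cand = log_post(proposal)
            if np.log(rng.uniform()) < cand - current:
                theta, current = proposal, cand   # accept the move
            key = tuple(theta)
            visited[key] = visited.get(key, 0) + 1
        return visited

Subsets with high visit counts are the promising candidates to evaluate analytically.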

The focus of this section is on MCMC methods for sampling from p(θ | Y), the posterior distribution of θ. In some cases, for example, in conjunction with the nonconjugate prior distribution (5), MCMC is also used to sample from p(θ, β, σ | Y). [Pg.247]

Estimation of Posterior Probability on θ Using MCMC Output

Here, the indicator function I(·) is 1 whenever its argument is true, and zero otherwise. A number of problems arise with the relative frequency estimate (15). First, it is prone to variability in the MCMC sample. Second, any model that is not in S has an estimated posterior probability of zero. Third, if the starting value θ⁰ has very low posterior probability, it may take the Markov chain a large number of steps to move to θ values that have high posterior probability. These initial burn-in values of θ would have larger estimates of the posterior probability (15) than their actual posterior probability; that is, the estimate p̂(θ | Y) will be biased because of the burn-in. For example, with the simulated data described in Section 4.2, the first 100 draws of θ have almost zero posterior probability. In a run of K = 1000... [Pg.248]
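To make the burn-in issue concrete, the relative-frequency estimate (15) is simply the fraction of post-burn-in iterations spent in each subset; a minimal sketch, assuming chain is a sequence of sampled subsets and n_burn is chosen by inspecting the run:

    from collections import Counter

    def relative_frequency(chain, n_burn=0):
        """Estimate p(theta | Y) by the fraction of MCMC iterations,
        after discarding n_burn burn-in draws, spent at each subset."""
        kept = [tuple(theta) for theta in chain[n_burn:]]
        counts = Counter(kept)
        return {theta: c / len(kept) for theta, c in counts.items()}

Any subset never visited after burn-in receives an implicit estimate of zero, which is the second problem noted above.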

The use of (17) in the marginal posterior probability (18) corresponds to the posterior probability that A is active, conditional on the models visited by the MCMC run. [Pg.249]

In some situations, such as when estimates of the posterior probability of a specific subset θ or of groups of subsets are required, a second method of estimating C can be used. Notice that the estimate Ĉ in (17) will be biased upwards, because Ĉ = C in (17) only if U = V, the set of all possible values of θ. If U ⊂ V, then C < Ĉ. A better estimate of C can be obtained by a capture-recapture approach, as discussed by George and McCulloch (1997). Let the initial capture set A be a collection of θ values identified before a run of the MCMC search; that is, each element of the set A is a particular subset. The recapture estimate of the probability of A is the relative frequency given by (15). The analytic expression (16) for the posterior probability of A is also available, and contains the unknown C. Let g(A) = Σ_{θ∈A} g(θ), so that p(A | Y) = C g(A). Then, by equating the two estimates, we have... [Pg.249]
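Equating the recapture (relative-frequency) estimate of p(A | Y) with the analytic form C g(A) gives the estimate Ĉ = p̂(A) / g(A). A minimal sketch, assuming g evaluates the unnormalized posterior from (16) and freq is a dictionary of relative frequencies as in the sketch above:

    def recapture_C(capture_set, freq, g):
        """Capture-recapture estimate of the normalizing constant C:
        relative frequency of visits to the capture set A, divided by
        g(A), the sum of unnormalized posteriors over A."""
        p_hat_A = sum(freq.get(theta, 0.0) for theta in capture_set)
        g_A = sum(g(theta) for theta in capture_set)
        return p_hat_A / g_A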

Having seen that the observed frequency distribution of the sampler is useful in the estimation of the normalizing constant C, it is interesting to note that the frequencies are otherwise unused. Analytic expressions for the posterior probability are superior because they eliminate MCMC variability inherent in relative frequency estimates of posterior probabilities. In such a context, the main goal of the sampler is to visit as many different high probability models as possible. Visits to a model after the first add no value, because the model has already been identified for analytic evaluation of p(6 Y). [Pg.250]

In later sections, (17) is used everywhere, except when estimating the posterior probability of all models visited by the MCMC sampler. In this case, (19) is used. [Pg.250]

The choice of prior distributions discussed in Section 2.2 has an impact on both the ease of implementation of the MCMC algorithm and its speed of execution. Specific issues include the number of linear algebra operations and the rate at which stochastic search methods can explore the space of all subsets. [Pg.250]

Computations are also more efficient if the MCMC algorithm can sample directly from the marginal posterior distribution p(θ | Y), rather than from the joint posterior distribution p(θ, β, σ | Y). This efficiency occurs because fewer variables are being sampled. As mentioned at the end of Section 3.1, the marginal posterior distribution p(θ | Y) is available in closed form when conjugate prior distributions on β, as in (4) or (6), are used. [Pg.250]

The main results of the analysis of the glucose data have already been presented in Section 1.1. Details of the hyperparameter choices, robustness calculations, and estimation of the total probability visited by the MCMC sampler are given here. [Pg.262]

A single run of the MCMC sampler was used, with 2500 iterations. The posterior probabilities of the models listed in Table 4 are normalized so that all subsets visited have total probability 1.0 of being active; that is, the estimate Ĉ from (17) is used in conjunction with the analytic expression (16) for the posterior probability on θ. [Pg.262]

Spiegelhalter, D. J., Best, N. G., Gilks, W. R., and Inskip, H. (1995c). Hepatitis: a case study in MCMC methods. In Markov Chain Monte Carlo in Practice (ed. W. R. Gilks, S. Richardson, and D. J. Spiegelhalter), pp. 21-43. Chapman and Hall, New York. [Pg.328]

Hence, ⟨A⟩ can be rewritten as a Boltzmann average over all states j of the quantity ... As before, we can use normal (e.g., Metropolis) MCMC... [Pg.131]
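As a hedged illustration of the standard Metropolis scheme referred to here (the harmonic energy function, proposal width, and inverse temperature beta are placeholder choices, not from the excerpt):

    import numpy as np

    def metropolis_boltzmann(energy, x0, n_steps, beta=1.0, step=0.5, seed=0):
        """Random-walk Metropolis MCMC targeting the Boltzmann weight
        exp(-beta * energy(x)); averaging A(x) over the returned states
        estimates the Boltzmann average <A>."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        e = energy(x)
        states = []
        for _ in range(n_steps):
            x_new = x + rng.normal(0.0, step, size=x.shape)
            e_new = energy(x_new)
            # accept with probability min(1, exp(-beta * (E_new - E)))
            if np.log(rng.uniform()) < -beta * (e_new - e):
                x, e = x_new, e_new
            states.append(x.copy())
        return np.array(states)

    # Boltzmann average of A(x) = x^2 in a harmonic well E(x) = x^2 / 2
    states = metropolis_boltzmann(lambda x: 0.5 * (x @ x), np.zeros(1), 20000)
    print((states**2).mean())   # approx. 1/beta for this well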


See other pages where MCMC is mentioned: [Pg.132]    [Pg.80]    [Pg.126]    [Pg.226]    [Pg.137]    [Pg.140]    [Pg.93]    [Pg.98]    [Pg.112]    [Pg.113]    [Pg.12]    [Pg.571]    [Pg.202]    [Pg.240]    [Pg.246]    [Pg.247]    [Pg.247]    [Pg.248]    [Pg.248]    [Pg.249]    [Pg.262]    [Pg.264]    [Pg.2951]    [Pg.129]    [Pg.133]    [Pg.133]    [Pg.134]   

