Big Chemical Encyclopedia


Bayesian Inference from Posterior Random Sample


When we find a random sample from the posterior using one of the methods developed in Chapter 2, we can use this sample as the basis for our inferences. For each quantity we would have calculated from the numerical posterior, we calculate its sample equivalent. [Pg.54]

If we want to use the posterior mean as our point estimator, we calculate the mean of the posterior sample instead of calculating the mean from the numerical posterior density. Similarly, if we want to use the posterior median as our point estimator, we find the median of the posterior sample instead of calculating the median from the numerical CDF. We find the value that has 50% of the sample above and 50% below it. [Pg.54]
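A minimal sketch of this idea, using a hypothetical posterior sample (here faked with a gamma-shaped density for illustration; in practice the draws would come from one of the sampling methods of Chapter 2):

```python
import random
import statistics

random.seed(0)
# Hypothetical posterior sample: pretend we drew 10,000 values of a
# parameter theta from its posterior (a gamma(5, 0.4) shape, for illustration).
posterior_sample = [random.gammavariate(5.0, 0.4) for _ in range(10_000)]

# Point estimates read straight off the sample:
post_mean = statistics.mean(posterior_sample)      # sample analogue of the posterior mean
post_median = statistics.median(posterior_sample)  # 50% of the sample above, 50% below

print(f"posterior mean   ~ {post_mean:.3f}")
print(f"posterior median ~ {post_median:.3f}")
```

For this gamma(5, 0.4) shape the true mean is 2.0, so the sample mean should land close to it, illustrating that the sample statistic stands in for the numerical-posterior calculation.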

The random sample from the posterior can be used to calculate an equal-tail credible interval. If we had the exact posterior, we would find the values θl and θu such that P(θ < θl) = α/2 and P(θ > θu) = α/2, respectively. Since we are using the random sample, we instead take the α/2 and 1 − α/2 quantiles of the posterior sample. [Pg.55]
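A sketch of the sample-quantile version, assuming a hypothetical posterior sample (here faked as normal(10, 2) draws so the answer can be checked against the exact interval):

```python
import random

random.seed(1)
# Hypothetical posterior sample; exact 95% interval would be 10 +/- 1.96 * 2.
posterior_sample = [random.gauss(10.0, 2.0) for _ in range(10_000)]

def equal_tail_interval(sample, level=0.95):
    """Equal-tail credible interval from a posterior sample via sample quantiles."""
    s = sorted(sample)
    alpha = 1.0 - level
    lo = s[int(alpha / 2 * len(s))]           # alpha/2 quantile
    hi = s[int((1 - alpha / 2) * len(s)) - 1]  # 1 - alpha/2 quantile
    return lo, hi

lo, hi = equal_tail_interval(posterior_sample)
print(f"95% credible interval ~ ({lo:.2f}, {hi:.2f})")
```

The endpoints should come out near the exact (6.08, 13.92), with the discrepancy shrinking as the posterior sample grows.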

Example 4 (continued) We calculate the equal-tail 95% credible interval using the random sample from the unscaled posterior having shape given by [Pg.56]


In this section we introduce the main ideas of computational Bayesian statistics. We show how basing our inferences on a random sample from the posterior distribution has overcome the main impediments to using Bayesian methods. The first impediment is that the exact posterior cannot be found analytically except for a few special cases. The second is that finding the numerical posterior requires a difficult numerical integration, particularly when there is a large number of parameters. [Pg.19]

The overall goal of Bayesian inference is knowing the posterior. The fundamental idea behind nearly all statistical methods is that as the sample size increases, the distribution of a random sample from a population approaches the distribution of the population. Thus, the distribution of the random sample from the posterior will approach the true posterior distribution. Other inferences such as point and interval estimates of the parameters can be constructed from the posterior sample. For example, if we had a random sample from the posterior, any parameter could be estimated by the corresponding statistic calculated from that random sample. We could achieve any required level of accuracy for our estimates by making sure our random sample from the posterior is large enough. Existing exploratory data analysis (EDA) techniques can be used on the sample from the posterior to explore the relationships between parameters in the posterior. [Pg.20]
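The claim that any required accuracy can be reached by enlarging the posterior sample can be sketched directly. Here we treat a normal(5, 1) density as a stand-in "posterior" (an assumption for illustration, so the true posterior mean of 5.0 is known) and watch the Monte Carlo error shrink:

```python
import random

random.seed(2)
# Stand-in "posterior" is normal(5, 1), so the true posterior mean is known
# and the Monte Carlo error of the sample mean can be measured directly.
true_mean = 5.0
errors = []
for n in (100, 10_000, 1_000_000):
    sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
    errors.append(abs(sum(sample) / n - true_mean))
    print(f"n = {n:>9}: |error| = {errors[-1]:.5f}")
```

The error of the sample mean scales like 1/sqrt(n), so each hundredfold increase in the posterior sample size buys roughly one more decimal digit of accuracy.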

Chapter 3 compares Bayesian inferences drawn from a numerical posterior with Bayesian inferences from the posterior random sample. [Pg.21]


We will use the two-parameter case to show what happens when there are multiple parameters. The inference universe has at least four dimensions, so we cannot graph the surface. The likelihood function is still found by cutting through the surface with a hyperplane parallel to the parameter space passing through the observed values. The likelihood function will be defined on the two parameter dimensions, as the observations are fixed at the observed values and do not vary. We show the bivariate likelihood function in 3D perspective in Figure 1.8. In this example, we have the likelihood function where θ1 is the mean and θ2 is the variance for a random sample from a normal distribution. We will also use this same surface to illustrate the Bayesian posterior, since it would be the joint posterior if we use independent flat priors for the two parameters. [Pg.12]
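A minimal sketch of this bivariate surface, using hypothetical data: we evaluate the normal log-likelihood over a grid of (mean, variance) pairs. Under independent flat priors, this same surface is the shape of the joint posterior up to a constant:

```python
import math

# Hypothetical data: a small sample assumed drawn from a normal distribution.
data = [4.2, 5.1, 4.8, 5.5, 4.9, 5.2]

def log_likelihood(mu, var, xs):
    # Joint normal log-likelihood as a function of the two parameters;
    # the observations xs are held fixed at their observed values.
    n = len(xs)
    ss = sum((x - mu) ** 2 for x in xs)
    return -0.5 * n * math.log(2 * math.pi * var) - ss / (2 * var)

# Evaluate over a grid of (mu, var); the peak sits at the sample mean
# and the (biased, divide-by-n) sample variance.
grid = [(4.0 + 0.01 * i, 0.05 + 0.01 * j)
        for i in range(201) for j in range(100)]
best = max(grid, key=lambda p: log_likelihood(p[0], p[1], data))
print(best)
```

For these data the surface peaks at mu = 4.95 and var = 0.16 on this grid, matching the sample mean and the divide-by-n sample variance (0.1625) to grid resolution.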

The computational approach to Bayesian statistics allows the posterior to be approached from a completely different direction. Instead of using the computer to calculate the posterior numerically, we use the computer to draw a Monte Carlo sample from the posterior. Fortunately, all we need to know is the shape of the posterior density, which is given by the prior times the likelihood. We do not need to know the scale factor necessary to make it the exact posterior density. These methods replace the very difficult numerical integration with the much easier process of drawing random samples. A Monte Carlo random sample from the posterior will approximate the true posterior when the sample size is large enough. We will base our inferences on the Monte Carlo random sample from the posterior, not from the numerically calculated posterior. Sometimes this approach to Bayesian inference is the only feasible method, particularly when the parameter space is high dimensional. [Pg.26]
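The point that only the shape (prior times likelihood) is needed can be sketched with acceptance-rejection sampling, one simple Monte Carlo method. The target here is a hypothetical unscaled posterior on [0, 1] with a beta(3, 2) shape, chosen so the result can be checked against the known mean:

```python
import random

random.seed(3)

# Unscaled posterior shape on [0, 1]: g(theta) = theta^2 * (1 - theta),
# a beta(3, 2) shape known only up to its normalizing constant.
def shape(theta):
    return theta ** 2 * (1 - theta)

# Acceptance-rejection with a uniform(0, 1) proposal. M must bound the
# shape's maximum (true max is 4/27 ~ 0.148 at theta = 2/3, so M = 0.15 works).
M = 0.15
def draw(n):
    out = []
    while len(out) < n:
        theta = random.random()
        if random.random() * M <= shape(theta):  # accept with prob shape/M
            out.append(theta)
    return out

posterior_sample = draw(20_000)
sample_mean = sum(posterior_sample) / len(posterior_sample)
print(f"sample mean ~ {sample_mean:.3f}")
```

No normalizing constant was ever computed, yet the draws have the right distribution: the beta(3, 2) mean is 3/5 = 0.6, and the sample mean lands close to it.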

When we have a random sample from the posterior instead of the exact numerical posterior, we do the Bayesian inferences using the analogous procedure on the posterior sample. [Pg.58]

A more realistic model when we have a random sample of observations from a normal(μ, σ²) distribution is that both parameters are unknown. The parameter μ is usually the only parameter of interest, and σ² is a nuisance parameter. We want to do inference on μ while taking into account the additional uncertainty caused by the unknown value of σ². The Bayesian approach allows us to do this by marginalizing out the nuisance parameter from the joint posterior of the parameters given the data. The joint posterior will be proportional to the joint prior times the joint likelihood. [Pg.80]
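When working with a posterior sample rather than the exact joint posterior, marginalizing out the nuisance parameter is the easy part: keeping only the μ coordinate of each joint draw is the sample analogue of integrating σ² out. A sketch, with the joint draws faked for illustration (in practice they would come from a sampler such as Gibbs sampling):

```python
import random
import statistics

random.seed(4)

# Hypothetical joint posterior sample of (mu, sigma2) pairs, faked here;
# a real sampler would produce correlated draws from the joint posterior.
joint_sample = [(random.gauss(5.0, 0.3), random.gammavariate(4.0, 0.25))
                for _ in range(5_000)]

# Marginalizing sigma2 out of the sample: just drop it from each draw.
mu_draws = [mu for mu, sigma2 in joint_sample]

# Inference on mu now proceeds from the marginal sample alone, with the
# uncertainty about sigma2 already reflected in the spread of the draws.
print(f"marginal posterior mean of mu ~ {statistics.mean(mu_draws):.3f}")
```

The marginal sample of μ automatically carries the extra spread induced by not knowing σ², which is exactly what marginalization is meant to capture.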





© 2024 chempedia.info