
Discrimination of Rival Models by Posterior Probability

For model Mj, let p(Y | Mj, σ) denote the predictive probability density of prospective data Y, predicted in the manner of Eq. (6.1-10), with σ known. Bayes' theorem then gives the posterior probability of model Mj, conditional on the given Y and σ, as [Pg.112]
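The Bayes relation above — posterior probability proportional to prior probability times predictive density — can be sketched numerically. The model count, priors, and density values below are illustrative, not taken from the text:

```python
def posterior_model_probs(priors, likelihoods):
    """Posterior probability of each rival model via Bayes' theorem:
    p(Mj | Y, sigma) is proportional to p(Mj) * p(Y | Mj, sigma),
    normalized so the posteriors over the candidate set sum to one."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Two rival models with equal prior probability 1/2 each; the
# predictive densities p(Y | Mj, sigma) are illustrative numbers.
post = posterior_model_probs([0.5, 0.5], [0.20, 0.10])
```

With equal priors the posterior odds reduce to the ratio of predictive densities, so the first model here receives posterior probability 2/3.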

Introducing a vector θj of estimable parameters for model Mj, and integrating Eq. (6.6-22) over the permitted range of θj, gives the posterior probability of model Mj in the form [Pg.112]

We treat the prior density p(θj | Mj) as uniform over the region of appreciable likelihood. This approximation, consistent with Eq. (6.1-13), reduces Eq. (6.6-24) to [Pg.112]
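The effect of a uniform prior over the region of appreciable likelihood can be checked numerically in one dimension: when the likelihood is approximately Gaussian about its peak, its integral against a flat prior reduces to the peak value times a Gaussian volume factor. The function name and numerical values below are illustrative assumptions, not part of the text:

```python
import math

def flat_prior_marginal(L_max, s, prior_width):
    """Approximate the integral of L(theta) * (1/prior_width) over a
    uniform prior of width prior_width, treating L as Gaussian about
    its peak with curvature scale s: L_max * sqrt(2*pi) * s / prior_width."""
    return L_max * math.sqrt(2 * math.pi) * s / prior_width

# Compare with direct numerical integration of a Gaussian-shaped
# likelihood L(theta) = L_max * exp(-(theta - m)**2 / (2*s**2)).
L_max, m, s, width = 3.0, 1.0, 0.2, 10.0   # illustrative values
n = 200000
step = width / n
num = sum(L_max * math.exp(-((m - width / 2 + i * step) - m) ** 2 / (2 * s ** 2))
          * (1.0 / width) * step for i in range(n))
approx = flat_prior_marginal(L_max, s, width)
```

The agreement is close whenever the likelihood is well contained inside the prior range, which is the situation the uniform-prior approximation assumes.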

The posterior probability p(Mj | Y, σ), being based on data, has a sampling distribution over conceptual replications of the experimental design. We require this distribution to have expectation p(Mj) whenever model Mj is true. With this sensible requirement, Eq. (6.6-25) yields [Pg.113]

The model with the largest posterior probability is normally preferred, but before acceptance it should be tested for goodness of fit, as in Section 6.5. GREGPLUS automatically performs this test and summarizes the results for all the candidate models if the variance σ² has been specified. [Pg.113]
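In practice the posterior probabilities of rival models are computed from log-likelihood or log-posterior values, which can be very negative for large data sets; normalizing in log space avoids floating-point underflow. This is a generic numerical sketch, not the GREGPLUS implementation, and the log-posterior values are illustrative:

```python
import math

def normalize_log_posteriors(log_posts):
    """Convert unnormalized log posterior values for rival models into
    probabilities, subtracting the maximum first so exp() cannot underflow."""
    m = max(log_posts)
    weights = [math.exp(lp - m) for lp in log_posts]
    total = sum(weights)
    return [w / total for w in weights]

# Three rival models with very negative log posteriors (illustrative):
probs = normalize_log_posteriors([-1050.0, -1052.3, -1060.0])
best = max(range(len(probs)), key=probs.__getitem__)  # index of preferred model
```

Naively exponentiating values near -1050 would underflow to zero for every model; shifting by the maximum preserves the ratios that determine the posterior probabilities.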



© 2024 chempedia.info