
Bayesian inference

Two very different approaches to inferential statistics exist: the classical or frequentist approach and the Bayesian approach. Each approach is used to draw conclusions (or inferences) regarding the magnitude of some unknown quantity, such as the intercept and slope of a dose-response model. The key difference between classical... [Pg.132]

In this section, the sample information was the raw toxicity test data for each species and test. Prior information was not available outside the data set, so vague prior information was used as a basis for implementing the procedures. [Pg.133]

In science and engineering problems, there are various uncertain parameters that need to be determined for modeling and other purposes. Bayes' theorem offers the possibility of inferring uncertain models/systems from their measurements. There are two levels of system identification. The first is parametric identification, in which a class of mathematical models for a particular physical phenomenon or system is given with unknown parameters to be identified. The second level deals with the selection of a suitable class of mathematical models for parametric identification. This is significantly more difficult but more important than the first level, since parametric identification results will by no means be meaningful if one fails to obtain a suitable class of models. However, due to the difficulty of this problem, it is usually determined by the user's judgement. Chapters 2-5 focus on parametric identification and Chapter 6 addresses the problem of model class selection. [Pg.20]

Use D to denote the measured data of a system and consider it as the vector f in Equation (2.18). Then, the updated/posterior probability density function (PDF) of the parameters θ is

$$ p(\theta \mid D, C) = \frac{p(D \mid \theta, C)\, p(\theta \mid C)}{p(D \mid C)} $$

where C denotes the adopted class of probabilistic and physical models and p(D | C) is the normalizing constant. [Pg.21]

The likelihood function p(D | θ, C) represents the contribution of the measured data in establishing the posterior distribution. It reflects how likely it is that the measurements would be observed from the model with a particular set of parameters. The likelihood function can be constructed given the class of probabilistic and physical models of the problem, and it is the key to Bayesian updating. If a large amount of measured data is available, the likelihood function will be the dominant factor in the Bayesian inference. [Pg.21]
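As a concrete illustration (a minimal sketch, not taken from the source; the i.i.d. Gaussian noise model, the `model` callable, and the known noise level `sigma` are all assumptions of the sketch), a log-likelihood of this kind could be coded as:

```python
import numpy as np

def log_likelihood(theta, data, model, sigma=1.0):
    """Log of p(D | theta, C), assuming i.i.d. Gaussian measurement
    noise with known standard deviation sigma.

    theta : parameter vector of the physical model
    data  : measured data D (1-D array)
    model : callable mapping theta to the predicted measurements
    """
    residuals = data - model(theta)
    n = data.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
            - 0.5 * np.sum(residuals**2) / sigma**2)
```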

The prior distribution p(θ | C) expresses the prior information about the parameters, and it is based on previous knowledge or the user's judgement. In some applications, the prior distribution is treated as a constant and is absorbed into the normalizing constant, but this type of prior distribution does not satisfy the property of a PDF that its integral over the parameter space is unity. In general, a prior distribution that does not satisfy this property is referred to as an improper prior. Using a constant improper prior distribution yields the maximum likelihood solution. [Pg.21]
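The connection between a constant prior and maximum likelihood can be checked numerically; in this grid-based sketch (illustrative data and model, not from the source) the posterior mode coincides with the ML estimate:

```python
import numpy as np

# Minimal sketch: infer the mean theta of Gaussian data on a grid.
# With a constant (improper) prior the posterior is proportional to
# the likelihood, so the posterior mode is the ML estimate.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.5, scale=1.0, size=50)

theta_grid = np.linspace(0.0, 5.0, 1001)
log_like = np.array([-0.5 * np.sum((data - t) ** 2) for t in theta_grid])

log_post = log_like + 0.0                     # constant prior adds nothing
post = np.exp(log_post - log_post.max())      # unnormalized posterior
post /= post.sum() * (theta_grid[1] - theta_grid[0])  # normalize on grid

print("posterior mode:", theta_grid[np.argmax(post)])
print("maximum likelihood estimate:", data.mean())
```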

Another popular choice is the class of conjugate prior distributions. A prior distribution is said to be conjugate to a class of likelihood functions p(D | θ, C) if the resulting posterior distributions p(θ | D, C) are in the same family as the prior distribution [214]. For example, say the likelihood function has the form of an exponential distribution (of x)... [Pg.21]
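A standard instance of this (a textbook result, stated here for completeness rather than quoted from the source): the gamma family is conjugate to the exponential likelihood. With i.i.d. data x_1, ..., x_n, rate parameter λ, and a Gamma(α, β) prior on λ,

$$ p(\lambda \mid x_1, \ldots, x_n) \;\propto\; \underbrace{\lambda^{n} e^{-\lambda \sum_i x_i}}_{\text{likelihood}} \times \underbrace{\lambda^{\alpha-1} e^{-\beta \lambda}}_{\text{Gamma}(\alpha,\beta)\ \text{prior}} \;=\; \lambda^{\alpha+n-1} e^{-(\beta + \sum_i x_i)\lambda}, $$

which is again a gamma kernel, Gamma(α + n, β + Σᵢ xᵢ), so the posterior remains in the prior's family.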


Probability in Bayesian inference is interpreted as the degree of belief in the truth of a statement. The belief must be predicated on whatever knowledge of the system we possess. That is, probability is always conditional, p(X | I), where X is a hypothesis, a statement, the result of an experiment, etc., and I is any information we have on the system. Bayesian probability statements are constructed to be consistent with common sense. This can often be expressed in terms of a fair bet. As an example, I might say that the probability that it will rain tomorrow is 75%. This can be expressed as a bet: I will bet $3 that it will rain tomorrow, if you give me $4 if it does and nothing if it does not. (If I bet $3 on 4 such days, I have spent $12; I expect to win back $4 on 3 of those days, or $12.) [Pg.314]

There are two central rules of probability theory on which Bayesian inference is based [30] ... [Pg.315]
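In the standard Bayesian literature these two rules are the sum rule and the product rule, in their usual form

$$ p(X \mid I) + p(\bar{X} \mid I) = 1, \qquad p(X, Y \mid I) = p(X \mid Y, I)\, p(Y \mid I). $$

Bayes' rule, used below, follows from writing the product rule both ways: p(H, D | I) = p(H | D, I) p(D | I) = p(D | H, I) p(H | I).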

For Bayesian inference, we are seeking the probability of a hypothesis H given the data D. This probability is denoted p(H | D). It is very likely that we will want to compare different hypotheses, so we may want to compare p(H1 | D) with p(H2 | D). Because it is difficult to write down an expression for p(H | D) directly, we use Bayes' rule to invert the probability p(D | H) and obtain an expression for p(H | D):

$$ p(H \mid D) = \frac{p(D \mid H)\, p(H)}{p(D)} $$

[Pg.315]
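A small numerical sketch of such a comparison (the likelihoods and priors below are made-up illustrative numbers, not from the source):

```python
# Compare two hypotheses H1, H2 given data D via Bayes' rule.
p_D_given_H1, p_D_given_H2 = 0.20, 0.05   # p(D|H1), p(D|H2)
p_H1, p_H2 = 0.5, 0.5                     # prior probabilities

# p(D) by summing over the (assumed exhaustive) hypotheses
p_D = p_D_given_H1 * p_H1 + p_D_given_H2 * p_H2

p_H1_given_D = p_D_given_H1 * p_H1 / p_D  # Bayes' rule
p_H2_given_D = p_D_given_H2 * p_H2 / p_D

print(p_H1_given_D, p_H2_given_D)         # 0.8, 0.2
```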

Another aspect in which Bayesian methods perform better than frequentist methods is in the treatment of nuisance parameters. Quite often there will be more than one parameter in the model, but only one of the parameters is of interest. The other parameter is a nuisance parameter. If the parameter of interest is θ and the nuisance parameter is φ, then Bayesian inference on θ alone can be achieved by integrating the posterior distribution over φ. The marginal probability of θ is therefore

$$ p(\theta \mid D) = \int p(\theta, \phi \mid D)\, d\phi $$

[Pg.322]
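Numerically, this integration can be done on a grid; the following sketch uses an arbitrary illustrative joint posterior (an assumption of the sketch, not the source's model):

```python
import numpy as np

# Marginalize the nuisance parameter phi out of p(theta, phi | D)
# evaluated on a grid.
theta = np.linspace(-3.0, 3.0, 201)
phi = np.linspace(0.1, 5.0, 200)
dphi = phi[1] - phi[0]
T, P = np.meshgrid(theta, phi, indexing="ij")

# Stand-in joint posterior density (any 2-D density works here)
log_joint = -0.5 * (T / P) ** 2 - np.log(P) - 0.5 * (P - 1.0) ** 2
joint = np.exp(log_joint - log_joint.max())

# p(theta | D) = integral of p(theta, phi | D) over phi
marginal = joint.sum(axis=1) * dphi
marginal /= marginal.sum() * (theta[1] - theta[0])  # normalize
```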

Unfortunately, some authors describing their work as Bayesian inference or Bayesian statistics have not, in fact, used Bayesian statistics; rather, they used Bayes' rule to calculate various probabilities of one observed variable conditional upon another. Their work turns out to comprise derivations of informative prior distributions, usually of the form p(θ1, θ2, ..., θn | ...), which is interpreted as the posterior distribution... [Pg.338]

TJ Loredo. From Laplace to Supernova SN 1987A: Bayesian Inference in Astrophysics. In: PE Fougere, ed. Dordrecht, The Netherlands: Kluwer, 1990, pp 81-142. [Pg.345]

TJ Loredo. The Promise of Bayesian Inference for Astrophysics. In: ED Feigelson, GJ Babu, eds. New York: Springer-Verlag, 1992, pp 275-297. [Pg.345]

A Zellner. An Introduction to Bayesian Inference in Econometrics. New York: Wiley, 1971.
J Zhu, JS Liu, CE Lawrence. Bayesian adaptive sequence alignment algorithms. Bioinformatics 14:25-39, 1998. [Pg.345]

JS Liu, CE Lawrence. Bayesian inference on biopolymer models. Bioinformatics 15:38-52, 1999. [Pg.347]

These considerations raise a question: how can we determine the optimal value of n and the coefficients i < n in (2.54) and (2.56)? Clearly, if the expansion is truncated too early, some terms that contribute importantly to P0(ΔU) will be lost. On the other hand, terms above some threshold carry no information and, instead, only add statistical noise to the probability distribution. One solution to this problem is to use physical intuition [40]. Perhaps a better approach is one based on the maximum likelihood (ML) method, in which we determine the maximum number of terms supported by the provided information. For the expansion in (2.54), calculating the number of Gaussian functions, their mean values, and variances using ML is a standard problem solved in many textbooks on Bayesian inference [43]. For the expansion in (2.56), the ML solution for n and the coefficients also exists. Just as in the case of the multistate Gaussian model, this approach appears to improve the free energy estimates considerably when P0(ΔU) is a broad function. [Pg.65]
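The source does not show the selection procedure itself; one common concrete realization of the idea (a sketch under that assumption, using scikit-learn and the likelihood-based BIC criterion rather than the authors' own method) fits Gaussian mixtures of increasing size and keeps the one the data support:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Choose the number of Gaussian terms supported by the data using a
# likelihood-based criterion (BIC). The synthetic samples stand in
# for observed values of Delta U.
rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(-2.0, 0.5, 500),
                          rng.normal(1.0, 1.0, 500)]).reshape(-1, 1)

best_n, best_bic = None, np.inf
for n in range(1, 6):
    gmm = GaussianMixture(n_components=n, random_state=0).fit(samples)
    bic = gmm.bic(samples)
    if bic < best_bic:
        best_n, best_bic = n, bic

print("number of terms supported by the data:", best_n)
```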

Keeping the lesson of the above example in mind, we will explore three different dynamical possibilities below: isolated evolution, where the system evolves without any coupling to the external world; unconditioned open evolution, where the system evolves coupled to an external environment but where no information regarding the system is extracted from the environment; and conditioned open evolution, where such information is extracted. In the third case, the evolution of the physical state is driven by the system evolution, the coupling to the external world, and the fact that observational information regarding the state has been obtained. This last aspect - system evolution conditioned on the measurement results via Bayesian inference - leads to an intrinsically nonlinear evolution for the system state. The conditioned evolution provides, in principle, the most realistic possible description of an experiment. To the extent that quantum and classical mechanics are eventually just methodological tools to explain and predict the results of experiments, this is the proper context in which to compare them. [Pg.54]


Box, G.E.P. and Tiao, G.C. (1973). Bayesian Inference in Statistical Analysis. Addison-Wesley, Reading, MA. [Pg.965]

Qian SS, Stow CA, Borsuk ME. 2003. On Monte Carlo methods for Bayesian inference. Ecol Model 159:269-277. [Pg.68]

Box GEP, Tiao GC. 1973. Bayesian inference in statistical analysis. New York: Wiley. [Pg.86]

Loredo TJ. 1990. From Laplace to Supernova SN 1987A: Bayesian inference in astrophysics. In: Fougere PE, editor. Maximum entropy and Bayesian methods. Dordrecht (NL): Kluwer. [Pg.86]

Fig. 5.2. Phylogeny of monopisthocotylean Monogenea based on SSU rDNA. The tree topology is from a Bayesian analysis with nodal support indicated, from top to bottom, for maximum likelihood (bootstrap %, n = 100), maximum parsimony (bootstrap %, n = 1000) and Bayesian inference (posterior probabilities). Figure from Matejusova et al. (2003).
[Pope and Rayner, 1994] Pope, K. J. and Rayner, P. J. W. (1994). Non-linear system identification using Bayesian inference. Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, pages 457-460. [Pg.558]

Expand modeling approaches and case examples in which nonsteady-state biomonitoring data are simulated to explore the exposure conditions responsible for biomonitoring results; this may provide exposure estimates that can be used in risk assessment (for example, Bayesian inference techniques and population behavior-exposure models). [Pg.218]

Huelsenbeck JP, Ronquist F (2001) MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics 17:754-755 [Pg.235]

Dellaportas, P. and Smith, A. F. M. (1993). Bayesian inference for generalized linear and proportional hazards models via Gibbs sampling. Applied Statistics, 42, 443-459. [Pg.266]

O'Hagan, A. and Forster, J. J. (2004). Kendall's Advanced Theory of Statistics, Volume 2B: Bayesian Inference, second edition. Arnold, London. [Pg.266]

Lyons M, Yang RSH, Mayeno AN, Reisfeld B. 2008. Computational toxicology of chloroform: reverse dosimetry using Bayesian inference, Markov chain Monte Carlo simulation, and human biomonitoring data. Environ Health Perspect 116:1040-1046. [Pg.251]

