Big Chemical Encyclopedia


The Bayesian view of statistical inference

The least-squares estimates for this model are computed easily from the design matrix X, the vector of measured responses y, and the product Xᵀy. [Pg.381]

Here θ_LS,1 is the sample mean for the wild-type strain, and θ_LS,2 is the value of the sample mean of the mutant strain minus that of the wild-type strain, 120.16 − 113.41 = 6.75. But is it still within the realm of plausibility that the true value of θ₂ is zero? [Pg.381]
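The identity between the least-squares estimates and the group sample means can be checked numerically. The sketch below uses synthetic replicate measurements (the data values are illustrative, not the original dataset) constructed so that the sample means match those quoted in the text, and fits the two-parameter model with NumPy:

```python
import numpy as np

# Synthetic replicates (illustrative only) whose sample means match the
# values quoted in the text: wild-type mean 113.41, mutant mean 120.16.
y_wild = np.array([112.41, 113.41, 114.41])   # mean 113.41
y_mut = np.array([119.16, 120.16, 121.16])    # mean 120.16
y = np.concatenate([y_wild, y_mut])

# Design matrix: column 1 is an intercept (wild-type mean),
# column 2 is an indicator for membership in the mutant group.
X = np.column_stack([np.ones(6), np.r_[0, 0, 0, 1, 1, 1]])

# Least-squares estimate, theta = (X^T X)^{-1} X^T y
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(theta)  # theta[0] ≈ 113.41 (wild-type mean), theta[1] ≈ 6.75 (difference)
```

With this design matrix, θ₁ recovers the wild-type sample mean and θ₂ the mutant-minus-wild-type difference, exactly as the text states.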

In practice, it is insufficient merely to identify the parameter values that minimize the sum of squared errors; we must also consider the accuracy of our estimates. Here we address this topic with Bayesian statistics, which describes how our uncertainty in the parameter values is changed by doing the experiments. Let us consider single-response regression of a model y⁽ᵏ⁾ = f(x⁽ᵏ⁾; θ) + ε⁽ᵏ⁾. [Pg.381]

We have a particular set of N measured responses y ∈ ℝᴺ and wish to estimate the unknown parameter vector θ ∈ ℝᴾ and the statistical properties of the random error ε ∈ ℝᴺ. While the error may not be truly stochastic, we assume that it has the properties of a random variable, since presumably we have no practical way of predicting the error value in any single experiment. [Pg.381]
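The question of whether θ₂ = 0 remains plausible can be made concrete. A minimal sketch, assuming a linear model y = Xθ + ε with i.i.d. Gaussian errors and a noninformative prior: the marginal posterior of each θ_j is then a t distribution with N − P degrees of freedom, centered at the least-squares estimate, with scale s·√([(XᵀX)⁻¹]_jj). The data below are the same synthetic (illustrative) replicates, not the original measurements:

```python
import numpy as np
from scipy import stats

# Synthetic data (illustrative): same two-group design as before.
y = np.array([112.41, 113.41, 114.41, 119.16, 120.16, 121.16])
X = np.column_stack([np.ones(6), np.r_[0, 0, 0, 1, 1, 1]])
N, P = X.shape

# Least-squares point estimate and residual variance estimate s^2.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ theta
s2 = resid @ resid / (N - P)

# With a noninformative prior, the marginal posterior of theta_j is a
# t distribution with N - P degrees of freedom, centered at the LS
# estimate, with scale s * sqrt([(X^T X)^{-1}]_jj).
cov_unscaled = np.linalg.inv(X.T @ X)
se = np.sqrt(s2 * np.diag(cov_unscaled))

# 95% credible interval for theta_2 (mutant minus wild-type difference).
t_crit = stats.t.ppf(0.975, df=N - P)
lo, hi = theta[1] - t_crit * se[1], theta[1] + t_crit * se[1]
print(f"theta_2 = {theta[1]:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

With this synthetic dataset zero falls well outside the 95% credible interval, so θ₂ = 0 would be implausible; with the real measurements the answer depends on the observed scatter.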

As statistics is based upon probability theory, it is helpful to cite again two possible means of defining probabilities. One way to think about probability - the frequentist [Pg.381]


R. A. Fisher's own views on inference are outlined in Fisher (1956). For a modern text covering frequentist approaches see Cox (2006), and for a mathematically elementary but otherwise profound outline of the Bayesian approach see Lindley (2006). Likelihood approaches are covered in Lindsey (1996), Royall (1999), Pawitan (2001), and the classic by Edwards (1992). Classic expositions of the fully subjective Bayesian view of probability are given by Savage (1954) and de Finetti (1974, 1975), and the case for its use in science is made by the philosophers Howson and Urbach (1993). A comparative overview of the various statistical systems is given by Barnett (1999). [Pg.53]

The method of maximum likelihood is the standard estimation procedure in statistical inference. Whether one looks at the inference problem from the point of view of classical repeated-sampling theory or Bayesian theory or straightforward likelihood theory, maximizing the likelihood emerges as the preferred procedure. There really is no dispute about this in regular estimation problems, and phylogenetic inference does seem to be unexceptional from a statistical point of view, even though it took a little while for the initial difficulties in the application of maximum likelihood to be sorted out. This was mainly done by Felsenstein (1968) and Thompson (1974) in their Ph.D. dissertations and subsequent publications. [Pg.186]
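The core idea of maximum likelihood can be illustrated with the simplest regular estimation problem: for an i.i.d. Gaussian sample with known variance, the likelihood-maximizing mean coincides with the sample mean. The sketch below (synthetic data, grid search for transparency) confirms this numerically:

```python
import numpy as np

# Illustrative i.i.d. Gaussian sample; variance fixed at 1 for simplicity.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=1.0, size=200)

def log_likelihood(mu):
    # Gaussian log-likelihood in mu, with additive constants dropped.
    return -0.5 * np.sum((data - mu) ** 2)

# Maximize over a fine grid of candidate values for mu.
grid = np.linspace(3.0, 7.0, 4001)
mu_hat = grid[np.argmax([log_likelihood(m) for m in grid])]

print(mu_hat, data.mean())  # the maximizer coincides with the sample mean
```

In regular problems like this one, the maximum-likelihood estimator agrees with what repeated-sampling, Bayesian, and pure-likelihood reasoning all endorse, which is the point the paragraph above makes.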





© 2024 chempedia.info