Big Chemical Encyclopedia



Model selection Bayesian

Hugh Chipman is Associate Professor and Canada Research Chair in Mathematical Modelling in the Department of Mathematics and Statistics at Acadia University. His interests include the design and analysis of experiments, model selection, Bayesian methods, and data mining. [Pg.339]

Bayesian methods for subset selection offer several advantages over other approaches: the assignment of posterior probabilities to different subsets of active effects provides a way of characterizing uncertainty about effect activity; prior distributions can incorporate principles of effect dependence, such as effect heredity; and the identification of promising models via Bayesian stochastic search techniques is faster than all-subsets searches and more comprehensive than stepwise methods. [Pg.240]
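The idea of placing posterior probabilities on subsets of active effects can be sketched with a deliberately simplified example: here each subset is scored with BIC and the scores are converted to approximate posterior weights. This is only an illustrative stand-in; the stochastic search and effect-heredity priors described above are far more sophisticated, and the data and effect names below are invented.

```python
import itertools
import math

import numpy as np

def subset_posteriors(X, y, names):
    """Score every subset of candidate effects with BIC and normalize
    exp(-BIC/2) into approximate posterior subset probabilities.
    (Illustrative only: real Bayesian stochastic search avoids full
    enumeration and uses structured priors such as effect heredity.)"""
    n = len(y)
    bics = {}
    for k in range(len(names) + 1):
        for subset in itertools.combinations(range(len(names)), k):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = float(np.sum((y - Xs @ beta) ** 2))
            bics[tuple(names[j] for j in subset)] = (
                n * math.log(rss / n) + Xs.shape[1] * math.log(n))
    b_min = min(bics.values())
    w = {s: math.exp(-(b - b_min) / 2.0) for s, b in bics.items()}
    total = sum(w.values())
    return {s: wi / total for s, wi in w.items()}

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=40)  # only effect "A" is active
post = subset_posteriors(X, y, ["A", "B", "C"])
best = max(post, key=post.get)
```

With a strong signal on a single effect, nearly all posterior mass concentrates on subsets containing it, which is exactly the "characterizing uncertainty about effect activity" described above.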

Chipman, H. A., George, E. I., and McCulloch, R. E. (2001). The practical implementation of Bayesian model selection. In Model Selection, Editor P. Lahiri, pages 65-116. Volume 38 of IMS Lecture Notes-Monograph Series, Institute of Mathematical Statistics, Beachwood. [Pg.266]

George, E. I., McCulloch, R. E., and Tsay, R. S. (1995). Two approaches to Bayesian model selection with applications. In Bayesian Analysis in Statistics and Econometrics: Essays in Honor of Arnold Zellner. Editors D. A. Berry, K. A. Chaloner, and J. K. Geweke, pages 339-348. John Wiley and Sons, New York. [Pg.266]

A common form of model selection is to maximize the likelihood that the data arose under the model. For non-Bayesian analysis this is the basis of the likelihood ratio test, where the difference of the two -2LL values (where LL denotes the log-likelihood) for nested models is assumed to be asymptotically chi-squared distributed. A Bayesian approach (see also the Schwarz criterion (36)) is based on computation of the Bayesian information criterion (BIC), which minimizes the Kullback-Leibler (KL) information (37). The KL information relates to the ratio of the distribution of the data given the model and parameters to the underlying true distribution of the data. The similarity of the KL information expression (Eq. (5.24)) and Bayes's formula (Eq. (5.1)) is easily seen. [Pg.154]
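The likelihood ratio test described above can be sketched as follows. The log-likelihood values are hypothetical, and scipy's chi-squared survival function supplies the asymptotic p-value:

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_reduced, ll_full, df):
    """Difference in -2LL between nested models, referred to a chi-squared
    distribution with df = number of extra parameters in the full model."""
    stat = 2.0 * (ll_full - ll_reduced)  # equals (-2LL_reduced) - (-2LL_full)
    p_value = chi2.sf(stat, df)
    return stat, p_value

# Hypothetical fits: the full model adds 2 parameters to the reduced one.
stat, p = likelihood_ratio_test(ll_reduced=-105.0, ll_full=-100.0, df=2)
```

A drop of 10 in -2LL for 2 extra parameters gives a small p-value, so the richer nested model would be retained.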

In contrast to the hypothesis testing style of model selection/discrimination, the posterior predictive check (PPC) assesses the predictive performance of the model. This approach allows the user to reformulate the model selection decision to be based on how well the model performs. It has been described in detail by Gelman et al. (27) and is only briefly discussed here. PPC has been assessed for PK analysis in a non-Bayesian framework by Yano et al. (40), who also provide a detailed assessment of the choice of test statistics. The more commonly used test statistic is a local feature of the data that has some importance for model predictions; for example, the maximum or minimum concentration might be important for side effects or therapeutic success (see Duffull et al. (6)) and hence constitutes a feature of the data that the model would do well to describe accurately. The PPC can be defined along the lines that posterior refers to conditioning of the distribution of the parameters on the observed values of the data, predictive refers to the distribution of future unobserved quantities, and check refers to how well the predictions reflect the observations (41). This method is used to answer the question, "Do the observed data look plausible under the posterior distribution?" It is therefore solely a check of internal consistency of the model in question. [Pg.156]
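A minimal PPC sketch, under assumed inputs: a set of posterior parameter draws (here a hypothetical stand-in for real MCMC output), a simulator for replicated data sets, and the maximum observation (a Cmax-like statistic) as the test statistic:

```python
import numpy as np

def ppc_pvalue(y_obs, posterior_draws, simulate, stat=np.max, n_rep=2000, seed=1):
    """Posterior predictive check: simulate replicated data sets from
    posterior parameter draws and report how often the replicate's test
    statistic is at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    t_obs = stat(y_obs)
    t_rep = np.empty(n_rep)
    for i in range(n_rep):
        theta = posterior_draws[rng.integers(len(posterior_draws))]
        t_rep[i] = stat(simulate(theta, rng, len(y_obs)))
    return float(np.mean(t_rep >= t_obs))

def simulate(theta, rng, n):
    mu, sd = theta
    return rng.normal(mu, sd, size=n)

rng0 = np.random.default_rng(0)
y_obs = rng0.normal(5.0, 1.0, size=20)     # data consistent with the model
draws = [(5.0, 1.0)] * 500                 # hypothetical posterior draws
p_ok = ppc_pvalue(y_obs, draws, simulate)  # unremarkable p-value expected
y_bad = np.append(y_obs, 12.0)             # one implausibly high observation
p_bad = ppc_pvalue(y_bad, draws, simulate) # model cannot reproduce this maximum
```

An extreme PPC p-value (near 0 or 1) flags that the chosen feature of the data does not look plausible under the posterior, exactly the internal-consistency check described above.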

In this expression a common residual variance term is specified, although the residual variance can be indexed to the model, in which case the overall residual variance will be the sum of the contribution of the residual variance for each of the m candidate models. It has been found that chain mixing occurs faster when competing models are linked with a common parameter (e.g., the residual error) (42). It is common in the non-Bayesian model framework to address model selection as a... [Pg.158]

S. P. Riley, Pharmacokinetic model selection within a population analysis using NONMEM and WinBUGS, in AAPS Workshop on Bayesian Primer. AAPS, Salt Lake City, UT, 2003. [Pg.164]

With GAM, the data (covariates and individual Bayesian PM parameter estimates) would be subjected to a stepwise (single-term addition/deletion) modeling procedure. Each covariate is allowed to enter the model in any of several functional representations. The Akaike information criterion (AIC) is used as the model selection criterion (22). At each step, the model is changed by addition or deletion of the covariate that results in the largest decrease in the AIC. The search is stopped when the AIC reaches a minimum value. [Pg.389]
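The single-term addition/deletion search can be sketched as follows. This is a plain linear-regression stand-in, not a full GAM with multiple functional representations per covariate, and the covariate names and simulated clearance data are hypothetical:

```python
import math
import numpy as np

def aic_linear(cols, y):
    """AIC of an ordinary least-squares fit (Gaussian errors)."""
    n = len(y)
    X = np.column_stack([np.ones(n)] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1] + 1  # regression coefficients plus residual variance
    return n * math.log(rss / n) + 2 * k

def stepwise_aic(covariates, y):
    """Single-term addition/deletion search, stopping at a local AIC minimum."""
    selected, current = [], aic_linear([], y)
    while True:
        moves = []
        for name in covariates:
            trial = ([c for c in selected if c != name] if name in selected
                     else selected + [name])
            moves.append((aic_linear([covariates[t] for t in trial], y), trial))
        best_aic, best_set = min(moves, key=lambda m: m[0])
        if best_aic >= current:           # no move lowers AIC: stop
            return selected, current
        selected, current = best_set, best_aic

rng = np.random.default_rng(7)
n = 60
cov = {"WT": rng.normal(70.0, 10.0, n),    # hypothetical covariates
       "AGE": rng.normal(40.0, 12.0, n),
       "SEX": rng.integers(0, 2, n).astype(float)}
cl = 0.1 * cov["WT"] + rng.normal(0.0, 1.0, n)  # clearance driven by weight only
selected, final_aic = stepwise_aic(cov, cl)
```

Because only weight drives the simulated response, the search should add WT first and terminate once no further addition or deletion lowers the AIC.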

There are three schools of thought on selection of models: frequentist, information-theoretic, and Bayesian. The Bayesian paradigm for model selection has not yet penetrated the pharmacokinetics arena and as such will not be discussed here; the reader is referred to Hoeting et al. (1999) for a useful review paper. Frequentists rely on probability to select a model, and model development under this approach looks like the branches of a tree. A base model is developed and then one or more alternative models are developed. The alternative models are then compared to the base model and if one of the alternative models is statistically a... [Pg.22]

To summarize, in the Bayesian approach to model selection, the model classes are ranked according to p(D | Cj) P(Cj | U) for j = 1, 2, ..., NC, where the most plausible class of models representing the system is the one that gives the largest value of this quantity. The evidence p(D | Cj) can be calculated for each class of models using Equation (6.11), where the likelihood p(D | θ, Cj) is evaluated using the methods presented in Chapters 2-5. The prior distribution P(Cj | U) over all the model classes Cj, j = 1, 2, ..., NC, can be used to reflect other concerns, such as computational demand. However, that is beyond the scope of this book, and uniform prior plausibilities are chosen, leaving the Ockham factor alone to penalize the model classes. [Pg.223]
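The ranking step can be sketched numerically. The log-evidence values below are invented placeholders for outputs of an evidence computation such as Equation (6.11); with a uniform prior over classes, normalizing the evidence alone gives the plausibilities:

```python
import numpy as np

def model_class_posteriors(log_evidence, log_prior=None):
    """Rank model classes by p(D|Cj) P(Cj|U). With uniform prior
    plausibilities, the (log-)evidence -- which already embeds the
    Ockham-factor penalty -- does all the ranking."""
    log_evidence = np.asarray(log_evidence, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_evidence)  # uniform prior plausibilities
    log_post = log_evidence + np.asarray(log_prior, dtype=float)
    log_post -= log_post.max()  # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical log-evidence values for three candidate model classes:
p = model_class_posteriors([-102.3, -101.8, -110.5])
```

Working in log space before normalizing avoids underflow, since evidences for realistic data sets are astronomically small numbers.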

Note that the first four model classes possess similar plausibility, implying that the Bayesian model selection method does not show a strong preference for a single most plausible model class. This is in contrast to the previous case in the Tangshan region, in which the plausibility of the optimal model class is over 0.7. With the data of Xinjiang, a multi-model predictive formula can be used as follows ... [Pg.247]
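When several model classes are comparably plausible, the natural predictive formula weights each class's prediction by its plausibility. A minimal sketch, with invented predictions and plausibilities standing in for the Xinjiang results:

```python
import numpy as np

def model_averaged_prediction(predictions, plausibilities):
    """Multi-model predictive estimate: each candidate model class's
    prediction is weighted by its posterior plausibility."""
    w = np.asarray(plausibilities, dtype=float)
    w = w / w.sum()  # guard against weights that do not sum exactly to 1
    return float(np.dot(w, np.asarray(predictions, dtype=float)))

# Four similarly plausible model classes (hypothetical numbers):
est = model_averaged_prediction([6.1, 5.9, 6.3, 6.0], [0.28, 0.26, 0.24, 0.22])
```

This averaging hedges against committing to a single model class when, as in the case above, no class dominates.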

Beck, J. L. and Yuen, K.-V. Model selection using response measurements: Bayesian probabilistic approach. Journal of Engineering Mechanics (ASCE) 130(2) (2004), 192-203. [Pg.280]

Ouyang Z, Clyde MA, Wolpert RL (2008) Bayesian kernel regression and classification, Bayesian model selection and objective methods. Gainesville, NC [Pg.193]

When fitting models, the MLE is used to find the optimal fit to the data set. However, maximizing the log likelihood often results in fitting noise and in parameter estimates that are unstable, particularly when the data set is relatively small. This is because MLE places too much trust in the observed trends in the, often limited, data (Moons et al., 2004). To avoid possible over-fitting, the Bayesian Information Criterion (BIC) was utilized (Schwarz, 1978). BIC is a criterion for model selection that includes a penalty term for the number of parameters in the model. The BIC is given by the following equation ... [Pg.1509]
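The standard Schwarz form of the criterion, BIC = -2 ln L + k ln n, can be sketched as follows; the log-likelihoods and parameter counts are hypothetical, chosen to show the penalty overruling a small improvement in fit:

```python
import math

def bic(log_lik, k, n):
    """Schwarz's criterion: BIC = -2 ln L + k ln n (smaller is better).
    The k ln n term penalizes extra parameters, guarding against the
    over-fitting that raw likelihood maximization invites."""
    return -2.0 * log_lik + k * math.log(n)

# Hypothetical fits to a small data set (n = 30): the flexible model attains
# the higher log-likelihood, yet BIC prefers the simpler model.
simple = bic(log_lik=-52.0, k=3, n=30)
flexible = bic(log_lik=-49.5, k=8, n=30)
```

Here MLE alone would choose the flexible model (-49.5 > -52.0), but the five extra parameters cost 5 ln 30 ≈ 17 BIC units against a likelihood gain of only 5, so the simple model wins.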

Beck JL, Katafygiotis LS (1998) Updating models and their uncertainties. I: Bayesian statistical framework. J Eng Mech (ASCE) 124(4) 455-461
Beck JL, Yuen KV (2004) Model selection using response measurements: Bayesian probabilistic approach. J Eng Mech (ASCE) 130(2) 192-203
Ching J, Chen YC (2007) Transitional Markov chain Monte Carlo method for Bayesian model updating, model class selection and model averaging. J Eng Mech (ASCE) 133(7) 816-832
Durovic ZM, Kovacevic BD (1999) Robust estimation with unknown noise statistics. IEEE Trans Automat Control 44(6) 1292-1296... [Pg.32]

Beck J, Yuen K-V (2004) Model selection using response measurements: Bayesian probabilistic approach. ASCE J Eng Mech 130(2) 192-203 [Pg.1530]

Elicitation of judgment may be involved in the selection of a prior distribution for Bayesian analysis. However, particularly because of developments in Bayesian computing, Bayesian modeling may be useful in data-rich situations. In those situations the priors may contain little prior information and may be chosen in such a way that the results are dominated by the data rather than by the prior. The results may then be acceptable from a frequentist viewpoint, if not actually identical to some frequentist results. [Pg.49]
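A small conjugate-normal sketch makes the data-dominance point concrete: with a very diffuse prior on a mean, the posterior mean collapses onto the sample mean, the frequentist estimate. The data values and prior settings below are illustrative only:

```python
import numpy as np

def normal_posterior(y, prior_mean, prior_sd, sigma):
    """Conjugate normal posterior for a mean with known data sd `sigma`.
    Posterior precision is the sum of data and prior precisions; a very
    diffuse prior contributes almost nothing, so the data dominate."""
    prec = len(y) / sigma**2 + 1.0 / prior_sd**2
    mean = (np.sum(y) / sigma**2 + prior_mean / prior_sd**2) / prec
    return mean, prec ** -0.5

rng = np.random.default_rng(3)
y = rng.normal(10.0, 2.0, size=100)  # a data-rich situation
# Vague prior: sd of 1000 carries essentially no prior information.
post_mean, post_sd = normal_posterior(y, prior_mean=0.0, prior_sd=1e3, sigma=2.0)
```

Despite a prior centered at 0, the posterior mean is indistinguishable from the sample mean and the posterior sd matches the frequentist standard error, sigma/sqrt(n).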

One way to develop an in silico tool to predict promiscuity is to apply a naive Bayes (NB) classifier for modeling, a technique that compares the frequencies of features between selective and promiscuous sets of compounds. Bayesian classification has been applied in many studies and was recently compared to other machine-learning techniques [26, 27, 43, 51, 52]. [Pg.307]
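A minimal sketch of such a feature-frequency classifier, assuming hypothetical 3-bit structural fingerprints and Laplace smoothing (not the exact pipeline of the cited studies):

```python
import numpy as np

def nb_promiscuity_scores(fp_promiscuous, fp_selective, fp_query):
    """Laplace-smoothed naive Bayes over binary structural features:
    returns the log-odds that each query compound is promiscuous, built
    from per-feature frequencies in the two training sets."""
    p1 = (fp_promiscuous.sum(0) + 1.0) / (len(fp_promiscuous) + 2.0)
    p0 = (fp_selective.sum(0) + 1.0) / (len(fp_selective) + 2.0)
    # Sum log-likelihood ratios over present (1) and absent (0) features.
    return (fp_query @ np.log(p1 / p0)
            + (1 - fp_query) @ np.log((1 - p1) / (1 - p0)))

# Toy 3-bit fingerprints (hypothetical compounds):
promis = np.array([[1, 1, 0], [1, 1, 1], [1, 0, 1]])
select = np.array([[0, 0, 1], [0, 1, 0], [0, 0, 0]])
query = np.array([[1, 1, 1], [0, 0, 0]])
scores = nb_promiscuity_scores(promis, select, query)
```

A positive score means the query's features are more typical of the promiscuous set; the naive-Bayes independence assumption keeps the model fast enough for large compound libraries.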


See other pages where Model selection Bayesian is mentioned: [Pg.112]    [Pg.280]    [Pg.188]    [Pg.396]    [Pg.63]    [Pg.153]    [Pg.161]    [Pg.215]    [Pg.21]    [Pg.268]    [Pg.245]    [Pg.252]    [Pg.176]    [Pg.454]    [Pg.460]    [Pg.505]    [Pg.88]    [Pg.336]    [Pg.314]    [Pg.234]    [Pg.21]    [Pg.120]    [Pg.95]    [Pg.305]    [Pg.674]    [Pg.103]    [Pg.270]    [Pg.35]    [Pg.258]    [Pg.34]   

