
Bayesian regression

Murray, C.W., Auton, T.R. and Eldridge, M.D. Empirical scoring functions. II. The testing of an empirical scoring function for the prediction of ligand-receptor binding affinities and the use of Bayesian regression to improve the quality of the model. J. Comput.-Aided Mol. Des. 1998, 12, 503-519. [Pg.372]

Empirical Scoring Functions. II. The Testing of an Empirical Scoring Function for the Prediction of Ligand-Receptor Binding Affinities and the Use of Bayesian Regression to Improve the Quality of the Model. [Pg.57]

P. O. Maitre, M. Buhrer, D. Thomson, and D. R. Stanski, A three-step approach combining Bayesian regression and NONMEM population analysis: application to midazolam. J. Pharmacokinet. Biopharm. 19, 377-384 (1991). [Pg.243]

Murray, C.W., Auton, T.R. and Eldridge, M.D. (1998) Empirical scoring functions. II. The testing of an empirical scoring function for the prediction of ligand-receptor binding affinities and the use of Bayesian regression to improve the quality of the model. J. Comput.-Aided Mol. Des., 12, 503-519. [Pg.1127]

In the second approach, a Bayesian regression analysis (e.g., Straub and Der Kiureghian 2008; Ioannou and Rossetto 2013) is adopted in order to take into account prior information regarding the model's parameters, θ, especially when the available number of observations is small. Prior information is obtained from existing fragility functions or independent post-earthquake data of similar groups of assets. In addition, this... [Pg.984]

Referring to the situation in question 2, one might think that an informative prior would outweigh the effect of the increasing sample size. With respect to the Bayesian analysis of the linear regression, analyze the way in which the likelihood and an informative prior will compete for dominance in the posterior mean. [Pg.78]
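A concrete way to see this competition is the conjugate normal linear model with known error variance, where the posterior mean is a precision-weighted compromise between the prior mean and the least-squares estimate. The sketch below is a minimal illustration using the standard conjugate formulas on simulated data (all names and values are illustrative, not from the source):

```python
import numpy as np

# Conjugate Bayesian linear regression with known error variance sigma2.
# Posterior: beta | y ~ N(m_n, v_n) with
#   v_n = (1/s0 + x'x / sigma2)^(-1)
#   m_n = v_n * (m0/s0 + x'y / sigma2)
rng = np.random.default_rng(0)
true_beta, sigma2 = 2.0, 1.0
m0, s0 = 0.0, 0.25            # informative prior on the slope: N(0, 0.25)

for n in (10, 100, 10_000):
    x = rng.normal(size=n)
    y = true_beta * x + rng.normal(scale=np.sqrt(sigma2), size=n)
    prior_prec = 1.0 / s0     # fixed prior precision
    data_prec = x @ x / sigma2  # grows roughly linearly in n
    v_n = 1.0 / (prior_prec + data_prec)
    m_n = v_n * (prior_prec * m0 + x @ y / sigma2)
    print(f"n={n:6d}  posterior mean={m_n:6.3f}  OLS={x @ y / (x @ x):6.3f}")
```

Because the data precision x'x/sigma2 grows roughly linearly in n while the prior precision 1/s0 stays fixed, the likelihood eventually dominates any fixed informative prior, however strong; the prior matters mainly at small n.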

[West, 1984] West, M. (1984). Outlier models and prior distributions in Bayesian linear regression. Journal of the Royal Statistical Society, Series B, 46(3), 431-439. [Pg.567]

Figures 11 and 12 illustrate the performance of the pR2 compared with several currently popular criteria on a specific data set from one of the drug-hunting projects at Eli Lilly. This data set has IC50 values for 1289 molecules. There were 2317 descriptors (or covariates), and a multiple linear regression model was used with forward variable selection; the linear model was trained on half the data (selected at random) and evaluated on the other (hold-out) half. The root mean squared error of prediction (RMSE) for the hold-out test set is minimized when the model has 21 parameters. Figure 11 shows the model size chosen by several criteria applied to the training set in a forward selection: for example, the pR2 chose 22 descriptors, the Bayesian Information Criterion (BIC) chose 49, leave-one-out cross-validation chose 308, the adjusted R2 chose 435, and the Akaike Information Criterion (AIC) chose 512 descriptors. Although the pR2 criterion selected considerably fewer descriptors than the other methods, it had the best prediction performance. Also, only the pR2 and BIC had better prediction on the test data set than the null model.
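As a rough sketch of the forward-selection workflow described above, the following applies greedy forward selection scored by BIC to synthetic data (the Lilly data and the pR2 criterion are not reproduced here; descriptor counts, seeds, and variable names are illustrative):

```python
import numpy as np

# Greedy forward selection scored by BIC under a Gaussian likelihood.
rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)  # two true descriptors

def rss(cols):
    # Residual sum of squares of an OLS fit on the chosen columns.
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = y - Xs @ beta
    return r @ r

def bic(cols):
    k = len(cols) + 1  # +1 for the intercept
    return n * np.log(rss(cols) / n) + k * np.log(n)

selected, remaining = [], set(range(p))
while remaining:
    best = min(remaining, key=lambda j: bic(selected + [j]))
    if bic(selected + [best]) >= bic(selected):
        break              # no candidate improves BIC: stop growing the model
    selected.append(best)
    remaining.remove(best)

print("selected descriptors:", selected)
```

Replacing BIC's k ln n penalty with AIC's 2k penalty weakens the complexity penalty whenever n exceeds e² (about 7), which is consistent with AIC selecting far larger models than BIC in the study above.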
The Bayesian approach is more than a tool for adjusting the results of the all-subsets regression by adding appropriate effects to achieve effect heredity. Take, for example, the sixth model in Table 4, which consists of Al, Bl, AlDq, BlHl, BlHq, BqHq. The AlDq effect identified as part of this model does not appear in the best subsets of size 1-6 in Table 3. The Bayesian procedure has therefore discovered an additional possible subset of effects that describes the data. [Pg.239]

Central to Bayesian approaches is the treatment of model parameters, such as the vector of regression coefficients β, as random variables. Uncertainty and expert knowledge about these parameters are expressed via a prior distribution. The observed data give rise to a likelihood for the parameters. The likelihood and... [Pg.240]
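Symbolically, and as a standard statement rather than a quotation from the truncated passage, the two ingredients combine via Bayes' theorem:

```latex
p(\beta,\sigma \mid y) \;\propto\;
\underbrace{p(y \mid \beta,\sigma)}_{\text{likelihood}}\,
\underbrace{p(\beta,\sigma)}_{\text{prior}},
\qquad y \mid \beta,\sigma \sim N(X\beta,\ \sigma^2 I)
```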

Section 4 reviews simple, semi-automatic methods of choosing the hyperparameters of a prior distribution and adds some new insights into the choice of hyperparameters for a prior on the regression coefficient vector β. The glucose experiment and a simulated data set are used in Section 5 to demonstrate the application of the Bayesian subset selection technique. [Pg.241]
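One widely used semi-automatic choice from the general literature, offered here only as an example and not necessarily the method the source adopts, is Zellner's g-prior, which collapses the hyperparameter choice for β to a single scalar g:

```latex
\beta \mid \sigma^2 \sim N\!\bigl(0,\ g\,\sigma^2 (X^{\top}X)^{-1}\bigr),
\qquad g = n \quad \text{(unit-information choice)}
```

The unit-information choice g = n gives the prior roughly the weight of a single observation, so its influence fades as data accumulate.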

A Bayesian analysis proceeds by placing prior distributions on the regression coefficient vector β, the error standard deviation σ, and the subset indicator vector δ. One form of prior distribution is given in detail below, and other approaches are then discussed. Techniques for choosing hyperparameters of prior distributions, such as the mean of a prior distribution, are discussed later in Section 4. [Pg.242]
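One common concrete specification in the Bayesian subset-selection literature, given as an illustrative assumption rather than the exact form the source details below, is the spike-and-slab mixture of George and McCulloch (1993):

```latex
\delta_j \sim \mathrm{Bernoulli}(w), \qquad
\beta_j \mid \delta_j \sim (1-\delta_j)\,N(0,\tau^2) + \delta_j\,N(0,c^2\tau^2),
\qquad \sigma^2 \sim \mathrm{Inv\mbox{-}Gamma}(a,b)
```

Here τ is small (the spike, shrinking excluded effects to near zero) and cτ is large (the slab); w, τ, c, a, and b are the hyperparameters whose choice is discussed in Section 4.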

Raftery, A. E., Madigan, D., and Hoeting, J. A. (1997). Bayesian model averaging for linear regression models. Journal of the American Statistical Association, 92, 179-191. [Pg.267]





