Big Chemical Encyclopedia


Bayesian learning

J. Devillers (Ed.), Neural Networks in QSAR and Drug Design, Academic Press, London, 1996; S. Haykin, Neural Networks, Macmillan, New York, 1994; R. Neal, Bayesian Learning for Neural Networks, Springer, New York, 1996. [Pg.606]

Inductive logic programming (ILP) is not a pharmacophore generation method in itself, but a subfield of machine learning. Other methods available in this field include hidden Markov models, Bayesian learning, decision trees and logic programs. [Pg.44]

Foresee, F.D. and Hagan, M.T. (1997) Gauss-Newton approximation to Bayesian learning. Proceedings of the International Joint Conference on Neural Networks, 3, 1930-5. [Pg.223]

Neal, R.M. (1996) Bayesian Learning for Neural Networks, Lecture Notes in Statistics, vol. 118, Springer-Verlag, New York. [Pg.365]

A common use of statistics in structural biology is as a tool for deriving predictive distributions of structural parameters based on sequence. The simplest of these are predictions of secondary structure and side-chain surface accessibility. Various algorithms that can learn from data and then make predictions have been used to predict secondary structure and surface accessibility, including ordinary statistics [79], information theory [80], neural networks [81-86], and Bayesian methods [87-89]. A disadvantage of some neural network methods is that the parameters of the network sometimes have no physical meaning and are difficult to interpret. [Pg.338]

The Bayesian network technology embedded in the ARBITER tool is also well suited for learning both probability relationships (e.g., method reliability estimates) and the essential structure of cause and effect, from data sets where predictions and outcomes can be compared. Colleagues have already applied this capability on a large scale for risk management (selection of potentially suspect claims for further inspection and examination) in the insurance industry. [Pg.271]

The knowledge required to implement Bayes' formula is daunting in that a priori as well as class-conditional probabilities must be known. Some reduction in requirements can be accomplished by using joint probability distributions in place of the a priori and class-conditional probabilities. Even with this simplification, few interpretation problems are so well posed that the information needed is available. It is possible to employ the Bayesian approach by estimating the unknown probabilities and probability density functions from exemplar patterns that are believed to be representative of the problem under investigation. This approach, however, implies supervised learning, where the correct class label for each exemplar is known. The ability to perform data interpretation is determined by the quality of the estimates of the underlying probability distributions. [Pg.57]
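The estimation step described above can be sketched in a few lines. The following is a minimal illustration, not taken from the cited source: priors and Gaussian class-conditional densities are estimated from labeled exemplars, and a new pattern is assigned to the class maximizing the posterior under Bayes' rule (the independence assumption across features and the toy data are assumptions made here for brevity):

```python
import math

def fit_gaussian_bayes(X, y):
    """Estimate class priors and per-feature Gaussian class-conditional
    densities from labeled exemplar patterns (supervised learning)."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(y)
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows) or 1e-9
                     for col, m in zip(zip(*rows), means)]
        model[c] = (prior, means, variances)
    return model

def predict(model, x):
    """Assign x to the class with the highest log-posterior (Bayes' rule),
    assuming features are conditionally independent given the class."""
    best, best_score = None, -math.inf
    for c, (prior, means, variances) in model.items():
        score = math.log(prior)  # log P(class)
        for v, m, s2 in zip(x, means, variances):
            # log of the Gaussian density p(x_i | class)
            score += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if score > best_score:
            best, best_score = c, score
    return best

X = [[1.0, 2.0], [1.2, 1.8], [4.0, 5.0], [4.2, 4.8]]  # toy exemplars
y = [0, 0, 1, 1]
model = fit_gaussian_bayes(X, y)
print(predict(model, [1.1, 1.9]))  # falls in the class-0 region
```

The quality of the resulting classifier hinges, exactly as the excerpt notes, on how representative the exemplars are of the true underlying distributions.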

One way to develop an in silico tool to predict promiscuity is to apply a naive Bayes (NB) classifier for modeling, a technique that compares the frequencies of features between selective and promiscuous sets of compounds. Bayesian classification has been applied in many studies and was recently compared to other machine-learning techniques [26, 27, 43, 51, 52]. [Pg.307]
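The frequency comparison at the core of such a classifier can be sketched as a Bernoulli naive Bayes model over binary fingerprints. This is an illustrative sketch, not the method of any cited study; the fingerprints, labels and smoothing constant are assumptions for the example:

```python
import math

def fit_bernoulli_nb(fps, labels, alpha=1.0):
    """Estimate Laplace-smoothed per-class bit frequencies from binary
    fingerprints; labels mark e.g. 'selective' vs 'promiscuous' compounds."""
    model = {}
    for c in sorted(set(labels)):
        rows = [fp for fp, lab in zip(fps, labels) if lab == c]
        prior = len(rows) / len(labels)
        # P(bit = 1 | class), smoothed so unseen bits do not zero out scores
        freqs = [(sum(col) + alpha) / (len(rows) + 2 * alpha)
                 for col in zip(*rows)]
        model[c] = (prior, freqs)
    return model

def classify(model, fp):
    """Pick the class whose feature frequencies best explain the fingerprint."""
    best, best_lp = None, -math.inf
    for c, (prior, freqs) in model.items():
        lp = math.log(prior)
        for bit, p in zip(fp, freqs):
            lp += math.log(p if bit else 1.0 - p)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy data: bit 0 is enriched among 'promiscuous' compounds
fps = [[1, 0, 1], [1, 1, 0], [0, 0, 1], [0, 1, 1]]
labels = ["promiscuous", "promiscuous", "selective", "selective"]
model = fit_bernoulli_nb(fps, labels)
print(classify(model, [1, 0, 0]))  # bit 0 set, so 'promiscuous' scores higher
```

In practice the fingerprints would be structural keys or circular fingerprints computed from the compound structures, but the frequency-comparison logic is the same.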

More complex approaches to this problem involve the use of artificial neural networks [22], Bayesian networks [23] and support vector machines [24], which in turn are based on the same principle of supervised learning [25]. [Pg.556]

Many different methods can be applied to virtual screening, and such methods are described in other chapters of this book and/or in the Handbooks of Cheminformatics. Here we discuss the methods based on a probabilistic approach. Unfortunately, there are many publications in which the claim of a probabilistic or statistical approach is far-fetched. The Binary Kernel Discrimination and the Bayesian Machine Learning Models are actually special... [Pg.191]

Bahler D, Stone B, Wellington C, Bristol DW. Symbolic, neural, and Bayesian machine learning models for predicting carcinogenicity of chemical compounds. J Chem Inf Comput Sci 2000; 40: 906-14. [Pg.203]

Domingos P, Pazzani M. On the optimality of the simple Bayesian classifier under zero-one loss. Mach Learn 1997; 29: 103-30. [Pg.343]

There are many ways of characterizing different statistical machine-learning methods and protocols, but in this section, they will be organized into linear and nonlinear methods (even though the descriptor matrix they operate on may contain higher order terms and cross-terms) as well as rule-based and Bayesian methods. [Pg.388]



© 2024 chempedia.info