
Naive Bayes’ classifier

Glick, M., Klon, A.E., Acklin, P., and Davies, J.W., Enrichment of extremely noisy high-throughput screening data using a naive Bayes classifier, J. Biomol. Screen., 9, 32, 2004. [Pg.101]

Technische Universität Wien: Does latent class analysis, short-time Fourier transform, fuzzy clustering, support vector machines, shortest path computation, bagged clustering, naive Bayes classifier, etc. (http://cran.r-project.org/web/packages/e1071/index.html)... [Pg.24]

Later, Sweredoski et al. (37) incorporated a combination of amino acid propensity scores and half-sphere exposure values at multiple distances to form the BEpro tool (formerly called PEPITO). Using the Epitopia algorithm, Rubinstein et al. (38) for the first time truly exploited an extensive set of physicochemical and structural geometrical features from an antigen's primary or tertiary structures. They trained the naive Bayes classifier using a benchmark dataset of 66 and 194 validated nonredundant epitopes derived from antibody-antigen structures and antigen sequences... [Pg.133]

When applied to virtual screening, the naive Bayes classifier operates as follows. [Pg.193]

Clearly, the constant can be folded into the threshold value B, so that the function f0(C) = 1 is not necessary. We must stress that in this form the probabilistic approach has no tuned parameters at all. Some tuning of the naive Bayes classifier can be performed by selecting the set of molecular structure descriptors [or f(C)]. This is a wonderful feature in contrast to QSAR methods, especially Artificial Neural Networks. [Pg.194]
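
As a concrete sketch of this formulation, assume binary substructure descriptors f_i(C) in {0, 1} and score a compound by the sum of per-descriptor log-likelihood ratios, comparing the sum against the threshold B. The function names and the Laplace smoothing below are illustrative assumptions, not taken from the source.

```python
# Minimal sketch of a naive Bayes scorer for virtual screening over
# binary substructure descriptors. The class-prior constant is folded
# into the decision threshold B, so f0(C) = 1 is not needed explicitly.
import numpy as np

def fit_naive_bayes(X_active, X_inactive, alpha=1.0):
    """Estimate P(f_i = 1 | class) per descriptor, with Laplace
    smoothing alpha, for the active and inactive classes."""
    p_act = (X_active.sum(axis=0) + alpha) / (len(X_active) + 2 * alpha)
    p_inact = (X_inactive.sum(axis=0) + alpha) / (len(X_inactive) + 2 * alpha)
    return p_act, p_inact

def score(x, p_act, p_inact):
    """Sum of per-descriptor log-likelihood ratios for compound x."""
    return np.sum(x * np.log(p_act / p_inact)
                  + (1 - x) * np.log((1 - p_act) / (1 - p_inact)))

# Toy demo: classify as active when score >= B. The only "tuning" left
# is the choice of descriptor set, as noted in the text above.
actives = np.array([[1, 1, 0], [1, 0, 1]])
inactives = np.array([[0, 0, 1], [0, 1, 0]])
p_act, p_inact = fit_naive_bayes(actives, inactives)
B = 0.0  # decision threshold absorbing the constant
print(score(np.array([1, 1, 1]), p_act, p_inact) >= B)
```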

Sun [54] reported a naive Bayes classifier built around a training set of 1979 compounds with measured hERG activity from the Roche corporate collection. For the training set, 218 in-house atom-type descriptors were used to develop the model, and pIC50 = 4.52 was set as the threshold between hERG actives and inactives. A receiver operating characteristic (ROC) accuracy of 0.87 was achieved. The model was validated on an external set of 66 drugs, of which 58 were classified correctly (88% accuracy). [Pg.361]
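
As an illustration of this workflow (not Sun's actual code; the in-house atom-type descriptors are proprietary and replaced here by random placeholders), a sketch using scikit-learn's BernoulliNB: compounds are labeled by the pIC50 = 4.52 threshold, a naive Bayes model is trained, and ROC AUC is computed on a held-out set.

```python
# Hedged sketch of the hERG classification workflow described above.
# X and pic50 are stand-ins; with random data the AUC will be ~0.5.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1979, 218))   # stand-in atom-type descriptors
pic50 = rng.normal(4.5, 1.0, size=1979)    # stand-in measured activities
y = (pic50 >= 4.52).astype(int)            # actives vs. inactives

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = BernoulliNB().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"ROC AUC on held-out compounds: {auc:.2f}")
```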

Cannon, E.O., Amini, A., Bender, A., Sternberg, M.J.E., Muggleton, S.H., Glen, R.C. and Mitchell, J.B.O. (2007) Support vector inductive logic programming outperforms the naive Bayes classifier and inductive logic programming for the classification of bioactive chemical compounds. [Pg.1003]

Klon, A.E., Glick, M. and Davies, J.W. (2004) Combination of a naive Bayes classifier with consensus scoring improves enrichment of high-throughput docking results. J. Med. Chem., 47, 4356-4359. [Pg.1094]

Sun, H. (2005) A naive Bayes classifier for prediction of multidrug resistance reversal activity on the basis of atom typing. J. Med. Chem., 48, 4031-4039. [Pg.1176]

Another work aims at classifying candidate correspondences (as relevant or not) by analysing their features [Naumann et al. 2002]. The features represent boolean properties over data instances, such as the presence of delimiters. Thus, selecting an appropriate feature set is the first parameter to deal with. The choice of classifier is also important, and the authors propose, by default, the naive Bayes classifier for categorical data and a quantile-based classifier for numerical data. [Pg.299]
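
A minimal sketch of this idea follows, with invented boolean feature names (the actual feature set of [Naumann et al. 2002] is not reproduced here): each candidate schema correspondence is described by boolean features over data instances, and a naive Bayes classifier labels it relevant or not.

```python
# Toy sketch: classify candidate schema correspondences from boolean
# instance features using Bernoulli naive Bayes.
from sklearn.naive_bayes import BernoulliNB

# columns: has_delimiters_match, same_avg_length, same_char_class
X_train = [[1, 1, 1], [1, 0, 1], [0, 0, 0], [0, 1, 0]]
y_train = [1, 1, 0, 0]  # 1 = relevant correspondence, 0 = not

clf = BernoulliNB().fit(X_train, y_train)
print(clf.predict([[1, 1, 0]]))  # classify a new candidate correspondence
```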

Similarity measures based on machine learning may not always be the most effective. The ASID matcher [Bozovic and Vassalos 2008] treats its naive Bayes classifier (run against schema instances) as a less credible similarity measure, applied only after user (in)validation of the initial results provided by more reliable measures (Jaro and TF/IDF). We think that the credibility of machine learning-based similarity measures heavily depends on the quality of their training data. [Pg.299]

A naive Bayes classifier is a simple probabilistic classifier based on Bayes' theorem with strong independence assumptions, and it is particularly suited to problems where the dimensionality of the inputs is high. The naive Bayes model assumes that, given a class Γ = j, the features Xi are independent. Despite its simplicity, the naive Bayes classifier is known to be a robust method even if the independence assumption does not hold (Michalski and Kaufman, 2001). [Pg.132]

The probabilistic model for a naive Bayes classifier is a conditional model P(Γ | X1, X2, ..., Xn) over a dependent class variable Γ, conditional on features X1, X2, ..., Xn. Using Bayes' theorem, P(Γ | X1, ..., Xn) ∝ P(Γ) p(X1, ..., Xn | Γ). The prior probability P(Γ = j) can be calculated from the proportion of class j samples: P(Γ = j) = (number of class j samples)/(total number of samples). Having formulated the prior probabilities, the likelihood function p(X1, X2, ..., Xn | Γ) can be written as ∏i p(Xi | Γ) under the naive conditional independence assumption of feature Xi with feature Xj for j ≠ i. A new sample is classified to the class with maximum posterior probability, which is argmax_j P(Γ = j) ∏i p(Xi | Γ = j). If the independence assumption is correct, this is the Bayes optimal classifier for the problem. Extensions of the naive Bayes classifier can be found in Demichelis et al. (2006). [Pg.132]
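
To make the recipe concrete, here is a minimal from-scratch sketch of the formulation above: the prior is the class-frequency ratio, the likelihood factorizes per feature, and classification takes the argmax posterior. The Laplace smoothing and all function names are illustrative assumptions, not part of the source.

```python
# From-scratch categorical naive Bayes matching the formulas above.
# Laplace smoothing prevents unseen feature values from zeroing the
# product (an added assumption, not in the text).
from collections import Counter, defaultdict
import math

def train(samples, labels):
    priors = {j: c / len(labels) for j, c in Counter(labels).items()}
    counts = defaultdict(Counter)   # (class, feature index) -> value counts
    for x, j in zip(samples, labels):
        for i, v in enumerate(x):
            counts[(j, i)][v] += 1
    return priors, counts

def classify(x, priors, counts, n_values=2):
    best, best_lp = None, -math.inf
    for j, p in priors.items():
        lp = math.log(p)            # log P(Gamma = j)
        for i, v in enumerate(x):   # + sum_i log p(X_i = v | Gamma = j)
            c = counts[(j, i)]
            lp += math.log((c[v] + 1) / (sum(c.values()) + n_values))
        if lp > best_lp:
            best, best_lp = j, lp
    return best  # argmax_j P(Gamma = j) * prod_i p(X_i | Gamma = j)

priors, counts = train([[1, 0], [1, 1], [0, 1], [0, 0]], [1, 1, 0, 0])
print(classify([1, 1], priors, counts))
```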

FIGURE 1: A visualization of the naive Bayes classifier. SOURCE: Reprinted with permission from AAAI Press (Brunk et al., 1997). [Pg.33]

The naive Bayes classifier is a simple and popular approach in word sense disambiguation. Here, we attempt to estimate the probability that sense wi is the correct one for a token, based on a number of features... [Pg.87]
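
As a toy illustration (not from the source), the sketch below scores each candidate sense by its prior times the product of context-word likelihoods and picks the highest-scoring sense; the two "bank" senses and the training contexts are invented.

```python
# Toy naive Bayes word sense disambiguation: context words are the
# features, and the predicted sense maximizes P(s) * prod P(word | s).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

contexts = ["river water fish shore", "money loan deposit account",
            "water shore boat", "loan interest money"]
senses = ["bank/river", "bank/finance", "bank/river", "bank/finance"]

wsd = make_pipeline(CountVectorizer(), MultinomialNB()).fit(contexts, senses)
print(wsd.predict(["deposit the money at the account"]))  # -> bank/finance
```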

Popular classifiers include context-sensitive rewrite rules, decision lists, decision trees, naive Bayes classifiers and HMM taggers. [Pg.110]

Prediction of Drug-Induced PT Toxicity and Injury Mechanisms with an hiPSC-Based Model and Machine Learning Methods. The weak points of the HPTC- and hESC-based models described previously (Sections 23.3.2.1 and 23.3.3.1) were the data analysis procedures. In order to improve result classification, the raw data obtained with three batches of HPTC and the IL6/IL8-based model (Li et al., 2013) were reanalyzed by machine learning (Su et al., 2014). Random forest (RF), support vector machine (SVM), k-NN, and naive Bayes classifiers were tested. Best results were obtained with the RF classifier, and the mean values (three batches of HPTC) ranged between 0.99 and 1.00 with respect to sensitivity, specificity, balanced accuracy, and AUC/ROC (Su et al., 2014). Thus, excellent predictivity could be obtained by combining the IL6/IL8-based model with automated classification by machine learning. [Pg.378]
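
A hedged sketch of this kind of model comparison follows; the feature matrix and labels are random placeholders (the IL6/IL8 readout data from Su et al., 2014 are not reproduced here), and scikit-learn implementations stand in for whatever tools the authors used.

```python
# Cross-validated comparison of RF, SVM, k-NN, and naive Bayes on one
# feature matrix, scored by ROC AUC, mirroring the comparison above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))               # stand-in IL6/IL8 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # stand-in toxicity labels

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.2f}")
```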

