
Naive Bayes

A machine-learning method was proposed by Klon et al. [104] as an alternative form of consensus scoring. The method proved unsuccessful for PKB but showed promise for the phosphatase PTP1B (protein tyrosine phosphatase 1B). In this approach, compounds were first docked into the receptor and scored by conventional means. The top-scoring compounds were then assumed to be active and used to build a naive Bayes classification model; all compounds were subsequently re-scored and ranked using the model. The method is heavily dependent upon predicting accurate binding... [Pg.47]
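As a concrete illustration of the loop described above, here is a minimal R sketch assuming the e1071 package and binary fingerprint descriptors; the names dock_score, fp, and top_n are hypothetical stand-ins, not Klon et al.'s actual implementation:

```r
library(e1071)

# Hypothetical inputs: 'dock_score' holds the conventional docking score per
# compound and 'fp' is a data frame of binary fingerprint bits stored as factors.
top_n  <- 100                                # assumed cutoff for "presumed actives"
labels <- factor(rank(-dock_score) <= top_n,
                 levels = c(FALSE, TRUE), labels = c("inactive", "active"))

model <- naiveBayes(fp, labels)              # train on the presumed actives
post  <- predict(model, fp, type = "raw")[, "active"]
reranked <- order(post, decreasing = TRUE)   # re-score and re-rank all compounds
```

The final ranking thus comes from the Bayes model's posterior probabilities rather than from the raw docking scores, which is why the method depends so heavily on the initial scoring being accurate enough that the top compounds really are enriched in actives.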

Glick, M., Klon, A.E., Acklin, P., and Davies, J.W., Enrichment of extremely noisy high-throughput screening data using a naive Bayes classifier, J. Biomol. Screen., 9, 32, 2004. [Pg.101]

Cao, J., Panetta, R., Yue, S., Steyaert, A., Young-Bellido, M. and Ahmad, S. (2003) A naive Bayes model to predict coupling between seven transmembrane domain receptors and G-proteins. Bioinformatics 19, 234-240. [Pg.54]

Technische Universität Wien: Does latent class analysis, short-time Fourier transform, fuzzy clustering, support vector machines, shortest path computation, bagged clustering, naive Bayes classifier, etc. (http://cran.r-project.org/web/packages/e1071/index.html)... [Pg.24]

Later, Sweredoski et al. (37) incorporated a combination of amino acid propensity scores and half-sphere exposure values at multiple distances to form the BEpro tool (formerly called PEPITO). Using the Epitopia algorithm, Rubinstein et al. (38) for the first time truly exploited an extensive set of physicochemical and structural geometrical features from an antigen's primary or tertiary structure. They trained the Naive Bayes classifier using a benchmark dataset of 66 and 194 validated nonredundant epitopes derived from antibody-antigen structures and antigen sequences... [Pg.133]

Epitopia: Physicochemical and structural geometrical features with Naive Bayes. http://epitopia.tau.ac.il/ Rubinstein et al. (38) [Pg.134]

Diverse Three TAACF datasets from PubChem 179 Naive Bayes, random forest, sequential minimal optimization (SMO), J48 decision tree. Used to create three models with different datasets. Naive Bayes had external test set accuracy of 73-82.7%, random forest 60.7-82.7%, SMO 55.9-83.3%, and J48 61.3-80%. Periwal et al. (36, 37) [Pg.249]

Data Discovery, bioinformatics and cheminformatics, and called naive Bayes... [Pg.193]

When applied to virtual screening, the naive Bayes classifier operates as follows. [Pg.193]

Clearly, the constant can be included in the threshold value B, so that the function f0(C) = 1 is not necessary. We must stress that in this form the probabilistic approach has no tuned parameters at all. Some tuning of the naive Bayes classifier can be performed by selecting the set of molecular structure descriptors [or fi(C)]. This is a wonderful feature in contrast to QSAR methods, and especially to artificial neural networks. [Pg.194]

The naive Bayes approach has several well-known difficulties. The conditional independence of the descriptors of a molecular structure does not hold as a rule. The probability estimates P(A|Di) can be close or even equal to 0 or 1, in which case the coefficients ai become too large or infinite. To overcome this problem, we have replaced the log-ratio ln[P(A|Di)/(1 - P(A|Di))] with ArcSin(2P(A|Di) - 1). The shape of ArcSin(2P(A|Di) - 1) coincides with the shape of ln[P(A|Di)/(1 - P(A|Di))] for almost all values of P(A|Di), but the ArcSin(2P(A|Di) - 1) values are bounded by ±π/2. [Pg.194]
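A short R sketch of this bounded-weight scoring, under stated assumptions: desc, active, and B are hypothetical names, and the additive smoothing is an added safeguard (the authors' exact estimator for P(A|Di) is not shown in the excerpt):

```r
# 'desc' is a 0/1 matrix: desc[c, i] = 1 if descriptor Di occurs in compound c;
# 'active' is a logical training label per compound; 'B' is the decision threshold.
n_with <- colSums(desc)                          # compounds containing Di
n_act  <- colSums(desc[active, , drop = FALSE])  # actives containing Di
p_act  <- (n_act + 0.5) / (n_with + 1)           # smoothed estimate of P(A | Di)

a <- asin(2 * p_act - 1)        # weights bounded in [-pi/2, pi/2], unlike log-odds
score <- as.vector(desc %*% a)  # additive score per compound
predicted_active <- score > B   # classify against the threshold B
```

Because asin is bounded, a descriptor seen only in actives (or only in inactives) contributes a large but finite weight, whereas the log-odds weight would diverge.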

Interestingly, the naive Bayes approach is 'too simple', but as a rule it provides high recognition accuracy. [Pg.194]

Sun [54] reported a naive Bayes classifier built around a training set of 1979 compounds with measured hERG activity from the Roche corporate collection. For the training set, 218 in-house atom-type descriptors were used to develop the model, and pIC50 = 4.52 was set as the threshold between hERG actives and inactives. A receiver operating characteristic (ROC) accuracy of 0.87 was achieved. The model was validated on an external set of 66 drugs, of which 58 were classified correctly (88% accuracy). [Pg.361]
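The labeling-and-validation scheme can be illustrated with a hedged R sketch; pROC is an assumed tool (the paper's software is not stated), and pic50 and model_score are hypothetical vectors:

```r
library(pROC)

# Hypothetical inputs: 'pic50' holds measured activities and 'model_score'
# holds the classifier's predicted scores for the same compounds.
label <- factor(pic50 > 4.52, levels = c(FALSE, TRUE),
                labels = c("inactive", "active"))   # threshold from the text

roc_obj <- roc(response = label, predictor = model_score)
auc(roc_obj)                                        # area under the ROC curve
```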

Cannon, E.O., Amini, A., Bender, A., Sternberg, M.J.E., Muggleton, S.H., Glen, R.C. and Mitchell, J.B.O. (2007) Support vector inductive logic programming outperforms the naive Bayes classifier and inductive logic programming for the classification of bioactive chemical compounds. [Pg.1003]

Klon, A.E., Glick, M. and Davies, J.W. (2004) Combination of a naive Bayes classifier with consensus scoring improves enrichment of high-throughput docking results. J. Med. Chem., 47, 4356-4359. [Pg.1094]

Sun, H. (2005) A naive Bayes classifier for prediction of multidrug resistance reversal activity on the basis of atom typing. J. Med. Chem., 48, 4031-4039. [Pg.1176]

Another work aims at classifying candidate correspondences (as relevant or not) by analysing their features [Naumann et al. 2002]. The features represent Boolean properties of data instances, such as the presence of delimiters. Thus, selecting an appropriate feature set is a first parameter to deal with. The choice of classifier is also important, and the authors propose, by default, the Naive Bayes classifier for categorical data and a quantile-based classifier for numerical data. [Pg.299]

Similarity measures based on machine learning are not always the most effective. The ASID matcher [Bozovic and Vassalos 2008] treats its Naive Bayes classifier (applied against schema instances) as a less credible similarity measure, to be used only after user (in)validation of the initial results provided by more reliable measures (Jaro and TF/IDF). We think that the credibility of machine-learning-based similarity measures depends heavily on the quality of their training data. [Pg.299]

Rennie, J.D.M. 2001. Improving multi-class text classification with Naive Bayes. Master's thesis, Massachusetts Institute of Technology. [Pg.191]

Other studies have focused on identifying surface residues that bind RNA. For example, this problem has been cast in the binary classification setting, where the data comprise annotated structures gathered in a fashion similar to DNA-binding residue prediction; these works have employed a number of classifiers, including neural networks [48,49,52], SVM [53], and Naive Bayes [54]. This problem has also been cast in the structured-prediction setting, which is decomposed into a binary classification problem (solved by neural networks) followed by post-processing... [Pg.49]

Watson, P. Naive Bayes classification using 2D pharmacophore feature triplet vectors. J. Chem. Inf. Model. 2008, 48, 166-178. [Pg.215]

Mixed-integer programming hyperboxes classification, Bayes Network, Naive Bayes, Liblinear, LibSVM, RBF network, SMO, Logistic, IBk, Bagging, Ensemble selection, LogitBoost, LMT, NBTree, Random Forest, DTNB [Pg.325]

Yousef, M., Jung, S., Kossenkov, A.V., Showe, L.C. and Showe, M.K. (2007) Naive Bayes for microRNA target predictions - machine learning for microRNA targets. Bioinformatics... [Pg.467]

A naive Bayes classifier is a simple probabilistic classifier based on Bayes' theorem with strong independence assumptions, and it is particularly suited when the dimensionality of the inputs is high. The naive Bayes model assumes that, given a class Γ = j, the features Xi are independent. Despite its simplicity, the naive Bayes classifier is known to be a robust method even if the independence assumption does not hold (Michalski and Kaufman, 2001). [Pg.132]

The probabilistic model for a naive Bayes classifier is a conditional model P(Γ | X1, X2, ..., Xn) over a dependent class variable Γ, conditional on features X1, X2, ..., Xn. Using Bayes' theorem, P(Γ | X1, ..., Xn) ∝ P(Γ) p(X1, ..., Xn | Γ). The prior probability P(Γ = j) can be calculated as the proportion of class-j samples: P(Γ = j) = (number of class j samples)/(total number of samples). Having formulated the prior probabilities, the likelihood function p(X1, X2, ..., Xn | Γ) can be written as ∏i p(Xi | Γ) under the naive conditional independence assumption of the feature Xi with the feature Xj for j ≠ i. A new sample is classified to the class with maximum posterior probability, which is argmax_j P(Γ = j) ∏i p(Xi | Γ = j). If the independence assumption is correct, this is the Bayes optimal classifier for the problem. Extensions of the naive Bayes classifier can be found in Demichelis et al. (2006). [Pg.132]
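These formulas translate directly into a compact from-scratch R sketch for categorical features (a hypothetical toy interface, not the book's code; log space avoids numerical underflow, and no smoothing is applied, so unseen feature values yield -Inf log-likelihoods):

```r
# Empirical prior P(Gamma = j) and class-conditional tables p(Xi | Gamma = j).
prior <- function(y) prop.table(table(y))
cond  <- function(x, y) prop.table(table(x, y), margin = 2)

# Classify one new sample:
# argmax_j [ log P(Gamma = j) + sum_i log p(xi | Gamma = j) ].
classify <- function(newx, X, y) {
  lp <- log(as.numeric(prior(y)))
  for (i in seq_along(newx))
    lp <- lp + log(cond(X[[i]], y)[newx[[i]], ])
  levels(factor(y))[which.max(lp)]
}
```

With a data frame X of factor columns and a class factor y, classify(list("a", "b"), X, y) returns the maximum-posterior class label for a sample whose first feature is "a" and second is "b".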

Naive Bayes: To use the naive Bayes function in R, the package e1071 should be loaded using library(e1071). With a learning dataset, this function can be used as shown in the sketch below.
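A minimal sketch of that call, using R's built-in iris data purely as a placeholder for the learning dataset (the snippet's own dataset is not shown):

```r
library(e1071)

# Fit a naive Bayes model on a learning dataset (iris used as a stand-in).
model <- naiveBayes(Species ~ ., data = iris)

# Predict classes for new data and inspect the confusion matrix.
pred <- predict(model, newdata = iris)
table(predicted = pred, actual = iris$Species)
```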

Demichelis, F., Magni, P., Piergiorgi, P., Rubin, M.A., and Bellazzi, R. (2006). A hierarchical naive Bayes model for handling sample heterogeneity in classification problems: An application to tissue microarrays. BMC Bioinformatics, 7: 514. [Pg.154]

