Big Chemical Encyclopedia


Bayes classification

A machine-learning method was proposed by Klon et al. [104] as an alternative form of consensus scoring. The method proved unsuccessful for PKB, but showed promise for the phosphatase PTP1B (protein tyrosine phosphatase 1B). In this approach, compounds were first docked into the receptor and scored by conventional means. The top-scoring compounds were then assumed to be active and used to build a naive Bayes classification model; all compounds were subsequently re-scored and ranked using the model. The method is heavily dependent upon predicting accurate binding... [Pg.47]
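The workflow described above can be sketched in a few lines: label the top-scoring docked compounds as "active", fit a Bernoulli naive Bayes model over binary fingerprint bits, then re-rank every compound by its active-class posterior. This is only an illustration with invented fingerprints and docking scores, not Klon et al.'s actual model.

```python
import math

def train_naive_bayes(fps, labels):
    """Fit a Bernoulli naive Bayes model over binary fingerprint bits."""
    n_bits = len(fps[0])
    classes = sorted(set(labels))
    priors, cond = {}, {}
    for c in classes:
        rows = [fp for fp, y in zip(fps, labels) if y == c]
        priors[c] = len(rows) / len(fps)
        # Laplace-smoothed probability that each bit is set, given class c
        cond[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                   for j in range(n_bits)]
    return priors, cond

def log_posterior(fp, priors, cond, c):
    """Unnormalised log posterior of class c for fingerprint fp."""
    lp = math.log(priors[c])
    for j, bit in enumerate(fp):
        p = cond[c][j]
        lp += math.log(p if bit else 1 - p)
    return lp

# Hypothetical data: (fingerprint, docking score); more negative = better.
docked = [([1, 1, 0], -12.1), ([1, 0, 0], -11.5),
          ([0, 0, 1], -6.2),  ([0, 1, 1], -5.9)]
docked.sort(key=lambda t: t[1])            # best-scoring compounds first
top_n = 2                                  # assume the top N are "active"
fps = [fp for fp, _ in docked]
labels = [1 if i < top_n else 0 for i in range(len(docked))]

priors, cond = train_naive_bayes(fps, labels)
# Re-score and re-rank everything by the active-class posterior
rescored = sorted(fps, key=lambda fp: -log_posterior(fp, priors, cond, 1))
```

The re-ranked list puts the fingerprints resembling the assumed actives first, which is the whole point of the consensus re-scoring step.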

Supervised learning methods:
- multivariate analysis of variance and discriminant analysis (MVDA)
- k-nearest neighbours (kNN)
- linear learning machine (LLM)
- Bayes classification
- soft independent modeling of class analogy (SIMCA)
- UNEQ classification
Quantitative demarcation of a priori classes, relationships between class properties and variables... [Pg.7]

Setser and Thrush [116, 147] extended Bayes classification by observing bands from B ²Σ⁺ (5 ≤ v ≤ 15) and suggesting that these formed part of the... [Pg.40]

Watson, P. Naive Bayes classification using 2D pharmacophore feature triplet vectors. J. Chem. Inf. Model. 2008, 48, 166–178. [Pg.215]

Naive Bayes classification uses Bayes' theorem together with a naive independence assumption to calculate probabilities of class membership. Predicting class membership can be done directly using Bayes' theorem with only one feature and the prior probability. As an example, consider that we are investigating bioimpedance as a marker... [Pg.385]

An increase in the number of neighbours makes it possible to estimate the probabilities of all classes at the point defined by the unknown pattern. With a large number of neighbours the kNN method becomes a modified Bayes classification (Chapter 5). [Pg.64]
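The class-probability estimate the passage refers to is just the fraction of each class among the k nearest neighbours. A small sketch with made-up 2-D training points:

```python
from collections import Counter

def knn_class_probs(train, query, k):
    """Estimate class probabilities at `query` from its k nearest neighbours.

    train -- list of (point, label) pairs; point is a coordinate tuple
    """
    by_dist = sorted(train, key=lambda item: sum((a - b) ** 2
                                                 for a, b in zip(item[0], query)))
    votes = Counter(label for _, label in by_dist[:k])
    return {c: n / k for c, n in votes.items()}

# Hypothetical two-class data
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B")]
probs_k3 = knn_class_probs(train, (0.2, 0.2), 3)
probs_k5 = knn_class_probs(train, (0.2, 0.2), 5)
```

With k = 3 all neighbours are class A; with k = 5 the estimate shifts toward the overall class fractions, illustrating how large k approaches the Bayes-style probability estimate.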

The Bayes classifier minimizes the total expected loss. The minimum expected risk of a Bayes classification can be calculated if all... [Pg.79]
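Minimising total expected loss means choosing, for each observation, the action whose conditional risk is smallest. A minimal sketch with an invented two-class loss matrix (class names and costs are illustrative, not from the source):

```python
def bayes_decision(posteriors, loss):
    """Return the action with minimum expected loss (conditional risk).

    posteriors -- {class: P(class | x)}
    loss       -- {action: {class: cost of that action when class is true}}
    """
    def risk(action):
        return sum(loss[action][c] * p for c, p in posteriors.items())
    return min(loss, key=risk)

# Hypothetical posteriors and asymmetric losses
posteriors = {"healthy": 0.7, "disease": 0.3}
loss = {"treat":    {"healthy": 1, "disease": 0},
        "no_treat": {"healthy": 0, "disease": 10}}
decision = bayes_decision(posteriors, loss)
```

Here the risk of treating is 0.7 while the risk of not treating is 3.0, so the minimum-risk decision is to treat even though "healthy" is the more probable class; asymmetric losses can override the maximum-posterior rule.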

Friedman [12] introduced a Bayesian approach; the Bayes equation is given in Chapter 16. In the present context, a Bayesian approach can be described as finding a classification rule that minimizes the risk of misclassification, given the prior probabilities of belonging to a given class. These prior probabilities are estimated from the fraction of each class in the pooled sample... [Pg.221]

Linear discriminant analysis (LDA) is also a probabilistic classifier in the mold of Bayes algorithms, but can be related closely to both regression and PCA techniques. A discriminant function is simply a function of the observed vector of variables (K) that leads to a classification rule. The likelihood ratio (above), for example, is an optimal discriminant for the two-class case. Hence, the classification rule can be stated as... [Pg.196]
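The two-class likelihood-ratio discriminant can be made concrete for a single Gaussian feature: assign class 1 whenever p(x|1)/p(x|2) exceeds the prior ratio. A sketch with assumed equal variances and invented class means:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std sigma at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio_rule(x, mu1, mu2, sigma, prior1, prior2):
    """Assign class 1 iff p(x|1)/p(x|2) > prior2/prior1, else class 2."""
    lr = gaussian_pdf(x, mu1, sigma) / gaussian_pdf(x, mu2, sigma)
    return 1 if lr > prior2 / prior1 else 2

# Hypothetical classes centred at 0 and 4 with unit variance, equal priors
c_low  = likelihood_ratio_rule(1.0, 0.0, 4.0, 1.0, 0.5, 0.5)
c_high = likelihood_ratio_rule(3.0, 0.0, 4.0, 1.0, 0.5, 0.5)
```

With equal priors the threshold sits at the midpoint of the two means, which is exactly the boundary a linear discriminant would give for shared variance.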

Figure 3.9 Stratification–circulation diagrams used to describe a spectrum of circulation and geomorphometric types of estuaries that can be defined by stratification. Estuarine types are as follows: Type 1 estuaries are those without upstream flow, requiring tidal transport for salt balance; Type 2 estuaries are partially mixed [e.g., Narrows of the Mersey (NM) (UK), James River (J) (USA), Columbia River estuary (C) (USA)]; Type 3 estuaries are representative of fjords [e.g., Silver Bay (S), Strait of Juan de Fuca (JF) (USA)]; and Type 4 estuaries are indicative of salt-wedge estuaries [e.g., Mississippi River (M) (USA)]. The basic classification parameters are as follows: the stratification is defined by δS/S₀, where δS is the difference in salinity between surface and bottom water and S₀ is the mean-depth salinity, both averaged over a tidal cycle; and Us/Uf, where Us is the surface velocity (averaged over a tidal cycle) and Uf is the vertically averaged net outflow. The subdivisions a and b represent values where δS/S₀ < 0.1 and δS/S₀ > 0.1, respectively; subscripts h and l refer to high and low river flow. The curved line at the top represents the limit of surface freshwater outflow. (From Hansen and Rattray, 1966, as modified by Jay et al., 2000, with permission.)
The probabilistic neural network (PNN) is similar to GRNN except that it is used for classification problems [54]. It has been used for pharmacodynamic [55] and pharmacokinetic [34,56] studies, and has recently been applied to genotoxicity [43,50,57] and torsade de pointes prediction [58]. PNN classifies compounds into their data class through the use of Bayes's optimal decision rule... [Pg.224]
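A PNN effectively estimates each class density with a Gaussian (Parzen) kernel over the training points and then applies Bayes's decision rule to the prior-weighted densities. A minimal Parzen-window sketch with invented 2-D descriptors and class labels:

```python
import math

def pnn_classify(train, x, sigma=0.5, priors=None):
    """Parzen-window (PNN-style) classifier with a Gaussian kernel.

    train -- list of (point, label) pairs
    sigma -- kernel width (smoothing parameter of the pattern layer)
    """
    classes = sorted({label for _, label in train})
    if priors is None:
        priors = {c: 1 / len(classes) for c in classes}

    def density(c):
        pts = [p for p, label in train if label == c]
        return sum(math.exp(-sum((a - b) ** 2 for a, b in zip(p, x))
                            / (2 * sigma ** 2)) for p in pts) / len(pts)

    # Bayes's optimal decision rule: maximise prior-weighted class density
    return max(classes, key=lambda c: priors[c] * density(c))

# Hypothetical compound descriptors
train = [((0.0, 0.0), "inactive"), ((0.1, 0.2), "inactive"),
         ((1.0, 1.0), "active"),   ((0.9, 1.1), "active")]
label = pnn_classify(train, (0.95, 1.0))
```

The kernel width sigma plays the same role as the smoothing parameter of the PNN's pattern layer: too small and the surface overfits, too large and the classes blur together.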

One should note that test independence is important for accurate probability estimation, but not necessarily for classification. In classification, one has to determine only which disease is the more likely, not its exact probability. Studies have shown that using Bayes estimation gives quite good diagnostic accuracy even when the tests are not independent. Tests with correlations of up to 0.7 can still be used together to give an idea of the most likely disease. [Pg.415]

The Bayes rule simply states that a sample or object should be assigned to that group having the highest conditional probability and application of this rule to parametric classification schemes provides optimum discriminating capability. An explanation of the term conditional probability is perhaps in order here. [Pg.127]

The linear discriminant function is one of the most commonly used classification techniques, and it is available in all the most popular statistical software packages. It should be borne in mind, however, that it is only a simplification of the Bayes classifier and assumes that the variates are obtained from a multivariate normal distribution and that the groups have similar covariance matrices. If these conditions do not hold, then the linear discriminant function should be used with care and the results obtained subject to careful analysis. [Pg.138]
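Under the stated assumptions (normal classes, shared covariance), the Bayes classifier collapses to a linear score per class. For a single variate this is delta_k(x) = x·mu_k/s² − mu_k²/(2s²) + ln(prior_k). A one-dimensional sketch with invented means and a shared variance:

```python
import math

def lda_classify(x, means, priors, sigma2):
    """Linear discriminant rule for one variate with a shared variance sigma2.

    Assigns the class with the largest linear score
    delta_k(x) = x*mu_k/sigma2 - mu_k**2/(2*sigma2) + ln(prior_k).
    """
    def delta(k):
        mu = means[k]
        return x * mu / sigma2 - mu * mu / (2 * sigma2) + math.log(priors[k])
    return max(means, key=delta)

# Hypothetical groups centred at 0 and 4, equal priors, unit variance
means  = {"A": 0.0, "B": 4.0}
priors = {"A": 0.5, "B": 0.5}
g1 = lda_classify(1.0, means, priors, 1.0)
g2 = lda_classify(3.0, means, priors, 1.0)
```

Because the quadratic term in x cancels when the variance is shared, the decision boundary is linear; unequal covariances would break this simplification, which is exactly the caveat in the passage above.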

As an approximation to the Bayes rule, the linear discriminant function provides the basis for the most common of the statistical classification schemes. [Pg.142]

There is evidence, some of it known for many years (Coksey, Massie and Walsh, 1960), showing various limitations to the above-mentioned correlation between Q-wave MI and necrotic areas. Later, different papers published on correlations between ECG findings and various imaging techniques have been key to recognising the limitations of this classical classification. We will now comment on these limitations and propose a new classification based on the standard of CMR correlation (Bayes de Luna et al., 2006a). [Pg.23]

MRI has also demonstrated that there are transmural infarctions without Q waves and Q-wave infarctions that are not transmural (Moon et al., 2002) (see MI with or without Q waves or equivalents, p. 275). Furthermore, thanks to ECG–MRI correlations, we have published a new classification of Q-wave MI (Bayes de Luna et al., 2006a-c; Cino et al., 2006). Also, we have demonstrated that the presence of an RS pattern in V1 in coronary patients is... [Pg.305]

The classification rule in conjunction with the Bayes rule is used [126, 36], so that the posterior probability (Eq. 3.38), assuming P(π_k | x) = 1, gives the class membership of the observation x₀. This assumption may lead to a situation where the observation is wrongly classified into one of the fault cases used to develop the FDA discriminant when an unknown fault occurs. Chiang et al. [36] proposed several screening procedures to detect unknown faults. One of them involves an FDA-related T²-statistic before applying Eq. 3.59, as... [Pg.57]

Cannon, E.O., Amini, A., Bender, A., Sternberg, M.J.E., Muggleton, S.H., Glen, R.C. and Mitchell, J.B.O. (2007) Support vector inductive logic programming outperforms the naive Bayes classifier and inductive logic programming for the classification of bioactive chemical compounds. [Pg.1003]

Rennie, J. D. M. 2001. Improving multi-class text classification with Naive Bayes. MA thesis. Massachusetts Institute of Technology. [Pg.191]

As an approximation to the Bayes rule, the linear discriminant function provides the basis for the most common of the statistical classification schemes, but there has been much work devoted to the development of simpler linear classification rules. One such method, which has featured extensively in spectroscopic pattern recognition studies, is the perceptron algorithm. [Pg.148]
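The perceptron algorithm mentioned above learns a linear classification rule directly, updating the weights only when a training sample falls on the wrong side of the current hyperplane. A minimal sketch on an invented, linearly separable toy problem:

```python
def perceptron_train(data, epochs=20, lr=1.0):
    """Classic perceptron rule: adjust weights only on misclassified samples.

    data -- list of (x, y) pairs with x a feature tuple and y in {-1, +1}
    """
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * s <= 0:                     # wrong side of (or on) the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical separable data: the logical AND of two binary features
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = perceptron_train(data)
```

If the classes are linearly separable, the update rule is guaranteed to converge; unlike the linear discriminant function it makes no distributional assumptions, which is part of its appeal in pattern recognition studies.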





© 2024 chempedia.info