Big Chemical Encyclopedia


Decision tree classification algorithm

Attempts to differentiate transitional cell carcinoma (TCC) from benign urogenital diseases led to a training sample set for a decision tree classification algorithm, which in turn yielded a mass cluster pattern. In a blinded test set (n = 38), sensitivity was 96.3% and specificity was 87.0% [98]. In another study, a training set using 5 of 187 mass peaks (from 104 urine samples) was used to establish a pattern for tree analysis. The pattern correctly predicted 49/68 test samples, 25/45 TCC samples, and 24/33 noncancerous samples [99]. [Pg.391]
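The sensitivity and specificity figures above come from a confusion matrix in the usual way: sensitivity = TP/(TP + FN) and specificity = TN/(TN + FP). A minimal sketch, using invented counts (the papers' actual confusion matrices are not given here):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a blinded test set (not the counts from [98]):
sens, spec = sensitivity_specificity(tp=26, fn=1, tn=10, fp=1)
print(round(sens, 3), round(spec, 3))  # 0.963 0.909
```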

The algorithm that we employ to build a classification decision tree from (x, y) data records belongs to a group of techniques known as top-down induction of decision trees (TDIDT) (Sonquist et al., 1971; Fu, 1968; Hunt, 1962; Quinlan, 1986, 1987, 1993; Breiman et al., 1984). [Pg.114]
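The TDIDT idea can be sketched in a few dozen lines: at each node, pick the (feature, threshold) split that minimizes the weighted Gini impurity of the resulting subsets, then recurse until a subset is pure. This is an illustrative toy, not the specific algorithm of any of the cited works, and the data records are invented:

```python
from collections import Counter

def gini(ys):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(ys)
    return 1.0 - sum((c / n) ** 2 for c in Counter(ys).values())

def best_split(X, y):
    """Return (score, feature, threshold) with minimal weighted impurity."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue  # degenerate split: one side empty
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def grow(X, y):
    """Top-down induction: split recursively until a node is pure."""
    if len(set(y)) == 1:
        return y[0]                     # pure leaf: return the class label
    split = best_split(X, y)
    if split is None:
        return Counter(y).most_common(1)[0][0]  # fall back to majority class
    _, f, t = split
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i in range(len(X)) if i not in li]
    return {"feature": f, "threshold": t,
            "left": grow([X[i] for i in li], [y[i] for i in li]),
            "right": grow([X[i] for i in ri], [y[i] for i in ri])}

# Toy (x, y) records: one descriptor per sample, two classes.
tree = grow([[0.2], [0.4], [0.8], [0.9]], ["benign", "benign", "TCC", "TCC"])
print(tree)  # {'feature': 0, 'threshold': 0.4, 'left': 'benign', 'right': 'TCC'}
```

Production implementations (CART, C4.5) add pruning, handling of missing values, and other refinements omitted here.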

Key Words: Biological activity; chemical descriptors; chemical spaces; classification methods; compound databases; decision trees; diversity selection; partitioning algorithms; space transformation; statistics; statistical medians. [Pg.291]

As we might expect by this stage in the book, most of the usual classification/regression algorithms have been applied to the duration prediction problem. These include decision trees [372], neural networks for phone prediction [109], [157], genetic algorithms [319] and Bayesian belief networks [182]. Comparative studies of decision trees and neural networks found little difference in accuracy between the two approaches [473], [187], [72]. [Pg.261]

The decision tree classifier is chosen for its favorable tradeoff between performance and implementation simplicity. Classification using a DT is a supervised learning technique: the input of the learning algorithm is a set of labelled data, and the output is a tree model similar to the ones shown in Figure 5. Once the tree is defined, the classification of a new input starts at the root decision node of the tree, passes through intermediate decision nodes, and terminates at one of the leaf nodes, each of which represents a specific class. [Pg.217]
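That root-to-leaf walk is straightforward to express in code. A minimal sketch, with a hypothetical tree model written as a nested dict (the feature indices, thresholds, and class names are invented for illustration):

```python
# Hypothetical tree: internal nodes are dicts, leaves are class labels.
tree = {"feature": 0, "threshold": 0.5,
        "left": "benign",
        "right": {"feature": 1, "threshold": 2.0,
                  "left": "benign", "right": "TCC"}}

def classify(node, x):
    """Walk from the root to a leaf, branching on each node's test."""
    while isinstance(node, dict):
        branch = "left" if x[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node  # leaf reached: the predicted class

print(classify(tree, [0.8, 3.1]))  # TCC
print(classify(tree, [0.2, 9.9]))  # benign
```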

The decision tree method is widely used for classification and regression. A decision tree is a flow-chart-like tree structure in which each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a class or class distribution. To classify an unknown sample, the attribute values of the sample are tested against the decision tree, starting from the root and proceeding down to one of the leaves. To build a decision tree, a data mining algorithm recursively inspects the available data set to find decisions that optimally split the data into distinct subsets. An important property of this technique is that its operation is easily understood. [Pg.172]
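"Optimally split" is usually quantified with an impurity measure such as Gini impurity (used by CART) or entropy (used by ID3/C4.5): a candidate split is better when the weighted impurity of the subsets it produces is lower. A minimal sketch of the Gini measure, with invented labels:

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

# A perfect split drives the impurity of both subsets to zero:
parent = ["A", "A", "B", "B"]
left, right = ["A", "A"], ["B", "B"]
print(gini(parent))             # 0.5
print(gini(left), gini(right))  # 0.0 0.0
```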

Selection of the supervised classification technique, or the combination of techniques, suitable for accomplishing the classification task. Popular supervised classifiers are Multi-Layer Perceptron Artificial Neural Networks (MLP-ANN), Support Vector Machines (SVM), k-Nearest Neighbours (k-NN), combinations of genetic algorithms (GA) for feature selection with Linear Discriminant Analysis (LDA), decision trees, and Radial Basis Function (RBF) classifiers. [Pg.214]

