Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Random forest classifiers

In this study, a machine-learning model system was developed to classify cell line chemosensitivity based solely on proteomic profiling. Using reverse-phase protein lysate microarrays, protein expression levels were measured with 52 antibodies across a panel of 60 human cancer cell lines (NCI-60). The model system combined several well-known algorithms, including random forests, Relief, and nearest-neighbor methods, to construct protein expression-based chemosensitivity classifiers. [Pg.293]
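The setup above can be sketched in a few lines. This is not the study's actual pipeline; it is a minimal illustration using scikit-learn's random forest on synthetic stand-in data with the same shape as the NCI-60 panel (60 cell lines, 52 antibody readouts), and the sensitivity labels are hypothetical.

```python
# Illustrative sketch only: random forest classification of chemosensitivity
# from protein-expression profiles. Data are synthetic stand-ins for the
# NCI-60 panel (60 cell lines x 52 antibodies); labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 52))            # expression: 60 cell lines, 52 antibodies
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical sensitive (1) / resistant (0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())
```

With so few samples relative to features, cross-validation (rather than a single train/test split) is the usual way to estimate how such a classifier generalizes.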

Random Forests (RFs) combine many weak tree classifiers into an ensemble that acts as a strong learner. Each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error of a forest of tree classifiers depends on the strength of the individual trees and the correlation between them. Splitting each node on a randomly selected subset of features yields error rates that compare favorably with AdaBoost, so RFs are expected to be more robust to noise (Kotsiantis 2007). [Pg.446]
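The two sources of randomness described above (an independent random sample per tree, plus random feature subsets at each split) can be made concrete with a short sketch. This is a simplified illustration, not a full reimplementation: it uses scikit-learn decision trees, bootstrap resampling per tree, `max_features="sqrt"` for the per-split feature subsets, and a majority vote over binary labels.

```python
# Minimal sketch of a random forest: bootstrap-resample the training set for
# each tree, restrict each split to a random feature subset (max_features),
# and predict by majority vote. Simplified illustration, binary labels only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_forest_fit(X, y, n_trees=25, seed=0):
    rng = np.random.default_rng(seed)
    trees = []
    for i in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample (with replacement)
        tree = DecisionTreeClassifier(max_features="sqrt", random_state=i)
        tree.fit(X[idx], y[idx])                    # each tree sees a different sample
        trees.append(tree)
    return trees

def random_forest_predict(trees, X):
    votes = np.stack([t.predict(X) for t in trees])  # one row of votes per tree
    return (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote, labels in {0, 1}

# Usage on synthetic data with one informative feature:
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)
forest = random_forest_fit(X, y)
pred = random_forest_predict(forest, X)
print((pred == y).mean())
```

Averaging many decorrelated, individually overfit trees is what reduces the variance of the ensemble, which is the intuition behind the strength-versus-correlation trade-off mentioned above.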

Prediction of Drug-Induced PT Toxicity and Injury Mechanisms with an hiPSC-Based Model and Machine Learning Methods. The weak point of the HPTC- and hESC-based models described previously (Sections 23.3.2.1 and 23.3.3.1) was the data analysis procedure. To improve result classification, the raw data obtained with three batches of HPTC and the IL6/IL8-based model (Li et al., 2013) were reanalyzed by machine learning (Su et al., 2014). Random forest (RF), support vector machine (SVM), k-NN, and naive Bayes classifiers were tested. The best results were obtained with the RF classifier: the mean values (three batches of HPTC) ranged between 0.99 and 1.00 for sensitivity, specificity, balanced accuracy, and AUC-ROC (Su et al., 2014). Thus, excellent predictivity could be obtained by combining the IL6/IL8-based model with automated classification by machine learning. [Pg.378]
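The four performance measures reported above are straightforward to compute from a classifier's outputs. The following sketch uses toy predictions (not the HPTC data) to show how sensitivity, specificity, balanced accuracy, and AUC-ROC are derived from a confusion matrix and predicted probabilities.

```python
# Sketch of the four reported metrics, computed on toy predictions:
# sensitivity, specificity, balanced accuracy, and AUC-ROC.
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = [1, 1, 1, 0, 0, 0, 1, 0]                   # ground-truth labels
y_pred  = [1, 1, 0, 0, 0, 1, 1, 0]                   # hard class predictions
y_score = [0.9, 0.8, 0.4, 0.2, 0.1, 0.6, 0.7, 0.3]   # predicted class-1 probabilities

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                 # true-positive rate
specificity = tn / (tn + fp)                 # true-negative rate
balanced_accuracy = (sensitivity + specificity) / 2
auc = roc_auc_score(y_true, y_score)         # threshold-independent ranking quality
print(sensitivity, specificity, balanced_accuracy, auc)
```

Balanced accuracy is preferred over plain accuracy when the toxic and nontoxic classes are imbalanced, and AUC-ROC summarizes performance across all decision thresholds rather than at one fixed cutoff.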

Weka also has its own implementation of Random Forest. Its classifier output reports the correctly and incorrectly classified instances, along with a confusion matrix. [Pg.142]
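The same summary Weka prints can be reproduced outside of Weka; the sketch below (illustrative, using scikit-learn rather than Weka itself, on made-up labels) derives the correctly and incorrectly classified counts from the confusion matrix.

```python
# Weka-style classifier summary reproduced in Python (illustrative):
# correctly classified instances are the confusion-matrix diagonal,
# incorrectly classified instances are the off-diagonal entries.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # made-up ground truth
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]   # made-up predictions

cm = confusion_matrix(y_true, y_pred)
correct = cm.trace()                # sum of the diagonal
incorrect = cm.sum() - correct      # everything off the diagonal
print(correct, incorrect)
print(cm)
```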

On the Classify tab, click Choose, open the trees group, and select RandomForest from the list. [Pg.149]







© 2024 chempedia.info