
Machine nearest neighbors

In this study, a machine learning model system was developed to classify cell line chemosensitivity exclusively on the basis of proteomic profiling. Using reverse-phase protein lysate microarrays, protein expression levels were measured with 52 antibodies in a panel of 60 human cancer cell lines (NCI-60). The model system combined several well-known algorithms, including Random Forests, Relief, and nearest neighbor methods, to construct the protein expression-based chemosensitivity classifiers. [Pg.293]
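The passage above names the ingredients of that pipeline but not its implementation. The following is a minimal sketch, not the published model system: it uses scikit-learn, a synthetic 60 x 52 matrix as a stand-in for the NCI-60 lysate microarray data, and Random Forest feature importances in place of the Relief step, feeding the selected proteins into a nearest-neighbor classifier.

```python
# Hedged sketch of a protein-expression chemosensitivity classifier.
# Assumptions: scikit-learn; random data standing in for the 60 x 52 NCI-60
# antibody matrix; Random Forest importances used as a stand-in for Relief.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 52))       # 60 cell lines x 52 antibody readouts (synthetic)
y = rng.integers(0, 2, size=60)     # sensitive (1) vs. resistant (0) per cell line (synthetic)

pipeline = Pipeline([
    # Keep the 10 proteins ranked most informative by a Random Forest.
    ("select", SelectFromModel(
        RandomForestClassifier(n_estimators=500, random_state=0),
        threshold=-np.inf, max_features=10)),
    # Classify each cell line by its nearest neighbors in the reduced protein space.
    ("knn", KNeighborsClassifier(n_neighbors=3)),
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2f}")
```

Performing the feature selection inside the cross-validated pipeline, rather than on the full data set beforehand, keeps the reported accuracy from being inflated by information leakage.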

Histone deacetylases (HDACs) play a critical role in transcription regulation. Small molecule HDAC inhibitors have emerged as promising agents for the treatment of cancer and other cell proliferation diseases. We have employed variable selection k nearest neighbor (kNN) and support vector machine (SVM) approaches to generate QSAR models for 59 chemically diverse... [Pg.118]
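As an illustration of that kind of workflow (not the published models), the sketch below fits a variable-selection kNN model and an SVM model on a synthetic descriptor matrix. The 59-compound set size is taken from the excerpt; the 100 hypothetical descriptors, the simulated activity endpoint, and the univariate F-score selection standing in for the authors' variable-selection scheme are all assumptions.

```python
# Hedged QSAR sketch: synthetic descriptors and activities, scikit-learn models.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(59, 100))                              # hypothetical molecular descriptors
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=59)   # hypothetical activity values

models = {
    "variable-selection kNN": KNeighborsRegressor(n_neighbors=3),
    "SVM (RBF kernel)":       SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    # Scale descriptors, keep the 10 most correlated ones, then fit the regressor.
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_regression, k=10), model)
    q2 = cross_val_score(pipe, X, y, cv=5, scoring="r2")
    print(f"{name}: cross-validated R^2 = {q2.mean():.2f}")
```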

Supervised learning methods: multivariate analysis of variance and discriminant analysis (MVDA), k nearest neighbors (kNN), linear learning machine (LLM), BAYES classification, soft independent modeling of class analogy (SIMCA), UNEQ classification. Purpose: quantitative demarcation of a priori classes, relationships between class properties and variables. [Pg.7]

The most popular classification methods are Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Regularized Discriminant Analysis (RDA), Kth Nearest Neighbors (KNN), classification tree methods (such as CART), Soft-Independent Modeling of Class Analogy (SIMCA), potential function classifiers (PFC), Nearest Mean Classifier (NMC), Weighted Nearest Mean Classifier (WNMC), Support Vector Machine (SVM), and Classification And Influence Matrix Analysis (CAIMAN). [Pg.122]

Recursive partitioning, Bayesian classifier, logistic regression, k-nearest neighbor, support vector machine... [Pg.325]

There are a number of classification methods for analyzing data, including artificial neural networks (ANNs; see Beale and Jackson, 1990), k-nearest-neighbor (k-NN) methods, decision trees, support vector machines (SVMs), and Fisher's linear discriminant analysis (LDA). Among these methods, a decision tree is a flow-chart-like tree structure. An intermediate node denotes a test on a predictive attribute, a branch represents an outcome of the test, and a terminal node denotes a class distribution. [Pg.129]
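That node/branch/leaf structure is easy to see by printing a small fitted tree. The example below is illustrative only; it uses scikit-learn and its built-in Iris data, not anything from the cited work. Each internal node prints a test on one predictive attribute, each branch is an outcome of that test, and each leaf reports a class.

```python
# Illustrative decision tree: internal nodes test attributes, leaves give the class.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# export_text prints the flow-chart-like structure: one line per test or leaf.
print(export_text(tree, feature_names=list(iris.feature_names)))
```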

Wassermann, A. M., Geppert, H., and Bajorath, J. 2009. Ligand prediction for orphan targets using support vector machines and various target-ligand kernels is dominated by nearest neighbor effects. J. Chem. Inf. Model. 49:2155-2167. [Pg.203]

