Big Chemical Encyclopedia


Support vector machines, classification and

Wang HW, Lin YC, Pai TW, Chang HT (2011) Prediction of B-cell linear epitopes with a combination of support vector machine classification and amino acid propensity identification. J Biomed Biotechnol 2011(43) 28-30... [Pg.137]

Furey TS, Cristianini N, Duffy N, Bednarski DW, Schummer M, Haussler D. Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics 2000;16:906-14. [Pg.426]

In the last few decades, several methods for the training of various types of predicting functions [117] were developed using inferential statistics. The most important are linear models, artificial neural networks, support vector machines, classification and regression trees, and the method of k nearest neighbors. [Pg.10]

Bioinformatics, 16, 906 (2000). Support Vector Machine Classification and Validation of Cancer Tissue Samples Using Microarray Expression Data. [Pg.415]

Gist, http://microarray.cpmc.columbia.edu/gist/. Gist is a C implementation of support vector machine classification and kernel principal components analysis. The SVM part of Gist is available as an interactive Web server at http://svm.sdsc.edu. It is a very convenient server for users who want to experiment with small datasets (hundreds of patterns). Kernels available include linear, polynomial, and radial. [Pg.389]

Jia, L. and Sun, H. (2008) Support vector machines classification of hERG liabilities based on atom types. Bioorganic & Medicinal Chemistry, 16, 6252-6260. [Pg.412]

Xue Y, Li ZR, Yap CW, Sun LZ, Chen X, Chen YZ. Effect of molecular descriptor feature selection in support vector machine classification of pharmacokinetic and toxicological properties of chemical agents. J Chem Inf Comput Sci 2004;44:1630-8. [Pg.237]

The most popular classification methods are Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Regularized Discriminant Analysis (RDA), Kth Nearest Neighbors (KNN), classification tree methods (such as CART), Soft-Independent Modeling of Class Analogy (SIMCA), potential function classifiers (PFC), Nearest Mean Classifier (NMC), Weighted Nearest Mean Classifier (WNMC), Support Vector Machine (SVM), and Classification And Influence Matrix Analysis (CAIMAN). [Pg.122]

Ivanciuc, O. (2003e) Support vector machines classification of black and green teas based on their metal content. Internet Electron. J. Mol. Des., 2, 348-357. [Pg.1075]

Li, Q., Jorgensen, F.S., Oprea, T., Brunak, S., Taboureau, O. hERG classification model based on a combination of support vector machine method and GRIND descriptors. Mol. Pharmaceut. 2008, 5, 117-27. [Pg.215]

Mertens, B., Thompson, M., Fearn, T. (1994). Principal component outlier detection and SIMCA: a synthesis. Analyst, Vol. 119, pp. 2777-2784. ISSN 0003-2654
Miller, J.N., Miller, J.C. (2005). Statistics and Chemometrics for Analytical Chemistry. 4th edition. Prentice-Hall, Pearson. ISBN 0131291920. Harlow, UK
Naes, T., Isaksson, T., Fearn, T., Davies, T. (2004). A User-Friendly Guide to Multivariate Calibration and Classification. NIR Publications, ISBN 0952866625, Chichester, UK
Pardo, M., Sberveglieri, G. (2005). Classification of electronic nose data with support vector machines. Sensors and Actuators, Vol. 107, pp. 730-737. ISSN 0925-4005
Pretsch, E., Wilkins, C.L. (2006). Use and abuse of chemometrics. Trends in Analytical Chemistry, Vol. 25, p. 1045. ISSN 0165-9936... [Pg.38]

There are a number of classification methods for analyzing data, including artificial neural networks (ANNs; see Beale and Jackson, 1990), k-nearest-neighbor (k-NN) methods, decision trees, support vector machines (SVMs), and Fisher's linear discriminant analysis (LDA). Among these methods, a decision tree is a flow-chart-like tree structure. An intermediate node denotes a test on a predictive attribute, and a branch represents an outcome of the test. A terminal node denotes class distribution. [Pg.129]
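The node/branch/leaf structure described above can be sketched in a few lines. This is a minimal, hypothetical example: the attribute names (logP, mol_weight), thresholds, and class distributions are invented for illustration, not taken from the cited work.

```python
# Minimal decision-tree sketch: an intermediate node tests a predictive
# attribute, each branch carries one outcome of the test, and a terminal
# (leaf) node holds a class distribution. All names and thresholds below
# are hypothetical.

def classify(sample):
    # Intermediate node: test on a predictive attribute
    if sample["logP"] <= 2.5:
        # Terminal node: class distribution over the matching subset
        return {"active": 0.1, "inactive": 0.9}
    # Second intermediate node: test on another attribute
    if sample["mol_weight"] <= 400.0:
        return {"active": 0.8, "inactive": 0.2}
    return {"active": 0.4, "inactive": 0.6}

dist = classify({"logP": 3.1, "mol_weight": 350.0})
predicted = max(dist, key=dist.get)
```

Following the path logP > 2.5, mol_weight <= 400 reaches the leaf whose majority class is "active".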

Support Vector Machine (SVM) is a tool for machine learning based on optimization techniques. For each composition, after mapping the above-mentioned principal components into a higher-dimensional space, the classification hyperplane and discrimination functions were computed using the support vector machine classification technique. Thirteen samples, numbered 1 to 3, 5 to 9, 11 to 14, and 17, listed in Table 5, were taken as given parameters. The features of these 13 samples were then mapped into a high-dimensional space. SVM-based... [Pg.665]

After the principal components were mapped into a high-dimensional space, the classification hyperplane and discrimination functions were computed for each composition using the support vector machine classification technique. The recognition of compositions in granite was then easily conducted using the corresponding discrimination functions and... [Pg.666]
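The discrimination function computed by a kernel SVM has the general form f(x) = Σᵢ αᵢ yᵢ K(xᵢ, x) + b, where the xᵢ are the support vectors; sign(f) gives the predicted class. A pure-Python sketch of evaluating such a function is below. The support vectors, labels, dual coefficients, bias, and the gamma value are all made-up numbers for illustration, not values from the granite study.

```python
import math

# Sketch of an SVM discrimination function with an RBF kernel.
# f(x) = sum_i alpha_i * y_i * K(x_i, x) + b ; sign(f) is the class.
# All support vectors, coefficients, and the bias are hypothetical.

def rbf_kernel(u, v, gamma=0.5):
    # Radial basis function kernel: exp(-gamma * ||u - v||^2)
    sq = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq)

# Hypothetical support vectors (e.g., principal-component scores),
# their class labels, dual coefficients alpha, and bias b
support_vectors = [(0.0, 0.0), (1.0, 1.0)]
labels = [-1.0, 1.0]
alphas = [0.7, 0.7]
bias = 0.0

def discriminate(x):
    f = sum(a * y * rbf_kernel(sv, x)
            for a, y, sv in zip(alphas, labels, support_vectors)) + bias
    return 1 if f >= 0 else -1

cls = discriminate((0.9, 1.1))  # point near the (1, 1) support vector
```

A point near the (1, 1) support vector is assigned class +1; one near (0, 0) is assigned class -1.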

G. M. Fung and O. L. Mangasarian, Comput. Optim. AppL, 28, 185-202 (2004). A Feature Selection Newton Method for Support Vector Machine Classification. [Pg.394]

Effect of Molecular Descriptor Feature Selection in Support Vector Machine Classification of Pharmacokinetic and Toxicological Properties of Chemical Agents. [Pg.395]

M.-L. O'Connell, T. Howley, A.G. Ryder, M.N. Leger and M.G. Madden, Classification of a target analyte in solid mixtures using principal component analysis, support vector machines, and Raman spectroscopy, Proc. SPIE-Int. Soc. Opt. Eng., 5826, 340-350 (2005). [Pg.236]

Ciosek, P., Brudzewski, K., and Wroblewski, W. (2006a). Milk classification by means of an electronic tongue and Support Vector Machine neural network. Meas. Sci. Technol. 17(6), 1379-1384. [Pg.110]

Support Vector Machine (SVM) is a classification and regression method developed by Vapnik.30 In support vector regression (SVR), the input variables are first mapped into a higher-dimensional feature space by the use of a kernel function, and then a linear model is constructed in this feature space. The kernel functions often used in SVM include linear, polynomial, radial basis function (RBF), and sigmoid function. The generalization performance of SVM depends on the selection of several internal parameters of the algorithm (C and ε), the type of kernel, and the parameters of the kernel.31... [Pg.325]
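The ε parameter mentioned above controls SVR's ε-insensitive loss: prediction errors smaller than ε cost nothing, while larger errors are penalized linearly (and weighted by C in the overall SVR objective). A minimal sketch of this loss, with an assumed ε of 0.1:

```python
# Epsilon-insensitive loss used in support vector regression (SVR):
# errors inside the epsilon "tube" are ignored; errors outside it are
# penalized linearly beyond the epsilon margin. The epsilon value here
# is an arbitrary choice for illustration.

def eps_insensitive_loss(y_true, y_pred, epsilon=0.1):
    return max(0.0, abs(y_true - y_pred) - epsilon)

# Inside the tube: |error| = 0.05 < epsilon, so no penalty
small = eps_insensitive_loss(1.00, 1.05)
# Outside the tube: |error| = 0.30, penalized by 0.30 - 0.10 = 0.20
large = eps_insensitive_loss(1.00, 1.30)
```

This is why SVR solutions are sparse: only samples falling outside the tube become support vectors and contribute to the model.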



