
Support vector machines linear classifiers

Support Vector Machines (SVMs) generate either linear or nonlinear classifiers depending on the so-called kernel [149]. The kernel function implicitly transforms the data into an arbitrarily high-dimensional feature space (its pairwise evaluations on the data form the kernel matrix), so that a linear separation in that feature space corresponds to a nonlinear classifier in the original space the input data lives in. SVMs are a comparatively recent machine learning method that has attracted considerable attention because of its strong performance on a number of hard problems [150]. [Pg.75]
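The effect of the kernel choice can be illustrated with a short sketch that is not taken from the cited sources: a linear-kernel SVM and an RBF-kernel SVM are trained on a toy two-class data set that is not linearly separable in the input space. The data set, parameter values and use of scikit-learn are illustrative assumptions only.

```python
# Minimal sketch: same SVM algorithm, linear vs. nonlinear classifier via the kernel.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric classes: not separable by a straight line in the original 2-D space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear kernel: the decision boundary is a hyperplane in the input space itself.
linear_svm = SVC(kernel="linear").fit(X_train, y_train)

# RBF kernel: implicit mapping to a high-dimensional feature space, where a linear
# separation corresponds to a nonlinear boundary back in the input space.
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

print("linear kernel accuracy:", linear_svm.score(X_test, y_test))
print("RBF kernel accuracy:   ", rbf_svm.score(X_test, y_test))
```

On data of this kind the linear kernel typically performs near chance, while the RBF kernel separates the two classes almost perfectly, reflecting the implicit feature-space mapping described above.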

S.J. Dixon and R.G. Brereton, Comparison of performance of five common classifiers represented as boundary methods: Euclidean distance to centroids, linear discriminant analysis, quadratic discriminant analysis, learning vector quantization and support vector machines, as dependent on data structure, Chemom. Intell. Lab. Syst., 95, 1-17 (2009). [Pg.437]

Support vector machines In addition to more traditional classification methods like clustering or partitioning, other computational approaches have recently become popular in chemoinformatics, and support vector machines (SVMs) (Warmuth et al. 2003) are discussed here as an example. Typically, SVMs are applied as classifiers for binary property predictions, for example, to distinguish active from inactive compounds. First, a set of descriptors is selected and the training set molecules are represented as vectors of their calculated descriptor values. Linear combinations of the training set vectors are then used to construct a hyperplane in descriptor space that best separates active and inactive compounds, as illustrated in Figure 1.9. [Pg.16]
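As a rough sketch of this workflow, the hypothetical example below represents each molecule as a vector of descriptor values and fits a linear SVM to separate "active" from "inactive" compounds. The random descriptor values, toy activity labels and scikit-learn calls are assumptions made only for illustration, not part of the original text.

```python
# Minimal sketch: descriptor vectors -> separating hyperplane w.x + b = 0.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(60, 5))   # 60 molecules, 5 calculated descriptors each (placeholders)
labels = (descriptors[:, 0] + descriptors[:, 1] > 0).astype(int)  # toy active (1) / inactive (0) labels

svm = SVC(kernel="linear", C=1.0).fit(descriptors, labels)

w, b = svm.coef_[0], svm.intercept_[0]   # hyperplane parameters in descriptor space
print("hyperplane normal vector w:", w)
print("offset b:", b)

# Predicted activity class of a new compound from its descriptor vector
new_compound = rng.normal(size=(1, 5))
print("predicted class:", svm.predict(new_compound)[0])
```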

The most popular classification methods are Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Regularized Discriminant Analysis (RDA), Kth Nearest Neighbors (KNN), classification tree methods (such as CART), Soft-Independent Modeling of Class Analogy (SIMCA), potential function classifiers (PFC), Nearest Mean Classifier (NMC), Weighted Nearest Mean Classifier (WNMC), Support Vector Machine (SVM), and Classification And Influence Matrix Analysis (CAIMAN). [Pg.122]

Selection of the supervised classification technique or the combination of techniques suitable for accomplishing the classification task. Popular supervised classifiers are Multi-Layer Perceptron Artificial Neural Networks (MLP-ANN), Support Vector Machines (SVM), k-Nearest Neighbours (k-NN), combinations of genetic algorithms (GA) for feature selection with Linear Discriminant Analysis (LDA), Decision Trees and Radial Basis Function (RBF) classifiers. [Pg.214]

Such a diagnosis is best obtained using supervised (trained) algorithms. A number of such predictive algorithms exist, ranging from methods close in principle to PCA (e.g. soft independent modeling of class analogy, SIMCA) and linear discriminant analysis (LDA) to more complicated classifiers such as support vector machines (SVMs), which separate spectral classes by multidimensional separating hyperplanes [30]. At the LSpD, ANNs have been used [52-54] for supervised prediction of class memberships. [Pg.208]
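A minimal sketch of such a supervised spectral classifier is given below, assuming each spectrum is a vector of intensities with a known class label for the training set. Since SIMCA itself is not part of scikit-learn, PCA scores followed by LDA are used here as an illustrative stand-in, and the simulated spectra and labels are placeholders rather than real data.

```python
# Minimal sketch: compress spectra to PCA scores, then classify the scores with LDA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
spectra = rng.normal(size=(80, 200))        # 80 training spectra, 200 wavelength channels
classes = rng.integers(0, 2, size=80)       # known class memberships (toy labels)
spectra[classes == 1, :50] += 0.5           # give class 1 a weak band of higher intensity

model = make_pipeline(PCA(n_components=10),           # 10 principal-component scores per spectrum
                      LinearDiscriminantAnalysis())    # supervised classification of the scores
model.fit(spectra, classes)

unknown = rng.normal(size=(1, 200))          # spectrum of an unknown sample
print("predicted class:", model.predict(unknown)[0])
```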

