
Linear support vector machines


The optimum separation hyperplane (OSH) is the hyperplane with the maximum margin for a given finite set of learning patterns. The OSH computation with a linear support vector machine is presented in this section. [Pg.308]
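For concreteness, here is a minimal sketch of the OSH computation (an illustrative assumption using scikit-learn and toy data, not the excerpted book's own code): a linear SVM is fit to two separable classes, and the hyperplane parameters w and b are recovered together with the margin width 2/||w||.

```python
# Minimal sketch: maximum-margin hyperplane (OSH) with a linear SVM.
# Toy data and scikit-learn usage are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes of learning patterns.
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],   # class -1
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

# A large C approximates the hard-margin (maximum-margin) formulation.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]   # OSH: w . x + b = 0
margin = 2.0 / np.linalg.norm(w)         # geometric margin width
print("w =", w, "b =", b, "margin =", margin)
print("support vectors:", clf.support_vectors_)
```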


Support Vector Machines (SVMs) generate either linear or nonlinear classifiers depending on the so-called kernel [149]. The kernel is a function whose matrix of pairwise evaluations implicitly transforms the data into an arbitrarily high-dimensional feature space, where a linear classifier corresponds to a nonlinear classifier in the original space the input data live in. SVMs are a comparatively recent machine learning method that has received a lot of attention because of its strong performance on a number of hard problems [150]. [Pg.75]
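The sketch below (an illustrative assumption, not taken from the excerpt) shows this kernel matrix at work on XOR-style data: a linear kernel cannot separate the classes, while the classifier trained on a precomputed RBF Gram matrix separates them perfectly, even though the feature space is never built explicitly.

```python
# Illustrative sketch: linear vs. RBF kernel on XOR-style data.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([-1, -1, 1, 1])  # XOR labelling: not linearly separable

linear = SVC(kernel="linear").fit(X, y)
print("linear kernel accuracy:", linear.score(X, y))  # at chance level

# Precomputed kernel: the Gram matrix K[i, j] = k(x_i, x_j) is all the
# classifier ever sees; the high-dimensional feature space stays implicit.
K = rbf_kernel(X, X, gamma=2.0)
rbf = SVC(kernel="precomputed").fit(K, y)
print("RBF kernel accuracy:", rbf.score(K, y))        # 1.0
```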

S.J. Dixon and R.G. Brereton, Comparison of performance of five common classifiers represented as boundary methods: Euclidean distance to centroids, linear discriminant analysis, quadratic discriminant analysis, learning vector quantization and support vector machines, as dependent on data structure, Chemom. Intell. Lab. Syst., 95, 1-17 (2009). [Pg.437]

Support Vector Machine (SVM) is a classification and regression method developed by Vapnik [30]. In support vector regression (SVR), the input variables are first mapped into a higher-dimensional feature space by the use of a kernel function, and then a linear model is constructed in this feature space. The kernel functions often used in SVM include the linear, polynomial, radial basis function (RBF), and sigmoid functions. The generalization performance of SVM depends on the selection of several internal parameters of the algorithm (C and ε), the type of kernel, and the parameters of the kernel [31]. [Pg.325]
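A minimal SVR sketch follows (assumed scikit-learn usage on synthetic data, not the cited authors' code), making the internal parameters explicit: C trades model flatness against training error, ε sets the width of the insensitive tube, and γ parametrizes the RBF kernel.

```python
# Illustrative sketch: epsilon-SVR with an RBF kernel on a noisy 1-D signal.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

# C: penalty on errors; epsilon: tube within which errors are ignored;
# gamma: RBF kernel parameter.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=0.5).fit(X, y)
print("R^2 on training data:", svr.score(X, y))
print("number of support vectors:", len(svr.support_))
```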

Support vector machines In addition to more traditional classification methods like clustering or partitioning, other computational approaches have recently also become popular in chemoinformatics and support vector machines (SVMs) (Warmuth el al. 2003) are discussed here as an example. Typically, SVMs are applied as classifiers for binary property predictions, for example, to distinguish active from inactive compounds. Initially, a set of descriptors is selected and training set molecules are represented as vectors based on their calculated descriptor values. Then linear combinations of training set vectors are calculated to construct a hyperplane in descriptor space that best separates active and inactive compounds, as illustrated in Figure 1.9. [Pg.16]
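A minimal sketch of this workflow follows (descriptor values and activity labels are invented for illustration): training set molecules become descriptor vectors, and a linear SVM constructs the hyperplane separating actives from inactives.

```python
# Illustrative sketch: active/inactive classification from descriptor vectors.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Rows: training set molecules; columns: calculated descriptors
# (e.g., molecular weight, logP, H-bond donors -- hypothetical values).
X = np.array([[320.4, 2.1, 1], [287.3, 1.8, 2], [410.5, 3.4, 0],
              [150.2, 0.3, 4], [198.7, 0.9, 3], [175.1, 0.5, 5]])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = active, 0 = inactive

# Scaling the descriptors before fitting the linear SVM avoids letting
# large-range descriptors dominate the hyperplane.
model = make_pipeline(StandardScaler(), LinearSVC(C=1.0)).fit(X, y)
print("predicted class:", model.predict([[300.0, 2.0, 1]]))
```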

Key words: B-cell epitopes, Linear, Conformational, Support vector machines, Hidden Markov models, Immunoinformatics [Pg.129]

Wang HW, Lin YC, Pai TW, Chang HT (2011) Prediction of B-cell linear epitopes with a combination of support vector machine classification and amino acid propensity identification. J Biomed Biotechnol 2011:432830 [Pg.137]

Luan F, Zhang R, Zhao C, Yao X, Liu M, et al. Classification of the carcinogenicity of N-nitroso compounds based on support vector machines and linear discriminant analysis. Chem Res Toxicol 2005;18:198-203. [Pg.204]

Panaye A, Fan BT, Doucet JP, Yao XJ, Zhang RS, Liu MC, et al. Quantitative structure-toxicity relationships (QSTRs): a comparative study of various nonlinear methods (general regression neural network, radial basis function neural network and support vector machine) in predicting toxicity of nitro- and cyano-aromatics to Tetrahymena pyriformis. SAR QSAR Environ Res 2006;17:75-91. [Pg.235]

The K-PLS method can be reformulated to resemble support vector machines, but it can also be interpreted as a kernel and centering transformation of the descriptor data followed by a regular PLS method [99]. K-PLS was first introduced by Lindgren, Geladi, and Wold [143] in the context of working with linear kernels on data sets with more descriptor fields than data, in order to make the PLS modeling more efficient. Early applications of K-PLS were done mainly in this context [144-146]. The Parzen window width, σ, in the formula above is a free parameter that is determined by hyper-tuning on a validation set. For each dataset σ is then held constant, independent of the various bootstrap splits. [Pg.407]
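The K-PLS interpretation above can be sketched directly (an assumed construction with scikit-learn building blocks, not the cited paper's code): build an RBF kernel matrix with Parzen window width σ, center it in feature space, and run ordinary PLS on the centered kernel.

```python
# Illustrative sketch: K-PLS as kernel + centering followed by regular PLS.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.preprocessing import KernelCenterer
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 8))                  # 40 samples, 8 descriptors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

sigma = 2.0                                   # Parzen window width (hyper-tuned in practice)
K = rbf_kernel(X, X, gamma=1.0 / (2.0 * sigma**2))
Kc = KernelCenterer().fit_transform(K)        # centering in feature space

pls = PLSRegression(n_components=4).fit(Kc, y)
print("R^2 on training data:", pls.score(Kc, y))
```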


See other pages where Linear support vector machines is mentioned: [Pg.64]    [Pg.47]    [Pg.66]    [Pg.1317]    [Pg.306]    [Pg.148]    [Pg.498]    [Pg.237]    [Pg.160]    [Pg.331]    [Pg.723]    [Pg.43]    [Pg.44]    [Pg.297]    [Pg.337]    [Pg.346]    [Pg.352]    [Pg.136]    [Pg.205]    [Pg.83]    [Pg.213]    [Pg.119]    [Pg.28]    [Pg.182]    [Pg.225]    [Pg.232]    [Pg.307]    [Pg.327]    [Pg.664]