
Nonlinear support vector machines

Equation [50] depends only on the support vectors, their Lagrange multipliers, and the optimum value of b, showing that one does not need to compute w explicitly in order to predict the classification of new patterns. [Pg.323]
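Eq. [50] itself is not reproduced on this page. The sketch below therefore assumes the standard kernel form of the SVM decision function, f(x) = sign(sum_i alpha_i y_i K(x_i, x) + b); the function names (rbf_kernel, decision_function) and the kernel choice are illustrative only. It shows that a new pattern is classified directly from the support vectors, their multipliers, and b, without ever forming w.

```python
# Hedged sketch: classify a new pattern from the support vectors, their
# Lagrange multipliers (alphas) and the offset b -- the weight vector w is
# never computed explicitly.  The RBF kernel and gamma value are arbitrary.
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel between two pattern vectors."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def decision_function(x_new, support_vectors, alphas, labels, b, kernel=rbf_kernel):
    """f(x) = sum_i alpha_i * y_i * K(x_i, x) + b; the predicted class is sign(f)."""
    f = sum(a * y * kernel(sv, x_new)
            for a, y, sv in zip(alphas, labels, support_vectors)) + b
    return np.sign(f)
```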

In previous sections, we introduced the linear SVM classification algorithm, which uses the training patterns to generate an optimum separation hyperplane. Such classifiers are not adequate when complex relationships exist between the input parameters and the class of a pattern. To discriminate classes of patterns that are not linearly separable, the SVM model can be extended with nonlinear kernel functions, providing efficient classifiers for hard-to-separate classes of patterns (a small illustration follows below). [Pg.323]
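As a hedged illustration (not taken from the chapter; the data set and parameter values are arbitrary), the snippet below fits a linear and an RBF-kernel SVM to two concentric rings of points, a classic linearly nonseparable problem, using scikit-learn:

```python
# Hedged example: a linear SVM cannot separate concentric rings, while an
# RBF-kernel SVM can.  Data set, C and gamma are arbitrary illustrative choices.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, noise=0.05, factor=0.5, random_state=0)

linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)
rbf_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

print("linear SVM training accuracy: %.2f" % linear_svm.score(X, y))
print("RBF SVM training accuracy:    %.2f" % rbf_svm.score(X, y))
```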


Tang, L.-J., Zhou, Y.-P., Jiang, J.-H., Zou, H.-Y., Wu, H.-L., Shen, G.-L. and Yu, R.-Q. (2007) Radial basis function network-based transform for a nonlinear support vector machine as optimized by a particle swarm optimization algorithm with application to QSAR studies. J. Chem. Inf. Model., 47, 1438-1445. [Pg.1180]

The optimization problem from Eq. [20] represents the minimization of a quadratic function under linear constraints (quadratic programming), a problem studied extensively in optimization theory. Details on quadratic programming can be found in almost any textbook on numerical optimization, and efficient implementations exist in many software libraries. However, Eq. [20] is not the optimization problem that is actually solved to determine the OSH. Using a Lagrange function, Eq. [20] is transformed into its dual formulation. All SVM models (linear and nonlinear, classification and regression) are solved in the dual formulation, which has important advantages over the primal formulation (Eq. [20]). The dual problem can be easily generalized to linearly nonseparable learning data and to nonlinear support vector machines (a quadratic-programming sketch follows below). [Pg.311]
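Eq. [20] and its dual are not reproduced here. As a hedged sketch, the code below assumes the standard hard-margin dual, maximize sum_i alpha_i - (1/2) sum_i sum_j alpha_i alpha_j y_i y_j <x_i, x_j> subject to alpha_i >= 0 and sum_i alpha_i y_i = 0, and passes it to the generic quadratic-programming solver in the cvxopt library (the function name fit_hard_margin_svm is illustrative, and cvxopt is only one of many possible QP solvers):

```python
# Hedged sketch (not the chapter's own code): solve the hard-margin SVM dual
# as a quadratic program.  cvxopt minimizes (1/2) a^T P a + q^T a subject to
# G a <= h and A a = b, so the dual maximization is recast with
# P_ij = y_i y_j <x_i, x_j> and q = -1.
import numpy as np
from cvxopt import matrix, solvers

def fit_hard_margin_svm(X, y):
    """X: (n, d) array of training patterns; y: (n,) array of labels in {-1, +1}."""
    n = X.shape[0]
    K = X @ X.T                                         # Gram matrix of dot products
    P = matrix((np.outer(y, y) * K).astype(float))
    q = matrix(-np.ones(n))
    G = matrix(-np.eye(n))                              # encodes alpha_i >= 0
    h = matrix(np.zeros(n))
    A = matrix(y.astype(float).reshape(1, -1))          # encodes sum_i alpha_i y_i = 0
    b = matrix(np.zeros(1))
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])
    sv = alpha > 1e-6                                   # nonzero multipliers: support vectors
    w = ((alpha[sv] * y[sv])[:, None] * X[sv]).sum(axis=0)
    b_opt = np.mean(y[sv] - X[sv] @ w)                  # offset averaged over support vectors
    return alpha, w, b_opt
```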

Support Vector Machines (SVMs) generate either linear or nonlinear classifiers depending on the so-called kernel [149]. The kernel is a function that implicitly maps the data into an arbitrarily high-dimensional feature space, so that a linear classifier in that space corresponds to a nonlinear classifier in the original input space of the data. SVMs are a relatively recent machine learning method that has received much attention because of its strong performance on a number of hard problems [150]. [Pg.75]
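As a small hedged sketch (function name and gamma value are illustrative), the Gram matrix of pairwise RBF kernel values is all that the dual formulation needs: substituting it for the linear Gram matrix X X^T in a dual solver, such as the one sketched above, turns the linear machine into a nonlinear one without ever constructing the high-dimensional feature space explicitly.

```python
# Hedged sketch: the Gram (kernel) matrix K[i, j] = K(x_i, x_j) replaces the
# dot products <x_i, x_j> in the dual problem; here an RBF kernel is used.
import numpy as np

def rbf_gram_matrix(X, gamma=0.5):
    """K[i, j] = exp(-gamma * ||x_i - x_j||^2) for all pairs of training patterns."""
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * sq_dists)
```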

Panaye, A., Fan, B.T., Doucet, J.P., Yao, X.J., Zhang, R.S., et al. Quantitative structure-toxicity relationships (QSTRs): a comparative study of various nonlinear methods. General regression neural network, radial basis function neural network and support vector machine in predicting toxicity of nitro- and cyano-aromatics to Tetrahymena pyriformis. SAR QSAR Environ. Res. 2006; 17: 75-91. [Pg.198]

