
Nonlinear classification, support vector

Support Vector Machines (SVMs) generate either linear or nonlinear classifiers depending on the so-called kernel [149]. The kernel corresponds to an implicit transformation of the data into an arbitrarily high-dimensional feature space, where a linear classifier in that feature space corresponds to a nonlinear classifier in the original space the input data lives in. SVMs are a relatively recent Machine Learning method that has received a lot of attention because of its strong performance on a number of hard problems [150]. [Pg.75]
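As a hedged illustration of this kernel dependence, the scikit-learn sketch below fits the same classifier once with a linear and once with an RBF kernel on data that is not linearly separable; the dataset and parameter values are illustrative assumptions, not taken from the cited text.

```python
# Sketch only: the same SVC estimator yields a linear or a nonlinear decision
# boundary depending solely on the kernel choice. Data and parameters are
# illustrative assumptions.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.4, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0).fit(X_tr, y_tr)
    print(kernel, "test accuracy:", clf.score(X_te, y_te))
# The linear kernel fails on the concentric-circles data, while the RBF kernel
# separates the classes almost perfectly, illustrating the role of the kernel.
```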

In this chapter, we give a comprehensive introduction to the support vector machine (SVM) in an accessible and self-contained way. The chapter is organized as follows. First, we start from the central concept of the margin, from which support vector methods are developed. Second, the SVM for classification problems is introduced, and the derivation for both the linear and the nonlinear case is described in detail. Third, we discuss support vector regression, i.e. the SVM for regression problems. Finally, a variant of the SVM, the ν-SVM, is briefly introduced. [Pg.24]
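For the ν-SVM variant mentioned at the end of this outline, a minimal sketch (assuming scikit-learn's NuSVC implementation, which is not part of the cited chapter) shows how the parameter ν replaces C and bounds the fraction of margin errors from above and the fraction of support vectors from below.

```python
# Sketch only: NuSVC is scikit-learn's nu-SVM classifier; nu is an upper bound
# on the fraction of margin errors and a lower bound on the fraction of
# support vectors. Data and the nu value are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = NuSVC(nu=0.25, kernel="rbf").fit(X, y)
print("fraction of support vectors:", clf.n_support_.sum() / len(X))
```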

In practice, a nonlinear model is often required for adequate data fitting. In the same manner as in nonlinear support vector classification, a nonlinear mapping can be used to map the data into a high-dimensional feature space where linear regression can be performed (see Fig. 2.10). As noted in the previous subsection, the complete SVM can be described in terms of dot products between the data. The nonlinear SVR solution, using an ε-insensitive loss function (2.54), is given by solving the problem ... [Pg.50]
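A minimal sketch of nonlinear support vector regression with an ε-insensitive loss, using scikit-learn's SVR; the data and parameter values are assumptions, not taken from the cited text.

```python
# Sketch: RBF-kernel SVR fits a nonlinear function; predictions inside the
# epsilon tube around the targets incur no loss. Values are illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 6, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)

reg = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("number of support vectors:", len(reg.support_))
print("prediction at x=3:", reg.predict([[3.0]]))
```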

In the example given here, the sample spectra were sufficiently different to allow classification by a relatively simple linear model (LDA). However, this type of modelling may not be successful when the sample spectra are more similar to each other. In addition, LDA does not perform well if the distribution of the data is non-normal. PLS-DA works slightly better in this situation, but kernel methods (e.g. Support Vector Machines) are generally necessary if the dataset is substantially nonlinear. [Pg.377]
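To make the comparison concrete, a hedged sketch (scikit-learn; the synthetic data are an assumption standing in for real spectra) contrasts LDA with an RBF-kernel SVM on a dataset whose classes are not linearly separable.

```python
# Sketch: linear discriminant analysis vs. a kernel SVM on nonlinearly
# separable data; the dataset is synthetic and only stands in for spectra.
from sklearn.datasets import make_moons
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("RBF SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(model, X, y, cv=5)
    print(name, "mean CV accuracy:", round(scores.mean(), 3))
```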

The optimization problem from Eq. [20] represents the minimization of a quadratic function under linear constraints (quadratic programming), a problem studied extensively in optimization theory. Details on quadratic programming can be found in almost any textbook on numerical optimization, and efficient implementations exist in many software libraries. However, Eq. [20] does not represent the actual optimization problem that is solved to determine the OSH. By introducing a Lagrange function, Eq. [20] is transformed into its dual formulation. All SVM models (linear and nonlinear, classification and regression) are solved in the dual formulation, which has important advantages over the primal formulation (Eq. [20]). The dual problem can be easily generalized to linearly nonseparable learning data and to nonlinear support vector machines. [Pg.311]
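As a rough illustration of this quadratic-programming view (not the text's Eq. [20] itself), the sketch below solves the standard hard-margin SVM dual with a general-purpose SciPy solver; the toy data and tolerance are assumptions.

```python
# Sketch: solve the standard hard-margin SVM dual
#   max  sum(alpha) - 0.5 * sum_ij alpha_i alpha_j y_i y_j <x_i, x_j>
#   s.t. sum_i alpha_i y_i = 0,  alpha_i >= 0
# with a general-purpose solver; toy data, notation unrelated to Eq. [20].
import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [1.5, 2.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
Q = np.outer(y, y) * (X @ X.T)             # label-weighted Gram matrix

neg_dual = lambda a: 0.5 * a @ Q @ a - a.sum()
res = minimize(neg_dual, np.zeros(len(y)), bounds=[(0, None)] * len(y),
               constraints={"type": "eq", "fun": lambda a: a @ y})
alpha = res.x
w = (alpha * y) @ X                          # primal weight vector
sv = alpha > 1e-6                            # support vectors have alpha > 0
b = np.mean(y[sv] - X[sv] @ w)
print("support-vector indices:", np.where(sv)[0], "w:", w, "b:", b)
```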

Patterns that are not support vectors (i.e. those whose Lagrange multipliers are zero) do not influence the classification of new patterns. The use of Eq. [33] has an important advantage over Eq. [32] when classifying a new pattern x: it is only necessary to compute the dot product between x and every support vector. This results in a significant saving of computational time whenever the number of support vectors is small compared with the total number of patterns in the training set. Also, Eq. [33] can be easily adapted for nonlinear classifiers that use kernels, as we will show later. [Pg.314]
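A hedged sketch of this point, using scikit-learn's fitted attributes (not the text's Eq. [33] notation): summing over the support vectors alone reproduces the decision function, which is why prediction is cheap when support vectors are few.

```python
# Sketch: an SVM prediction only needs dot products between the new pattern
# and the support vectors (the dual coefficients fold in the labels).
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=150, n_features=4, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

x_new = X[:1]
# Sum over support vectors only: dual_coef_ @ <support_vectors_, x_new> + b
manual = (clf.dual_coef_ @ (clf.support_vectors_ @ x_new.T)
          + clf.intercept_).item()
print(manual, clf.decision_function(x_new)[0])  # the two values agree
```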

The separation surface may be nonlinear in many classification problems, but support vector machines can be extended to handle nonlinear separation surfaces by using feature functions φ(x). The SVM extension to nonlinear datasets is based on mapping the input variables into a feature space of higher dimension (a Hilbert space of finite or infinite dimension) and then performing a linear classification in that higher-dimensional space. For example, consider the set of nonlinearly separable patterns in Figure 28, left. It is... [Pg.323]
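A minimal sketch of this idea, with an explicit quadratic feature map φ(x) chosen as an illustrative assumption: a linear SVM trained on φ(x) behaves like a degree-2 polynomial-kernel SVM trained on the raw inputs.

```python
# Sketch: mapping (x1, x2) -> (x1^2, x2^2, sqrt(2)*x1*x2) makes concentric
# circles linearly separable; a linear SVM in that feature space matches a
# degree-2 polynomial kernel (gamma=1, coef0=0) in the original space.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.4, noise=0.05, random_state=0)

def phi(X):
    return np.c_[X[:, 0] ** 2, X[:, 1] ** 2, np.sqrt(2) * X[:, 0] * X[:, 1]]

lin_in_feature_space = SVC(kernel="linear").fit(phi(X), y)
poly_kernel = SVC(kernel="poly", degree=2, gamma=1.0, coef0=0.0).fit(X, y)
print(lin_in_feature_space.score(phi(X), y), poly_kernel.score(X, y))
```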



