Mapping Patterns to a Feature Space

After mapping all patterns from the learning set into the feature space, we obtain a set of points in the feature space ℝ^f, where f is the dimension of the feature space. [Pg.324]

An important property of the feature space is that the learning set φ(T) might be linearly separable in the feature space if appropriate feature functions are used, even when the learning set is not linearly separable in the original input space. [Pg.324]
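
Stated formally (a reconstruction of the standard definition; the symbols m for the input dimension and f for the feature-space dimension are our labels, not quotations from the source):

    \phi : \mathbb{R}^m \to \mathbb{R}^f, \qquad x_i \mapsto \phi(x_i)

    \phi(T) \text{ is linearly separable} \iff
    \exists\, w \in \mathbb{R}^f,\ b \in \mathbb{R} :\
    y_i \bigl( \langle w, \phi(x_i) \rangle + b \bigr) > 0
    \ \text{ for all } (x_i, y_i) \in T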

We consider a soft margin SVM in which the patterns x are substituted with the feature vectors φ(x), which yields an optimization problem similar to that of Eq. [39]. Using this nonlinear SVM, the class of a pattern x_k is determined with Eq. [53]. [Pg.324]
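
A minimal sketch of this classification rule follows; the feature function phi is a hypothetical choice, and the multipliers alpha, labels y, support vectors, and bias b are placeholder values standing in for the solution of the soft margin optimization (Eq. [53] itself is not reproduced in this excerpt):

    import numpy as np

    def phi(x):
        # Hypothetical feature function: map (x1, x2) to (x1, x2, x1*x2).
        return np.array([x[0], x[1], x[0] * x[1]])

    def classify(x_k, support_vectors, y, alpha, b):
        # Nonlinear SVM decision rule: sign of the weighted sum of
        # feature-space dot products between x_k and each support vector.
        s = sum(a * y_i * np.dot(phi(x_i), phi(x_k))
                for x_i, y_i, a in zip(support_vectors, y, alpha))
        return np.sign(s + b)

    # Placeholder values standing in for a solved optimization problem.
    support_vectors = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
    y = [+1, -1]
    alpha = [0.5, 0.5]
    b = 0.0
    print(classify(np.array([2.0, 2.0]), support_vectors, y, alpha, b))  # 1.0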

The nonlinear classifier defined by Eq. [53] shows that to predict the class of a pattern x_k, it is necessary to compute the dot product φ(x_i)·φ(x_k) for all support vectors x_i. This property of the nonlinear classifier is very important, because it shows that we do not need to know the actual expression of the feature function φ. Moreover, a special class of functions, called kernels, allows the computation of the dot product φ(x_i)·φ(x_k) in the original space defined by the training patterns. [Pg.324]
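
This identity is easy to check numerically. The sketch below uses the degree-2 polynomial kernel K(u, v) = (u·v)^2, a standard example whose feature map for two-dimensional inputs is known in closed form (this particular kernel is our choice for illustration, not necessarily the one discussed in the source):

    import numpy as np

    def phi(u):
        # Explicit feature map of the degree-2 polynomial kernel for 2-D input:
        # phi(u) = (u1^2, sqrt(2)*u1*u2, u2^2).
        return np.array([u[0] ** 2, np.sqrt(2) * u[0] * u[1], u[1] ** 2])

    def kernel(u, v):
        # The same quantity computed directly in the input space.
        return np.dot(u, v) ** 2

    u = np.array([0.5, -1.2])
    v = np.array([2.0, 0.3])
    print(np.dot(phi(u), phi(v)))  # 0.4096, dot product in the feature space
    print(kernel(u, v))            # 0.4096, kernel in the input space

Because the kernel never evaluates φ explicitly, the feature space may even be infinite-dimensional, as is the case for the Gaussian (RBF) kernel.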

On the other hand, one can imagine a higher dimensional feature space in which these classes become linearly separable. The features are combinations of the input data, and for this example, a new feature computed from x1 and x2 is added as a third dimension (Table 4, column 4). After this transformation, the dataset is represented in a three-dimensional feature space. [Pg.325]
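
The exact feature listed in Table 4 is not recoverable from this excerpt; the sketch below illustrates the same construction with an assumed toy dataset and x1^2 + x2^2 as the appended feature, so that a class inside a circle and a class outside it become separable by a plane in three dimensions:

    import numpy as np

    # Two classes that no line in the (x1, x2) plane can separate:
    # class +1 near the origin, class -1 farther out, surrounding it.
    inner = np.array([[0.2, 0.1], [-0.3, 0.2], [0.1, -0.4]])   # class +1
    outer = np.array([[1.5, 0.0], [-1.2, 1.0], [0.8, -1.4]])   # class -1

    def to_feature_space(points):
        # Append x1^2 + x2^2 as a third feature (assumed choice, for illustration).
        r2 = (points ** 2).sum(axis=1, keepdims=True)
        return np.hstack([points, r2])

    # In 3-D, the plane x3 = 1 separates the classes: every inner point has
    # x3 < 1 and every outer point has x3 > 1.
    print(to_feature_space(inner)[:, 2])   # [0.05 0.13 0.17]
    print(to_feature_space(outer)[:, 2])   # [2.25 2.44 2.6 ]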

