
Optimum separation hyperplane

Figure 6.10 An example of linearly separable classes where different frontiers (hyperplanes or borders) are possible (a); the concept of the optimum separation hyperplane, margin, and support vectors (b); and the concept of slack variables (c).
The optimum separation hyperplane (OSH) is the hyperplane with the maximum margin for a given finite set of learning patterns. The OSH computation with a linear support vector machine is presented in this section. [Pg.308]
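As an illustration (not part of the source text), the OSH of a small linearly separable dataset can be computed with a linear SVM. The sketch below uses scikit-learn; the toy data and the very large value of C standing in for a hard margin are assumptions of this example.

```python
# Minimal sketch (not from the source): computing the OSH for a
# linearly separable toy dataset with scikit-learn's linear SVM.
import numpy as np
from sklearn.svm import SVC

# Hypothetical, linearly separable patterns in the plane.
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],   # class -1
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

# A very large capacity C approximates the hard-margin formulation
# appropriate for linearly separable classes.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]                   # normal vector defining the OSH
b = clf.intercept_[0]              # bias term
margin = 2.0 / np.linalg.norm(w)   # width of the maximum margin

print("w =", w, " b =", b)
print("margin width =", margin)
print("support vectors:\n", clf.support_vectors_)
```

For a linear kernel, `clf.coef_` and `clf.intercept_` recover (w, b) directly, and the margin width follows as 2/||w||.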

The problem of finding the optimum separation hyperplane reduces to identifying the linear classifier (w, b) that satisfies... [Pg.311]
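The excerpt breaks off before stating the condition. In the standard hard-margin formulation (reconstructed here, not quoted from the source), (w, b) is the solution of

```latex
\min_{\mathbf{w},\,b}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}
\quad \text{subject to} \quad
y_i\,(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1, \qquad i = 1,\dots,m
```

where the constraints require every training pattern to lie on the correct side of the hyperplane, outside the margin.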

In the previous section, we presented the SVM algorithm for training a linear classifier. The result of this training is an optimum separation hyperplane defined by (w, b) (Eqs. [29] and [31]). After training, the classifier is ready to predict the class membership for new patterns, different from those used in training. The class of a pattern is determined with... [Pg.313]
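Eqs. [29] and [31] are not reproduced in this excerpt; the standard decision rule for a trained linear SVM, given here as a reconstruction, is

```latex
\mathrm{class}(\mathbf{x}) = \operatorname{sign}\bigl(\mathbf{w}\cdot\mathbf{x} + b\bigr)
```

so a new pattern x is assigned to class +1 or -1 according to the side of the OSH on which it falls.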

In previous sections, we introduced the linear SVM classification algorithm, which uses the training patterns to generate an optimum separation hyperplane. Such classifiers are not adequate when complex relationships exist between the input parameters and the class of a pattern. To discriminate linearly nonseparable classes of patterns, the SVM model can be fitted with nonlinear kernel functions, providing efficient classifiers for hard-to-separate classes of patterns. [Pg.323]
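The usual route to such nonlinear classifiers, stated here as a standard result rather than a quotation from the source, is the kernel trick: every dot product x_i · x_j in the linear formulation is replaced by a kernel function K(x_i, x_j), which yields the nonlinear decision function

```latex
f(\mathbf{x}) = \operatorname{sign}\!\Bigl(\sum_{i=1}^{m} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b\Bigr)
```

with the coefficients α_i obtained from the dual optimization problem.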

The vector w that determines the optimum separation hyperplane is... [Pg.334]
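The expression is truncated in this excerpt; in the standard dual formulation it reads

```latex
\mathbf{w} = \sum_{i \in \mathrm{SV}} \alpha_i\, y_i\, \mathbf{x}_i
```

where the sum runs over the support vectors, i.e., the training patterns with nonzero Lagrange multipliers α_i.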

To illustrate the influence of the capacity parameter C on the separation hyperplane, consider the dataset from Table 5 with a polynomial kernel of degree 2 in Figures 40 (a, C = 100; b, C = 10) and 41 (a, C = 1; b, C = 0.1). This example shows that a bad choice of the capacity C can ruin the performance of an otherwise very good classifier. Empirical observations suggest that C = 100 is a good value for a wide range of SVM classification problems, but the optimum value should be determined for each particular case. [Pg.336]
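The source's Table 5 dataset and Figures 40 and 41 are not reproduced in this excerpt; the sketch below runs the same kind of comparison with scikit-learn, using synthetic data as a stand-in (an assumption of this example, not the source's code).

```python
# Hypothetical sketch: the source's Table 5 dataset and Figures 40-41
# are not reproduced here, so synthetic two-class data stands in.
from sklearn.svm import SVC
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=100, noise=0.2, random_state=0)

# The capacity values compared in Figures 40 and 41.
for C in (100, 10, 1, 0.1):
    clf = SVC(kernel="poly", degree=2, C=C).fit(X, y)
    print(f"C = {C:>5}: training accuracy = {clf.score(X, y):.3f}, "
          f"support vectors = {clf.n_support_.sum()}")
```

Smaller C values tolerate more margin violations, which is why the decision frontier and the number of support vectors change so markedly across this range.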

Fig. 3.5 (a) A schematic of possible hyperplanes for linearly separable data. (b) The optimum hyperplane located by the SVM and the corresponding support vectors. [Pg.138]

In this section, we compared the prediction capabilities of five kernels, namely linear, polynomial, Gaussian radial basis function, neural, and anova. Several guidelines that might help the modeler obtain a predictive SVM model can be extracted from these results: (1) it is important to compare the predictions of a large number of kernels and combinations of parameters; (2) the linear kernel should be used as a reference to compare the results from nonlinear kernels; (3) some datasets can be separated with a linear hyperplane; in such instances, the use of a nonlinear kernel should be avoided; and (4) when the relationships between the input data and class attribution are nonlinear, RBF kernels do not necessarily give the optimum SVM classifier. [Pg.362]
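A minimal sketch of guidelines (1) and (2), assuming scikit-learn: its built-in kernels are linear, poly, rbf, and sigmoid, so the sigmoid kernel stands in for the neural kernel, the anova kernel is omitted, and the dataset is synthetic.

```python
# Hedged sketch of guidelines (1) and (2): compare kernels by
# cross-validation. scikit-learn has no built-in "neural" or "anova"
# kernel; "sigmoid" stands in for the neural kernel, anova is omitted,
# and the dataset is synthetic.
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=8, random_state=0)

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    scores = cross_val_score(SVC(kernel=kernel, C=100), X, y, cv=5)
    print(f"{kernel:>7}: CV accuracy = {scores.mean():.3f} "
          f"(std {scores.std():.3f})")
```

The linear kernel's score then serves as the baseline against which the nonlinear kernels are judged, per guideline (2).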


See other pages where Optimum separation hyperplane is mentioned: [Pg.306], [Pg.311], [Pg.314], [Pg.317], [Pg.318], [Pg.394], [Pg.491], [Pg.498]
See also in source #XX: [Pg.308], [Pg.311], [Pg.318], [Pg.334]







Hyperplanes

Separating hyperplane
