
Linear SVM

Linear SVM Classifiers. When the data set is linearly separable, the decision function f(x) = y(x) used to separate the classes is given by ... [Pg.315]

This transformation into a higher-dimensional space is realized with a kernel function; the best kernel depends on the data at hand. In the SVM literature, the kernel functions typically applied for classification are linear and polynomial kernels and radial basis functions. Depending on the kernel chosen, additional parameters must be optimized, for instance, the degree of the polynomial kernel (33,34). Once the data are transformed into the higher-dimensional space by the kernel function, a linear SVM can be applied. For nonseparable cases, as described in the previous section, the main parameter to optimize in the SVM algorithm is the regularization parameter C. [Pg.316]
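As a concrete illustration of these choices, the hedged sketch below grid-searches over the kernels mentioned above (linear, polynomial, RBF) together with the regularization parameter C. The use of scikit-learn and the synthetic data are assumptions for illustration; the cited studies do not specify an implementation.

```python
# Hedged sketch: comparing kernels and tuning C (illustrative, not from the cited work).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10, 100]},
    {"kernel": ["poly"], "degree": [2, 3], "C": [0.1, 1, 10]},        # polynomial degree
    {"kernel": ["rbf"], "gamma": [0.1, 0.5, 1.0], "C": [0.1, 1, 10]},  # RBF width
]
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```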

SVM and GRIND descriptors; N = 495 compounds from the literature, thresholds from 1 pM; N = 66 compounds from WOMBAT and PubChem; hERG data for N = 1948 compounds. A nonlinear SVM with a 40 pM cutoff shows the best accuracy (72%) with the WOMBAT data (ref. 193). [Pg.320]

The support vector machine (SVM) is originally a binary supervised classification algorithm, introduced by Vapnik and his co-workers [13, 32] and based on statistical learning theory. Instead of the traditional empirical risk minimization (ERM) performed by artificial neural networks, the SVM algorithm is based on the structural risk minimization (SRM) principle. In its simplest form, a linear SVM for a two-class problem finds an optimal hyperplane that maximizes the separation between the two classes. The optimal separating hyperplane can be obtained by solving the following quadratic optimization problem: minimize ½||w||², subject to y_i(w·x_i + b) ≥ 1 for every training pattern i. [Pg.145]
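To make the optimization concrete, here is a minimal sketch of the standard hard-margin dual QP for a linear SVM, solved with cvxopt on a toy separable data set. The solver choice, helper names, and data are illustrative assumptions; the excerpt does not prescribe an implementation.

```python
# Minimal sketch: hard-margin linear SVM via its dual QP (illustrative only).
# Dual: maximize sum_i a_i - 1/2 sum_ij a_i a_j y_i y_j <x_i, x_j>
#       subject to a_i >= 0 and sum_i a_i y_i = 0
import numpy as np
from cvxopt import matrix, solvers

def fit_linear_svm(X, y):
    n = X.shape[0]
    Yx = y[:, None] * X
    P = matrix(Yx @ Yx.T)                    # P_ij = y_i y_j <x_i, x_j>
    q = matrix(-np.ones(n))
    G = matrix(-np.eye(n))                   # encodes a_i >= 0
    h = matrix(np.zeros(n))
    A = matrix(y.reshape(1, -1).astype(float))
    b = matrix(0.0)
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])
    w = Yx.T @ alpha                         # w = sum_i a_i y_i x_i
    sv = alpha > 1e-6                        # support vectors have a_i > 0
    b0 = float(np.mean(y[sv] - X[sv] @ w))   # offset from y_i (w.x_i + b) = 1
    return w, b0

# Toy usage: two separable clusters
X = np.array([[2.0, 2.0], [2.5, 3.0], [-2.0, -2.0], [-3.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b0 = fit_linear_svm(X, y)
print(np.sign(X @ w + b0))                   # decision function f(x) = sign(w.x + b)
```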

Using the SVM-BFS method, we select the features for the three artificial problems. The selected subset and its corresponding errors at each step are listed in Table 4.5. The SVMs used in the feature selection of these three data sets are a linear SVM, a nonlinear SVM with radial basis function (RBF) kernel (σ = 1.54), and an SVM with RBF kernel (... [Pg.69]

For the quadratic and nonlinear data sets, SVM-BFS can detect the irrelevant feature at the first step, whereas it cannot detect the irrelevant one for the linear data set. We think the reason is that the nonlinear redundant features cause a larger error than the random feature when the linear SVM is used. [Pg.70]
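The excerpt does not reproduce the SVM-BFS algorithm itself; as an illustration of the general idea, the hedged sketch below performs backward feature elimination guided by cross-validated SVM error, dropping at each step the feature whose removal hurts accuracy least. The function name and the use of scikit-learn are assumptions.

```python
# Hedged sketch of backward feature elimination guided by SVM error,
# in the spirit of SVM-BFS (the exact algorithm is not given in the excerpt).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def svm_backward_selection(X, y, kernel="linear", **svm_params):
    remaining = list(range(X.shape[1]))
    steps = []                                   # (dropped feature, CV error)
    while len(remaining) > 1:
        trials = []
        for f in remaining:
            cols = [c for c in remaining if c != f]
            acc = cross_val_score(SVC(kernel=kernel, **svm_params),
                                  X[:, cols], y, cv=5).mean()
            trials.append((acc, f))
        acc, dropped = max(trials)               # removal that keeps accuracy highest
        remaining.remove(dropped)
        steps.append((dropped, 1.0 - acc))
    return remaining, steps
```

Note that scikit-learn parameterizes the RBF width as gamma rather than σ (gamma = 1/(2σ²)), so a width such as σ = 1.54 would need to be converted.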

Based on the considerations presented above, the optimal separating hyperplane (OSH) conditions from Eq. [19] can be formulated as the following expression, which represents a linear SVM ... [Pg.311]

In previous sections, we introduced the linear SVM classification algorithm, which uses the training patterns to generate an optimum separating hyperplane. Such classifiers are not adequate when complex relationships exist between the input parameters and the class of a pattern. To discriminate linearly nonseparable classes of patterns, the SVM model can be fitted with nonlinear kernel functions, providing efficient classifiers for hard-to-separate classes of patterns. [Pg.323]
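A quick illustration of this point, under the assumption of scikit-learn and a synthetic concentric-circles data set (not from the text): a linear SVM fails on linearly nonseparable classes that an RBF-kernel SVM separates easily.

```python
# Sketch: linearly nonseparable classes handled by a nonlinear (RBF) SVM.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.4, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = SVC(kernel="linear", C=10).fit(X_tr, y_tr)
rbf = SVC(kernel="rbf", C=10, gamma=0.5).fit(X_tr, y_tr)
print("linear:", linear.score(X_te, y_te))   # near chance on concentric circles
print("rbf:   ", rbf.score(X_te, y_te))      # close to 1.0
```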

Figure 43. Linear SVM regression with soft margin and ε-insensitive loss function. The primal objective function is represented by the Lagrange function ...
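For readers who want to experiment with the setup in Figure 43, here is a minimal sketch of linear SVM regression with an ε-insensitive tube, assuming scikit-learn and illustrative data and parameter values.

```python
# Sketch: linear SVM regression with soft margin and epsilon-insensitive loss.
# Data and parameter values are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = 0.8 * X.ravel() + rng.normal(scale=0.2, size=100)

# epsilon sets the width of the insensitive tube; C is the soft-margin trade-off
svr = SVR(kernel="linear", C=1.0, epsilon=0.1).fit(X, y)
print(svr.coef_, svr.intercept_)
```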
The RBF kernel (experiments 16-24), with ACc between 0.96 and 0.97, has better calibration statistics than the linear kernel, but its performance in prediction only equals that of the linear SVM. Although many tests were performed for the neural kernel (experiments 25-51), the prediction statistics are low, with ACp between 0.64 and 0.88. This result is surprising, because the tanh function gives very good results in neural networks. Even the training statistics are low for the neural kernel, with ACc between 0.68 and 0.89. [Pg.355]

The last set of SVM models was obtained with the anova kernel (experiments 52-78), with ACp between 0.94 and 0.98. In fact, only experiment 58 has a better prediction accuracy (ACp = 0.98) than the linear SVM model from experiment 3. The linear SVM has six errors in prediction (all nonpolar compounds predicted to be polar), whereas the anova SVM has four prediction errors, also for nonpolar compounds. [Pg.355]

Our experiments with various kernels show that the performance of the SVM classifier is strongly dependent on the kernel shape. Considering the results of the linear SVM as a reference, many nonlinear SVM models have lower prediction statistics. It is also true that the linear classifier does a good job and there is not much room for improvement. Out of the 75 nonlinear SVM models, only one, with the anova kernel, has slightly higher prediction statistics than the linear SVM. [Pg.355]

In Table 10, we show the best cross-validation results for each kernel type. The radial kernel has the best predictions, followed by the linear SVM model. The remaining kernels have worse predictions than does the linear model. [Pg.360]

Finally, compounds with bell-pepper aroma were considered to be in class +1, whereas green and nutty pyrazines formed the class −1. Three kernels (RBF, polynomial, and anova) give much better predictions than does the linear SVM classifier: linear, C = 10, ACp = 0.74; polynomial, degree 2, C = 10, ACp = 0.88; RBF, C = 10, γ = 0.5, ACp = 0.89; neural, C = 100, a = 2, b = 1, ACp = 0.68; and anova, C = 10, γ = 0.5, d = 1, ACp = 0.87. Note that the number of support vectors depends on the kernel type (linear, SV = 27; RBF, SV = 43; anova, SV = 31; all for training with all compounds), so for this structure-odor model one might prefer the SVM model with a polynomial kernel, which is more compact, i.e., contains a lower number of support vectors. [Pg.362]
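Support-vector counts of the kind quoted above can be read directly off a fitted model. A hedged sketch, assuming scikit-learn and synthetic data rather than the pyrazine set (the anova kernel is not built in, so only linear, polynomial, and RBF are compared):

```python
# Sketch: comparing model compactness (number of support vectors) across kernels.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=5, random_state=0)
for name, clf in [
    ("linear", SVC(kernel="linear", C=10)),
    ("poly-2", SVC(kernel="poly", degree=2, C=10)),
    ("rbf",    SVC(kernel="rbf", gamma=0.5, C=10)),
]:
    clf.fit(X, y)
    print(name, "SV =", clf.n_support_.sum())   # support vectors per class, summed
```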

ASVM, http://www.cs.wisc.edu/dmi/asvm/. ASVM (Active Support Vector Machine) is a very fast linear SVM script for MATLAB, by Musicant and Mangasarian, developed for large datasets.

