Kernels Anova

A useful function is the anova kernel, whose shape is controlled by the parameters γ and d ... [Pg.332]
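In the SVM literature the anova kernel is commonly written as K(x, z) = (Σ_i exp(-γ (x_i - z_i)^2))^d, which matches the two shape parameters γ and d mentioned above. The snippet below is a minimal sketch of this form as a custom kernel for scikit-learn's SVC; the function name, default values, and data handling are illustrative assumptions, not taken from the source.

```python
import numpy as np
from sklearn.svm import SVC

def anova_kernel(X, Z, gamma=0.5, d=1):
    """Anova kernel: K(x, z) = (sum_i exp(-gamma * (x_i - z_i)**2)) ** d."""
    # Pairwise per-feature differences, shape (n_samples_X, n_samples_Z, n_features)
    diff = X[:, None, :] - Z[None, :, :]
    return np.exp(-gamma * diff ** 2).sum(axis=2) ** d

# SVC accepts a callable kernel that returns the Gram matrix between two sample sets.
clf = SVC(C=10, kernel=lambda A, B: anova_kernel(A, B, gamma=0.5, d=1))
```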

Frequently the exclusive use of the RBF kernel is rationalized by the claim that it is the best possible kernel for SVM models. The simple tests presented in this chapter (datasets from Tables 1-6) suggest that other kernels might be more useful for particular problems. For a comparative evaluation, we review below several SVM classification models obtained with five important kernels (linear, polynomial, Gaussian radial basis function, neural, and anova) and show that the SVM prediction capability varies significantly with the kernel type and parameter values used and that, in many cases, a simple linear model is more predictive than nonlinear kernels. [Pg.352]
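As a concrete illustration of this kind of comparison (a sketch of the general workflow, not the authors' exact protocol or data), the following reuses the anova_kernel sketch above and cross-validates one SVC model per kernel type on a placeholder dataset; scikit-learn's sigmoid kernel stands in for the neural (tanh) kernel, and all parameter values are arbitrary examples.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)  # placeholder data

models = {
    "linear":     SVC(C=10, kernel="linear"),
    "polynomial": SVC(C=10, kernel="poly", degree=2),
    "rbf":        SVC(C=10, kernel="rbf", gamma=0.5),
    "neural":     SVC(C=10, kernel="sigmoid"),  # tanh-type kernel as a stand-in
    "anova":      SVC(C=10, kernel=lambda A, B: anova_kernel(A, B, gamma=0.5, d=1)),
}

for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold CV, i.e. leave-20%-out
    print(f"{name:10s} mean accuracy = {scores.mean():.2f}")
```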

The last set of SVM models was obtained with the anova kernel (experiments 52-78), with ACp between 0.94 and 0.98. In fact, only experiment 58 has a better prediction accuracy (ACp = 0.98) than the linear SVM model from experiment 3. The linear SVM has six errors in prediction (all nonpolar compounds predicted to be polar), whereas the anova SVM has four prediction errors, also for nonpolar compounds. [Pg.355]

Our experiments with various kernels show that the performance of the SVM classifier is strongly dependent on the kernel shape. Considering the results of the linear SVM as a reference, many nonlinear SVM models have lower prediction statistics. It is also true that the linear classifier does a good job and there is not much room for improvement. Out of the 75 nonlinear SVM models, only one, with the anova kernel, has slightly higher prediction statistics than the linear SVM. [Pg.355]

The table reports the experiment number Exp; capacity parameter C; kernel type K (L, linear; P, polynomial; R, radial basis function; N, neural; A, anova) and the corresponding kernel parameters; calibration results (TPc, true positives in calibration; FNc, false negatives in calibration; TNc, true negatives in calibration; FPc, false positives in calibration; SVc, number of support vectors in calibration; ACc, calibration accuracy); and L20%O prediction results (TPp, true positives in prediction; FNp, false negatives in prediction; TNp, true negatives in prediction; FPp, false positives in prediction; SVp, average number of support vectors in prediction; ACp, prediction accuracy). [Pg.358]
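To make the table legend concrete: each accuracy entry is the fraction of correctly classified compounds, AC = (TP + TN) / (TP + TN + FP + FN), and L20%O denotes leave-20%-out cross-validation, i.e. five folds. The sketch below shows how such counts and the accuracy can be obtained from predicted labels; the labels are invented for illustration, not values from the source tables.

```python
from sklearn.metrics import confusion_matrix

def accuracy(tp, tn, fp, fn):
    """AC = (TP + TN) / (TP + TN + FP + FN), as in the ACc and ACp columns."""
    return (tp + tn) / (tp + tn + fp + fn)

# Invented +1 / -1 class labels, for illustration only.
y_true = [+1, +1, +1, -1, -1, -1, -1, -1]
y_pred = [+1, +1, -1, -1, -1, -1, +1, -1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[-1, +1]).ravel()
print(tp, fn, tn, fp, accuracy(tp, tn, fp, fn))   # 2 1 4 1 0.75
```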

The best cross-validation results for each kernel type are presented in Table 9. The linear, polynomial, RBF, and anova kernels have similar results of reasonable quality, whereas the neural kernel has very poor statistics; the slight classification improvement obtained for the RBF and anova kernels is not statistically significant. [Pg.359]
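The source does not state which test underlies the statistical-significance remark, so the following is only one illustrative way to check such a small difference: an exact McNemar test on paired cross-validated predictions from two classifiers (for example, RBF versus linear).

```python
import numpy as np
from scipy.stats import binomtest

def mcnemar_exact(y_true, pred_a, pred_b):
    """Exact McNemar test on paired predictions from two classifiers.

    b = samples model A classifies correctly and model B misclassifies,
    c = the reverse; under H0 (equal error rates) b ~ Binomial(b + c, 0.5).
    """
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    a_ok, b_ok = pred_a == y_true, pred_b == y_true
    b = int(np.sum(a_ok & ~b_ok))
    c = int(np.sum(~a_ok & b_ok))
    return binomtest(b, b + c, 0.5).pvalue if b + c else 1.0

# Usage with out-of-fold predictions, e.g. from sklearn.model_selection.cross_val_predict:
# p = mcnemar_exact(y, pred_rbf, pred_linear)
```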

Finally, compounds with bell-pepper aroma were considered to be in class +1, whereas green and nutty pyrazines formed the class -1. Three kernels (RBF, polynomial, and anova) give much better predictions than does the linear SVM classifier: linear, C = 10, ACp = 0.74; polynomial, degree 2, C = 10, ACp = 0.88; RBF, C = 10, γ = 0.5, ACp = 0.89; neural, C = 100, a = 2, b = 1, ACp = 0.68; and anova, C = 10, γ = 0.5, d = 1, ACp = 0.87. Note that the number of support vectors depends on the kernel type (linear, SV = 27; RBF, SV = 43; anova, SV = 31; all for training with all compounds), so for this structure-odor model, one might prefer the SVM model with a polynomial kernel, which is more compact, i.e., contains a smaller number of support vectors. [Pg.362]
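Model compactness can be compared directly by counting the support vectors each trained SVM retains; in scikit-learn this is exposed through the n_support_ attribute. A minimal sketch with placeholder data and arbitrary parameter values:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=8, random_state=1)  # placeholder data

for name, clf in {
    "linear":     SVC(C=10, kernel="linear"),
    "polynomial": SVC(C=10, kernel="poly", degree=2),
    "rbf":        SVC(C=10, kernel="rbf", gamma=0.5),
}.items():
    clf.fit(X, y)
    print(f"{name:10s} support vectors: {clf.n_support_.sum()}")   # total over both classes
```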

In this section, we compared the prediction capabilities of five kernels, namely linear, polynomial, Gaussian radial basis function, neural, and anova. Several guidelines that might help the modeler obtain a predictive SVM model can be extracted from these results: (1) it is important to compare the predictions of a large number of kernels and combinations of parameters; (2) the linear kernel should be used as a reference to compare the results from nonlinear kernels; (3) some datasets can be separated with a linear hyperplane, and in such instances, the use of a nonlinear kernel should be avoided; and (4) when the relationships between input data and class attribution are nonlinear, RBF kernels do not necessarily give the optimum SVM classifier. [Pg.362]

Table 15 contains the best SVM regression results for each kernel. The cross-validation results show that the correlation coefficient decreases in the following order of kernels: linear > degree 2 polynomial > neural > RBF > anova. The MLR and SVMR linear models are very similar, and both are significantly better than the SVM models obtained with nonlinear kernels. The inability of nonlinear models to outperform the linear ones can be attributed to the large experimental errors in determining BCF. [Pg.370]
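For the regression case, the same kind of side-by-side comparison can be sketched with scikit-learn's SVR against ordinary multiple linear regression, scoring each model by the cross-validated correlation coefficient; the dataset below is a random placeholder, not the BCF data, and the parameter values are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

X, y = make_regression(n_samples=150, n_features=6, noise=10.0, random_state=0)  # placeholder

models = {
    "MLR":         LinearRegression(),
    "SVMR linear": SVR(kernel="linear", C=10),
    "SVMR rbf":    SVR(kernel="rbf", C=10, gamma=0.5),
}

for name, model in models.items():
    y_cv = cross_val_predict(model, X, y, cv=5)   # leave-20%-out predictions
    r = np.corrcoef(y, y_cv)[0, 1]                # cross-validated correlation coefficient
    print(f"{name:12s} r = {r:.3f}")
```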

