Practical aspects of SVM classification

Up to this point we have given a mostly theoretical presentation of SVM classification and regression; it is now appropriate to show some practical applications of support vector machines, together with practical guidelines for their application in cheminformatics and QSAR. In this section we present several case studies in SVM classification; the next section is dedicated to applications of SVM regression. [Pg.350]

Studies investigating the universal approximation capabilities of support vector machines have demonstrated that SVMs with the usual kernels (such as polynomial, Gaussian RBF, or dot product kernels) can approximate any measurable or continuous function to any desired accuracy. Any set of [Pg.350]
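The behaviour of these usual kernels can be seen on a small example. The following is a minimal sketch, not taken from the original text: the dataset, parameter values, and variable names are illustrative assumptions, using scikit-learn's SVC to compare a dot product (linear), polynomial, and Gaussian RBF kernel on a simple nonlinear classification problem.

```python
# Minimal sketch (illustrative assumptions): compare the usual SVM kernels
# on a simple nonlinear dataset. Settings are not from the original text.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric classes: not separable with a plain dot-product (linear) kernel.
X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel, params in [("linear", {}),                 # dot product kernel
                       ("poly", {"degree": 3}),        # polynomial kernel
                       ("rbf", {"gamma": "scale"})]:   # Gaussian RBF kernel
    clf = SVC(kernel=kernel, C=1.0, **params).fit(X_train, y_train)
    print(f"{kernel:>6}: test accuracy = {clf.score(X_test, y_test):.3f}")
```

On such data the RBF and polynomial kernels typically recover the curved class boundary, whereas the linear kernel cannot, which is the practical meaning of the approximation results cited above.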

A common belief is that, because SVM is based on structural risk minimization, its predictions are better than those of algorithms based on empirical risk minimization. Many published examples show, however, that for real applications such beliefs do not carry much weight, and that other multivariate algorithms can sometimes deliver better predictions. [Pg.351]
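The practical way to settle such claims is an empirical head-to-head comparison. The sketch below is an assumed setup, not the authors' protocol: it scores an RBF-kernel SVM against another multivariate learner (a random forest, chosen here purely for illustration) by cross-validation on a standard dataset.

```python
# Minimal sketch (assumed setup): compare SVM with another multivariate
# algorithm by cross-validation rather than by theoretical pedigree.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

models = {
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:>14}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```

Whichever model wins on one dataset may lose on another; the point is that the comparison must be made, not assumed from the risk-minimization principle behind the algorithm.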

An important question to ask is the following: Do SVMs overfit? Some reports claim that, because they are derived from structural risk minimization, SVMs do not overfit. However, in this chapter we have already presented numerous examples where the SVM solution is overfitted for simple datasets, and more examples will follow. In real applications, one must carefully select the nonlinear kernel function needed to generate a classification hyperplane that is topologically appropriate and has optimum predictive power. [Pg.351]
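In practice, that selection is usually done by cross-validated search over the kernel and its parameters. The following is a minimal sketch under assumed settings (synthetic data, an arbitrary parameter grid), not a prescription from the original text.

```python
# Minimal sketch (assumed settings): choose the kernel and its parameters by
# cross-validated grid search, the usual guard against SVM overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
grid = GridSearchCV(
    pipe,
    param_grid={
        "svm__kernel": ["linear", "poly", "rbf"],
        "svm__C": [0.1, 1, 10, 100],
        "svm__gamma": ["scale", 0.01, 0.1, 1.0],
    },
    cv=5,
)
grid.fit(X_train, y_train)
print("best parameters:", grid.best_params_)
print("cross-validated accuracy:", round(grid.best_score_, 3))
print("held-out test accuracy:", round(grid.score(X_test, y_test), 3))
```

A large gap between the cross-validated score and the held-out test score is the warning sign of the overfitting discussed above.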

It is sometimes claimed that SVMs are better than artificial neural networks. This assertion rests on the facts that SVMs have a unique solution, whereas artificial neural networks can become stuck in local minima, and that finding the optimum number of hidden neurons of an ANN requires time-consuming calculations. Indeed, it is true that multilayer feed-forward neural networks can offer models that represent local minima, but they also give consistently good (although suboptimal) solutions, which is not the case with SVM (see examples in this section). Undeniably, for a given kernel and set of parameters, the SVM solution is unique. However, an infinite number of combinations of kernels and SVM parameters exists, resulting in an infinite set of unique SVM models. The unique SVM solution therefore brings little comfort to the researcher, because the theory cannot foresee which kernel and set of parameters are optimal for a given problem. [Pg.351]
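Both halves of this argument can be made concrete in a few lines. The sketch below uses illustrative assumptions (a toy dataset and arbitrary parameters): two SVM fits with the same kernel and parameters reproduce the same decision function, while two neural-network fits with different random initializations differ, yet both typically reach a reasonable, if suboptimal, solution.

```python
# Minimal sketch (illustrative assumptions): the SVM solution is reproducible
# for a fixed kernel and parameters, whereas an MLP depends on its random
# initialization but still tends to give a consistently good fit.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

# Two SVM fits with identical kernel/parameters give identical decision values.
svm_a = SVC(kernel="rbf", C=1.0, gamma=1.0).fit(X, y)
svm_b = SVC(kernel="rbf", C=1.0, gamma=1.0).fit(X, y)
print("SVM decision functions identical:",
      np.allclose(svm_a.decision_function(X), svm_b.decision_function(X)))

# Two MLP fits differ with the random seed, but both score comparably.
for seed in (0, 1):
    mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                        random_state=seed).fit(X, y)
    print(f"MLP (seed={seed}) training accuracy: {mlp.score(X, y):.3f}")
```

Uniqueness for one kernel/parameter choice says nothing about which of the infinitely many possible choices should be used, which is the point of the paragraph above.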

