
Statistical learning: support vector machine

They mainly rely on a training data set, which they analyse to learn relationships between data elements and produce an inferred function. They involve algorithms such as Bayesian statistics, decision tree (DT) learning, support vector machines (SVM), random forests (RF) and nearest-neighbour algorithms. [Pg.136]
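
As a purely illustrative sketch (not part of the source text), the following Python snippet fits several of the algorithms named above using scikit-learn on a synthetic data set; the data set, class names and parameter values are assumptions chosen only for demonstration.

# Minimal sketch: fitting several of the supervised learning algorithms
# named above with scikit-learn on a synthetic two-class data set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic data standing in for a chemical training set (assumption)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Bayesian (naive Bayes)": GaussianNB(),
    "Decision tree (DT)": DecisionTreeClassifier(random_state=0),
    "Support vector machine (SVM)": SVC(kernel="rbf", C=1.0),
    "Random forest (RF)": RandomForestClassifier(n_estimators=100, random_state=0),
    "Nearest neighbours (k-NN)": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)               # learn the inferred function
    print(name, model.score(X_test, y_test))  # hold-out accuracy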

The support vector machine (SVM) was originally a binary supervised classification algorithm, introduced by Vapnik and his co-workers [13, 32] on the basis of statistical learning theory. Instead of the traditional empirical risk minimization (ERM) performed by artificial neural networks, the SVM algorithm is based on the structural risk minimization (SRM) principle. In its simplest form, the linear SVM for a two-class problem finds an optimal hyperplane that maximizes the separation between the two classes. The optimal separating hyperplane can be obtained by solving the following quadratic optimization problem ... [Pg.145]
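
The quadratic optimization problem referred to above is truncated in the source. For orientation only, the standard linear hard-margin formulation is given below in LaTeX notation; maximizing the margin 2/||w|| between the classes is equivalent to minimizing (1/2)||w||^2. The source's own equation may differ, e.g. by including slack variables for the soft-margin case.

\begin{aligned}
\min_{\mathbf{w},\,b}\quad & \tfrac{1}{2}\,\lVert \mathbf{w} \rVert^{2} \\
\text{subject to}\quad & y_i\left(\mathbf{w}^{\mathsf{T}}\mathbf{x}_i + b\right) \ge 1,
\qquad i = 1,\dots,N .
\end{aligned}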

Varnek A, Baskin I (2012) Machine learning methods for property prediction in chemoinformatics: quo vadis? J Chem Inf Model 52(6):1413-1437. doi:10.1021/ci200409x
Vapnik V (1998) Statistical learning theory. Wiley-Interscience, New York
Schölkopf B, Smola AJ (2002) Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press, Cambridge... [Pg.456]

In this chapter, the basic principles of statistical learning theory will be introduced, and the possible applications of support vector machines to various fields of chemistry and chemical technology will be discussed. [Pg.2]

The success of support vector machines has stimulated many computer scientists to search for new machine learning methods built on the spirit of statistical learning theory. An effective way to control or suppress overfitting in artificial neural networks is to minimize the weights of the ANN, just as the norm of the weight vector w is minimized in support vector regression. Based on this idea, we can construct the weight decay ANN (WD-ANN). [Pg.21]
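
As a hedged illustration (the exact form used in the source is not shown), weight decay augments the usual ANN error function with a penalty on the squared weights, analogous to the norm term minimized in support vector regression; the symbols E_WD, E_data and the regularization strength lambda are notation introduced here, written in LaTeX:

E_{\mathrm{WD}}(\mathbf{w}) \;=\; E_{\mathrm{data}}(\mathbf{w})
\;+\; \frac{\lambda}{2}\sum_{i} w_i^{2},
\qquad \lambda > 0 .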

Section 2.1 attempted to determine the maximal margin hyperplane in an intuitive way. Support vector machines, the successful implementation of statistical learning theory (SLT), are built on the basis of the maximal margin hyperplane described above. It is important to reveal the relationship between formula (2.16) and SLT. [Pg.32]

More recently, other statistical learning methods such as neural networks and support vector machines have been explored for predicting compounds of higher structural diversity than those covered by QSAR and QSPR approaches. These methods have shown promising potential in a number of studies. Many attempts have been made to elucidate the QSAR of antimicrobials by using different physicochemical parameters. [Pg.1344]

Support vector machines represent an extension to nonlinear models of the generalized portrait algorithm developed by Vapnik and Lerner. The SVM algorithm is based on statistical learning theory and the Vapnik-Chervonenkis... [Pg.291]
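
The nonlinear extension mentioned here is usually expressed through the kernel form of the SVM dual problem. The following is the standard textbook statement (soft-margin, with kernel function K and penalty parameter C), given in LaTeX notation for orientation rather than as the source's own equation:

\begin{aligned}
\max_{\boldsymbol{\alpha}}\quad & \sum_{i=1}^{N}\alpha_i
 - \tfrac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j\, y_i y_j\, K(\mathbf{x}_i,\mathbf{x}_j) \\
\text{subject to}\quad & 0 \le \alpha_i \le C,\qquad \sum_{i=1}^{N}\alpha_i y_i = 0 ,
\end{aligned}

with decision function f(\mathbf{x}) = \operatorname{sign}\big(\sum_{i=1}^{N}\alpha_i y_i K(\mathbf{x}_i,\mathbf{x}) + b\big).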

