
Kernel function polynomial

Support Vector Machine (SVM) is a classification and regression method developed by Vapnik [30]. In support vector regression (SVR), the input variables are first mapped into a higher-dimensional feature space by means of a kernel function, and a linear model is then constructed in this feature space. The kernel functions most often used in SVM are the linear, polynomial, radial basis function (RBF), and sigmoid functions. The generalization performance of SVM depends on the selection of several internal parameters of the algorithm (C and ε), the type of kernel, and the parameters of the kernel [31]. [Pg.325]
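As a concrete illustration of these ingredients, here is a minimal SVR sketch. Scikit-learn is an assumed implementation (the excerpt does not name a library), and the data, kernel choice, C, and ε values below are purely illustrative.

```python
# Minimal SVR sketch; library choice and parameter values are assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(100, 1))               # input variables
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(100)  # noisy target

# The kernel maps the inputs into a higher-dimensional feature space;
# C and epsilon are the internal parameters (C, ε) mentioned above.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y)
print(model.predict([[0.5]]))
```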

Dorao and Jakobsen [40, 41] showed that the QMOM is ill-conditioned (see, e.g., Press et al. [149]) and not reliable when the complexity of the problem increases. In particular, it was shown that the high-order moments are not well represented by QMOM, and that the higher the order of the moment, the larger the error in the predictions. Moreover, the nature of the kernel functions determines the number of moments that must be used by QMOM to reach a given accuracy: the higher the polynomial order of the kernel functions, the larger the number of moments required for reliable predictions. This can reduce the applicability of QMOM in the simulation of fluid particle flows, where the kernel functions can have quite complex functional dependences. On the other hand, QMOM can still be used in applications where the kernel functions are given as low-order polynomials, as in some solid particle or crystallization problems. [Pg.1090]

This transformation into the higher-dimensional space is realized with a kernel function. The best choice of function depends on the data. In the SVM literature, typical kernel functions applied for classification are linear and polynomial kernels, or radial basis functions. Depending on the kernel function applied, some parameters must be optimized, for instance the degree of the polynomial function (33,34). Once the data are transformed into another dimensional space by the kernel function, linear SVM can be applied. The main parameter to optimize in the SVM algorithm for nonseparable cases, as described in the previous section, is the regularization parameter, C. [Pg.316]
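A sketch of this parameter optimization follows; GridSearchCV and the synthetic dataset are assumptions made for illustration, not the procedure of refs. (33,34).

```python
# Grid search over kernel type, polynomial degree, and C (sketch; the cited
# papers do not specify a tool -- scikit-learn's GridSearchCV is assumed).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

param_grid = {
    "kernel": ["linear", "poly", "rbf"],
    "degree": [2, 3, 4],        # only used by the polynomial kernel
    "C": [0.1, 1, 10, 100],     # regularization parameter (nonseparable case)
}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_)
```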

Popular choices of kernel functions for SVMs include the polynomial kernel of degree p, K(x1, x2) = (x1·x2 + c)^p, and the Gaussian RBF kernel, K(x1, x2) = exp(−‖x1 − x2‖²/(2σ²)). [Pg.200]

When the hybrid-kernel SVM improved by GA is used as an IDS, the overall performance of the IDS can be improved. Experimental results showed that this method was useful and that the detection rate for intruders was above 95% on the KDD CUP 1999 data. The study is motivated by the need to effectively control misclassification error and to enhance the learning and generalization performance of SVMs. We turn to a hybrid-kernel SVM composed of a polynomial kernel and an RBF kernel, which have been shown to have compensatory characteristics. In order to control misclassification error, we use a GA to optimize the parameters of the hybrid kernel function. All of the above demonstrate the feasibility and advantage of our method, but several aspects inevitably require further study. [Pg.174]
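A minimal sketch of such a hybrid kernel follows, assuming the common construction as a weighted sum of the polynomial and RBF kernels; the mixing weight and kernel parameters are placeholders for the quantities the GA would optimize, and scikit-learn's callable-kernel interface is an assumed implementation.

```python
# Hybrid kernel = weighted sum of polynomial and RBF kernels (sketch).
# lam, degree, and gamma stand in for the GA-optimized parameters.
import numpy as np
from sklearn.svm import SVC

def hybrid_kernel(X, Y, lam=0.5, degree=2, gamma=0.1):
    poly = (X @ Y.T + 1.0) ** degree                     # polynomial part
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    rbf = np.exp(-gamma * sq_dists)                      # RBF part
    return lam * poly + (1.0 - lam) * rbf                # convex combination

# Illustrative data; scikit-learn accepts a callable returning a Gram matrix.
X = np.random.default_rng(2).normal(size=(40, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = SVC(kernel=hybrid_kernel).fit(X, y)
```

A convex combination of two positive semi-definite kernels is itself positive semi-definite, which is why this weighted sum is a valid kernel.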

The expression (8) is used for testing a new pattern with the trained classifier. There are many possible kernels, such as linear, Gaussian, polynomial, and multilayer perceptron. In this study, we have used polynomial and Gaussian (RBF) kernel functions, of the forms given in (9) and (10) below ... [Pg.147]
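Since equations (9) and (10) are not reproduced in the excerpt, the sketch below uses the standard textbook forms of the two kernels, which is an assumption about the study's exact expressions.

```python
# Standard forms of the two kernels (assumed; equations (9) and (10)
# are not reproduced in the excerpt).
import numpy as np

def poly_kernel(x1, x2, d=2, c=1.0):
    # K(x1, x2) = (x1 . x2 + c)^d
    return (np.dot(x1, x2) + c) ** d

def rbf_kernel(x1, x2, sigma=1.0):
    # K(x1, x2) = exp(-||x1 - x2||^2 / (2 sigma^2))
    return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma**2))
```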

where the x's are the two vectors of experimental data for samples 1 and 2, (′) denotes the transpose of a vector, and a and b are constants. An appealing property of SVMs is that the a priori complex step of non-linear mapping can be calculated in the original space by the kernel functions, after some key parameters are optimized. This means that the new dimensions arise from combinations of experimental variables. Another curious property is that the kernel itself yields a measure of the similarity between two samples in the feature space, using just the original data. This is called the "kernel trick". Further, it is not necessary for the analyst to know the mathematical functions behind the kernel in advance. Once the type of kernel is selected (e.g., linear, RBF or polynomial), the non-linear mapping functions are set automatically. [Pg.393]
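A numerical check of the kernel trick, assuming the degree-2 polynomial kernel K(x, y) = (x·y + 1)² and its explicit feature map (both standard constructions, not taken from the excerpt):

```python
# Numerical check of the "kernel trick": the degree-2 polynomial kernel
# equals an inner product in an explicit 6-dimensional feature space,
# yet is computed entirely from the original 2-dimensional data.
import numpy as np

def phi(x):
    # Explicit feature map for K(x, y) = (x . y + 1)^2 with x in R^2.
    x1, x2 = x
    return np.array([x1**2, x2**2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     1.0])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

k_trick = (np.dot(x, y) + 1.0) ** 2      # kernel on the original data
k_explicit = np.dot(phi(x), phi(y))      # inner product in feature space
print(k_trick, k_explicit)               # both equal 4.0
```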

Considering the CS2 example, several assays were made with four different kernels: linear, polynomial (with different degrees), radial basis function (RBF), and a sigmoid type. The RMSEC and RMSEP errors for calibration and validation were considered in order to select a model, and a satisfactory trade-off was sought. As expected, some good fits yielded predictions that were not useful for the unknowns. [Pg.398]
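A sketch of this selection procedure follows, with an assumed synthetic dataset standing in for the CS2 data; RMSEC is computed on the calibration set and RMSEP on the held-out validation set.

```python
# Fit each kernel on a calibration set and compare RMSEC (calibration)
# with RMSEP (prediction); dataset and settings are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(150, 3))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.standard_normal(150)
X_cal, X_val, y_cal, y_val = train_test_split(X, y, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    model = SVR(kernel=kernel).fit(X_cal, y_cal)
    rmsec = mean_squared_error(y_cal, model.predict(X_cal)) ** 0.5
    rmsep = mean_squared_error(y_val, model.predict(X_val)) ** 0.5
    print(f"{kernel:8s}  RMSEC={rmsec:.3f}  RMSEP={rmsep:.3f}")
```

A low RMSEC with a high RMSEP is the overfitting signature the excerpt warns about: a good fit that yields non-useful predictions for the unknowns.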

Figure 9.13 shows that the polynomial kernel function with degree equal to 2 is the best choice, and that the minimum of the MRE appears at ε = 0.05. [Pg.218]

Based on the above-mentioned results, the polynomial kernel function (degree = 2) with C = 80 and ε = 0.05 has been used for modeling by SVR. The mathematical model obtained can be expressed as follows ... [Pg.218]
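Translated into code, the reported settings correspond to the following instantiation (scikit-learn is an assumed implementation; the study's data and the resulting model equation are not available here):

```python
# The reported hyperparameters as an SVR instantiation (sketch only).
from sklearn.svm import SVR

svr = SVR(kernel="poly", degree=2, C=80, epsilon=0.05)
# svr.fit(X, y)  # X, y: the experimental data used in the study
```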

The prediction ability of a support vector machine depends on the selection of the kernel and the parameter C. The rate of correct computerized prediction, tested by the leave-one-out (LOO) cross-validation method, has been used as the criterion for optimizing the SVC computation. Four kinds of kernels (linear kernel, polynomial kernel of second degree, Gaussian kernel, and sigmoid kernel functions) with 10... [Pg.269]
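A sketch of this kernel-selection criterion, assuming scikit-learn's LOO utilities and an illustrative dataset in place of the study's data:

```python
# Leave-one-out (LOO) cross-validation as the criterion for kernel choice.
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=8, random_state=0)

kernels = {
    "linear": SVC(kernel="linear"),
    "poly (degree 2)": SVC(kernel="poly", degree=2),
    "Gaussian": SVC(kernel="rbf"),
    "sigmoid": SVC(kernel="sigmoid"),
}
for name, clf in kernels.items():
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name:16s} LOO accuracy = {acc:.3f}")
```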

Frequently the exclusive use of the RBF kernel is rationalized by claiming that it is the best possible kernel for SVM models. The simple tests presented in this chapter (datasets from Tables 1-6) suggest that other kernels might be more useful for particular problems. For a comparative evaluation, we review below several SVM classification models obtained with five important kernels (linear, polynomial, Gaussian radial basis function, neural, and anova) and show that the SVM prediction capability varies significantly with the kernel type and parameter values used and that, in many cases, a simple linear model is more predictive than nonlinear kernels. [Pg.352]

Recall from Section 1.5 that any function in the kernel of the Laplacian (on any space of functions) is called a harmonic function. In other words, a function f is harmonic if ∇²f = 0. The harmonic functions in the example just above are the harmonic homogeneous polynomials of degree two; we call this vector space H². In Exercise 2.23 we invite the reader to check that the following set is a basis of H² ... [Pg.53]
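As a worked check (not the book's Exercise 2.23, whose basis set is not reproduced in the excerpt), one can verify harmonicity directly and exhibit one standard basis of this five-dimensional space:

```latex
% Direct verification that two degree-2 homogeneous polynomials are harmonic:
\nabla^2 (xy) = \partial_x^2(xy) + \partial_y^2(xy) + \partial_z^2(xy) = 0,
\qquad
\nabla^2 (x^2 - y^2) = 2 - 2 + 0 = 0.
% One standard basis of the harmonic homogeneous polynomials of degree two
% (an assumption; the exercise's own choice of basis is not shown):
\{\, xy,\; xz,\; yz,\; x^2 - y^2,\; y^2 - z^2 \,\}.
```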

Figure 7. MSE value on the test data for indicator 3 as a function of the h value and the kernel type (diamond: Gaussian, square: polynomial).
Kernel types: linear kernel; polynomial of degree d; radial basis function; sigmoid function. [Pg.236]


See other pages where Kernel function polynomial is mentioned: [Pg.66]    [Pg.88]    [Pg.171]    [Pg.171]    [Pg.127]    [Pg.134]    [Pg.140]    [Pg.48]    [Pg.59]    [Pg.392]    [Pg.43]    [Pg.209]    [Pg.214]    [Pg.227]    [Pg.260]    [Pg.1195]    [Pg.1196]    [Pg.432]    [Pg.296]    [Pg.362]    [Pg.364]    [Pg.129]    [Pg.144]    [Pg.107]    [Pg.64]    [Pg.295]    [Pg.29]    [Pg.212]    [Pg.214]    [Pg.173]    [Pg.47]    [Pg.143]    [Pg.170]    [Pg.238]    [Pg.240]   







Function polynomial

Kernel functionals

Polynomial
