
ε-insensitive loss function

Classical multiple regression uses a well-known loss function that is quadratic in the prediction errors. The loss function employed in SVR, however, is the ε-insensitive loss function; here, the loss is interpreted as a penalty or error measure. Use of the ε-insensitive loss function has the following implications. If the absolute residual is off-target by ε or less, there is no loss, that is, no penalty is imposed. If, on the other hand, the absolute residual is off-target by an amount greater than ε, a certain amount of loss is associated with the estimate, and this loss rises linearly with the absolute residual above ε. [Pg.152]
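In symbols, the behaviour just described corresponds to the standard linear ε-insensitive loss of a residual r = y − f(x) (the usual textbook form, not reproduced from the cited page):

\[
L_\varepsilon(r) \;=\;
\begin{cases}
0, & |r| \le \varepsilon,\\
|r| - \varepsilon, & |r| > \varepsilon.
\end{cases}
\]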

The use of SVM cannot solve all problems of noise in data processing, but SVM techniques can improve the processing of noisy data in several ways. For example, they provide a means of outlier deletion: by the leave-one-out (LOO) cross-validation method, the data samples with large prediction errors can be deleted, improving the data files. Besides, the adoption of the ε-insensitive loss function in support vector regression makes it more robust to noisy data sets. [Pg.6]
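As an illustration of this LOO screening idea, here is a minimal sketch assuming scikit-learn; the function name loo_outlier_screen, the RBF kernel, and the parameter values are illustrative assumptions, not taken from the cited text.

```python
# Minimal sketch of LOO-based outlier screening for SVR (assumptions:
# scikit-learn API; illustrative epsilon, C and threshold values).
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVR

def loo_outlier_screen(X, y, error_threshold, epsilon=0.1, C=1.0):
    """Flag samples whose leave-one-out prediction error exceeds a threshold."""
    loo = LeaveOneOut()
    errors = np.empty(len(y))
    for train_idx, test_idx in loo.split(X):
        # Refit the model with one sample held out, then predict it.
        model = SVR(kernel="rbf", epsilon=epsilon, C=C)
        model.fit(X[train_idx], y[train_idx])
        errors[test_idx] = np.abs(model.predict(X[test_idx]) - y[test_idx])
    # Boolean mask of suspected outliers, candidates for deletion.
    return errors > error_threshold
```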

The error of the calculated value is reduced when the values of the coefficients become smaller, so this error can be suppressed by minimizing ||w||² in the equation obtained. Using the ε-insensitive loss function and minimizing ||w||² are the two principles of support vector regression. [Pg.20]
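For reference, these two principles combine into the soft-margin SVR primal problem (the standard formulation from the SVR literature, not copied from the cited pages):

\[
\min_{w,\,b,\,\xi,\,\xi^{*}} \;\; \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\bigl(\xi_i + \xi_i^{*}\bigr)
\]
\[
\text{subject to}\quad
y_i - \langle w, x_i\rangle - b \le \varepsilon + \xi_i,\qquad
\langle w, x_i\rangle + b - y_i \le \varepsilon + \xi_i^{*},\qquad
\xi_i,\,\xi_i^{*} \ge 0.
\]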

Since the degree of carcinogenic activity can only be expressed semi-quantitatively, SVR with the ε-insensitive loss function is especially suitable for investigating this problem. Figure 12.3 compares the actual and calculated degrees of carcinogenic activity for 43 polycyclic aromatic hydrocarbons. [Pg.256]

In order to find the concrete conditions that avoid the formation of quench stains, SVR has been used to process data from the production process. In production practice, tinplate products with quench stains are classified into five classes according to the seriousness of the stains, judged by inspection of their appearance. This is only a semi-quantitative index of the degree of product fault, but it is understandable that such a data set can be treated with good results by SVR with the ε-insensitive loss function. Figure 14.6 illustrates the results of the computation; the regularity is rather clear. [Pg.283]

Figure 43 Linear SVM regression case with soft margin and ε-insensitive loss function. The primal objective function is represented by the Lagrange function.
Figure 2.9 shows the form of the linear and quadratic ε-insensitive loss functions for zero and non-zero ε. [Pg.45]

As with SVC, optimisation of the SVM parameters for regression is far from simple and, as mentioned, a trade-off is implicit in the use of the loss function. The penalty parameter (C) and ε must be considered in order to obtain a robust regression that is insensitive to the presence of outliers. If a non-linear kernel model is used (typically, the RBF), another term, the width of the Gaussian function, must also be taken into account. Further, as ε defines the radius of the ε-tube around the regression function, it also defines the number of SVs that are finally selected to construct the function. An excessively large value of ε results in fewer SVs (more of the experimental data fall within the ε-tube) and, therefore, the model may under-fit the calibration data. In this respect, Brereton and Lloyd stressed how easy it is to obtain over-fitted models that yield errors that are too large when unknowns are to be predicted. More details and practical examples can be found in both references. [Pg.397]
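A common way to handle the joint tuning of C, ε and the RBF width described above is a cross-validated grid search. The sketch below assumes scikit-learn, and the grids shown are illustrative assumptions rather than values from the cited reference.

```python
# Minimal sketch of joint tuning of C, epsilon and the RBF width
# (gamma) by cross-validated grid search; illustrative grids only.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {
    "C": [0.1, 1, 10, 100],          # penalty parameter
    "epsilon": [0.01, 0.1, 0.5],     # radius of the epsilon-tube
    "gamma": [0.001, 0.01, 0.1, 1],  # width of the Gaussian (RBF) kernel
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
# search.fit(X_train, y_train)   # calibration data supplied by the user
# print(search.best_params_)
```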

Figure 44 Loss functions for support vector machine regression: (a) quadratic, (b) Laplace, (c) Huber, (d) ε-insensitive.
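For reference, the four loss functions plotted in Figure 44 are commonly written as follows, with r the residual and μ the Huber threshold (standard forms, not transcribed from the figure):

\[
L_{\mathrm{quad}}(r) = r^{2}, \qquad
L_{\mathrm{Laplace}}(r) = |r|,
\]
\[
L_{\mathrm{Huber}}(r) =
\begin{cases}
\tfrac{1}{2}r^{2}, & |r| \le \mu,\\
\mu\,|r| - \tfrac{1}{2}\mu^{2}, & |r| > \mu,
\end{cases}
\qquad
L_{\varepsilon}(r) = \max\bigl(0,\, |r| - \varepsilon\bigr).
\]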
