
Margin and Optimal Separating Plane

1 Linear classification problem

Suppose that we are given a set of training data S = {(x_1, y_1), ..., (x_l, y_l)}, where each input x_i ∈ ℝⁿ and each label y_i ∈ {+1, −1}.

The goal is to find a decision function g : ℝⁿ → {+1, −1} that can accurately predict the label of unseen data (x, y). That is, the binary classification is performed by means of a real-valued function f(x) = ⟨w, x⟩ + b: the input x is assigned to the positive class if f(x) ≥ 0 and to the negative class otherwise, i.e. g(x) = sgn(f(x)).
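Written out in code, this decision rule is just a thresholded inner product. The following minimal Python sketch (an illustration added here, not part of the original text; the convention that f(x) = 0 maps to +1 is one common choice) makes it explicit:

```python
import numpy as np

def g(w: np.ndarray, b: float, x: np.ndarray) -> int:
    """Decision function g(x) = sgn(f(x)), where f(x) = <w, x> + b."""
    f = float(np.dot(w, x)) + b   # real-valued score f(x)
    return 1 if f >= 0.0 else -1  # threshold at zero: predict +1 or -1
```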

The classification problem can thus be transformed into finding a pair of parameters w and b, called in the literature the weight vector and the bias, respectively. Several simple iterative algorithms have been proposed for finding such parameters; the classical one is the perceptron algorithm.

The perceptron algorithm was proposed by Frank Rosenblatt in 1956 and has attracted a great deal of interest since then. It starts with an initial weight vector w and adapts it each time a training point is misclassified by the current weights. The algorithm is a mistake-driven procedure [42], i.e. the weight vector and the bias are only updated on misclassified examples.
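A minimal Python sketch of this mistake-driven procedure follows (an illustration of the textbook update rule w ← w + y_i x_i, b ← b + y_i, not necessarily the exact variant of the cited source, which may rescale the bias update):

```python
import numpy as np

def perceptron_train(X: np.ndarray, y: np.ndarray, max_epochs: int = 100):
    """Perceptron: (w, b) change only when a training point is misclassified."""
    w = np.zeros(X.shape[1])  # initial weight vector
    b = 0.0                   # initial bias
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            # A point is misclassified when y_i * (<w, x_i> + b) <= 0.
            if yi * (np.dot(w, xi) + b) <= 0:
                w = w + yi * xi   # mistake-driven update: w <- w + y_i x_i
                b = b + yi        # and b <- b + y_i
                mistakes += 1
        if mistakes == 0:  # a full pass without mistakes: data are separated
            break
    return w, b
```

On linearly separable data this loop terminates with a separating hyperplane; on non-separable data it would cycle forever, which is why the max_epochs cap is needed.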

The following theorem shows that if the training sample is consistent with some simple perceptron, i.e. the data are linearly separable, then this algorithm converges after a finite number of iterations. In the theorem, w and b define a decision boundary that correctly classifies all training examples, and every training point lies at distance at least γ (the margin) from that decision boundary.
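For concreteness, a standard statement of this convergence result is Novikoff's theorem; the version below is the textbook form that includes a bias term (the exact constant varies across sources and may differ from the theorem referenced above):

```latex
% Novikoff's perceptron convergence theorem (form with a bias term).
% Assumptions: \|x_i\| \le R for all i, and there exist w^* with \|w^*\| = 1
% and b^* such that every training point satisfies the margin condition.
\[
  y_i \bigl( \langle w^{*}, x_i \rangle + b^{*} \bigr) \ge \gamma
  \quad \text{for all } i
  \;\Longrightarrow\;
  \#\{\text{perceptron updates}\} \le \left( \frac{2R}{\gamma} \right)^{2}.
\]
```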

