
Hyperplane classifier

Figure 22 The optimal hyperplane classifier obtained with all training patterns (a) is identical with the one computed with only the support vector patterns (b).
Discriminant analysis is a supervised learning technique which uses classified dependent data. Here, the dependent data (y values) are not on a continuous scale but are divided into distinct classes. There are often just two classes (e.g. active/inactive, soluble/not soluble, yes/no), but more than two is also possible (e.g. high/medium/low, 1/2/3/4). The simplest situation involves two variables and two classes, and the aim is to find a straight line that best separates the data into its classes (Figure 12.37). With more than two variables, the line becomes a hyperplane in the multidimensional variable space. Discriminant analysis is characterised by a discriminant function, which in the particular case of linear discriminant analysis (the most popular variant) is written as a linear combination of the independent variables ... [Pg.719]
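As a rough illustration of such a linear discriminant function, the sketch below fits a two-class, two-variable example with scikit-learn's LinearDiscriminantAnalysis; the descriptor values and class labels are invented and the library choice is an assumption, not part of the excerpted text.

# Hypothetical illustration of linear discriminant analysis for a
# two-class, two-variable problem (data invented for the example).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Two independent variables (e.g. two molecular descriptors) per sample.
X = np.array([[1.0, 2.1], [1.3, 1.9], [0.8, 2.4],   # class 1
              [3.2, 0.9], [2.9, 1.1], [3.5, 0.7]])  # class 0
y = np.array([1, 1, 1, 0, 0, 0])

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The discriminant function is a linear combination of the variables,
# f(x) = w1*x1 + w2*x2 + b; its sign decides the class.
print(lda.coef_, lda.intercept_)
print(lda.predict([[2.0, 1.5]]))   # classify a new pattern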

Because a hyperplane corresponds to a boundary between pattern classes, such a discriminant function naturally forms a decision rule. The global nature of this approach is apparent in Fig. 19. An infinitely long decision line is drawn based on the given data. Regardless of how closely or distantly related an arbitrary pattern is to the data used to generate the discriminant, the pattern will be classified as either ω1 or ω2. When the arbitrary pattern is far removed from the data used to generate the discriminant, the approach is extremely prone to extrapolation errors. [Pg.49]

Support vector machines In addition to more traditional classification methods like clustering or partitioning, other computational approaches have recently also become popular in chemoinformatics, and support vector machines (SVMs) (Warmuth et al. 2003) are discussed here as an example. Typically, SVMs are applied as classifiers for binary property predictions, for example, to distinguish active from inactive compounds. Initially, a set of descriptors is selected and training set molecules are represented as vectors based on their calculated descriptor values. Then linear combinations of training set vectors are calculated to construct a hyperplane in descriptor space that best separates active and inactive compounds, as illustrated in Figure 1.9. [Pg.16]
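A minimal sketch of this workflow is given below, using a linear-kernel SVC from scikit-learn as a stand-in for the SVM implementations discussed in the excerpt; the descriptor matrix and activity labels are invented for illustration.

# Hypothetical sketch: binary SVM classification of "active" vs "inactive"
# compounds from calculated descriptor vectors (values invented).
import numpy as np
from sklearn.svm import SVC

# Each row is a training-set molecule represented by its descriptor values.
descriptors = np.array([[0.2, 1.1, 3.0],
                        [0.3, 0.9, 2.8],
                        [1.9, 0.1, 0.5],
                        [2.1, 0.2, 0.4]])
activity = np.array([1, 1, 0, 0])            # 1 = active, 0 = inactive

svm = SVC(kernel="linear")                   # separating hyperplane in descriptor space
svm.fit(descriptors, activity)

print(svm.support_vectors_)                  # training patterns that define the hyperplane
print(svm.predict([[0.4, 1.0, 2.9]]))        # predict a new compound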

Figure 13.10 SVM training results in the optimal hyperplane separating classes of data. The optimal hyperplane is the one with the maximum distance from the nearest training patterns (support vectors). The three support vectors defining the hyperplane are shown as solid symbols. D(x) is the SVM decision function (classifier function).
The essence of the differences between the operation of radial basis function networks and multilayer perceptrons can be seen in Figure 4.1, which shows data from the hypothetical classification example discussed in Chapter 3. Multilayer perceptrons classify data by the use of hyperplanes that divide the data space into discrete areas; radial basis functions, on the other hand, cluster the data into a finite number of ellipsoid regions. Classification is then a matter of finding which ellipsoid is closest for a given test data point. [Pg.41]
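The contrast can be sketched roughly as follows; the multilayer perceptron is taken from scikit-learn, and the "nearest ellipsoid" rule of an RBF-style classifier is approximated here with one Gaussian component fitted per class (the data and all parameter choices are invented assumptions, not from the original text).

# Hypothetical sketch of the two viewpoints: an MLP separating the space with
# (piecewise) hyperplanes versus an RBF-style rule that assigns a point to the
# nearest Gaussian (ellipsoidal) cluster fitted to each class.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(30, 2))   # class 0 cluster
X1 = rng.normal([3.0, 3.0], 0.5, size=(30, 2))   # class 1 cluster
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

# Multilayer perceptron: decision regions bounded by hyperplane segments.
mlp = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0).fit(X, y)

# RBF-style rule: one ellipsoidal Gaussian per class, classify by higher density.
g0 = GaussianMixture(n_components=1, random_state=0).fit(X0)
g1 = GaussianMixture(n_components=1, random_state=0).fit(X1)
point = np.array([[2.5, 2.8]])
rbf_class = int(g1.score_samples(point)[0] > g0.score_samples(point)[0])

print(mlp.predict(point)[0], rbf_class)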

Hence only the points x_i that satisfy the margin constraint with equality will have non-zero Lagrange multipliers. These points are termed support vectors (SVs). All the SVs lie on the margin, and hence the number of SVs can be very small. Consequently, the hyperplane is determined by a small subset of the training set, and the solution to the optimal classification problem is given by ... [Pg.172]
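The excerpt breaks off before the final expression; a standard form of this solution, written in terms of the support vectors only (a conventional reconstruction rather than the exact equation of the source), is

D(\mathbf{x}) \;=\; \operatorname{sign}\!\Bigl( \sum_{i \in \mathrm{SV}} \alpha_i \, y_i \, (\mathbf{x}_i \cdot \mathbf{x}) + b \Bigr),

where the sum runs only over the support vectors, the \alpha_i are their non-zero Lagrange multipliers, and y_i \in \{+1, -1\} are their class labels.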

Nonparallel plane proximal classifier (NPPC) is a recently developed kernel classifier that classifies a pattern by its proximity to one of two nonparallel hyperplanes [21, 22]. The advantage of the NPPC is that its training can be accomplished by solving two systems of linear equations instead of the quadratic program required for training standard SVM classifiers [17, 18], while its performance is comparable to that of the SVM classifier. This fact motivated us to evaluate the performance of multiclass NPPC in tea quality prediction. [Pg.149]

NPPC [22] is a binary classifier that assigns a test pattern to a class by its proximity to one of the two planes shown in Fig. 5. The two planes are obtained by solving two nonlinear programming problems (NPPs) with a quadratic form of loss function. Each plane is clustered around a particular class of data by minimizing the sum of squared distances of the patterns of that class from it, while keeping the patterns of the other class at a distance of 1 with soft errors. Thus, the objective of NPPC is to find two hyperplanes ... [Pg.150]
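The excerpt stops before the formulas; a plausible sketch of the first of the two problems, consistent with the description above (patterns of the two classes stacked as rows of matrices A and B, e_1 and e_2 vectors of ones, quadratic loss on the soft errors \boldsymbol{\xi} so that training reduces to a linear system), is

\min_{\mathbf{w}_1,\, b_1,\, \boldsymbol{\xi}} \;\; \tfrac{1}{2}\,\lVert A\mathbf{w}_1 + \mathbf{e}_1 b_1 \rVert^2 \;+\; \tfrac{c}{2}\,\lVert \boldsymbol{\xi} \rVert^2 \quad \text{s.t.} \quad -(B\mathbf{w}_1 + \mathbf{e}_2 b_1) + \boldsymbol{\xi} \;\ge\; \mathbf{e}_2,

with the second hyperplane (\mathbf{w}_2, b_2) obtained from the symmetric problem in which the roles of A and B are exchanged; a new pattern is then assigned to the class whose hyperplane lies closer to it.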

This idea of classifying a pattern by its proximity to one of the two hyperplanes of the binary NPPC can be extended to multiclass NPPC by using the decomposition techniques described in the previous section for SVM classifiers. In [20], the authors discuss these methods in detail, and they are therefore omitted here. [Pg.152]

S. Ghorai, S.J. Hossain, A. Mukherjee, P.K. Dutta, Newton method for nonparallel plane proximal classifier with unity norm hyperplanes. Signal Process. 90(1), 93-104 (2010)

In the OVR method, k binary classifier models are constructed for a k-class problem [23]. In doing so, the patterns of the ith class are considered as the positive samples and the patterns of all other classes are considered as the negative samples for the ith binary classifier model. This decomposition method is shown in Fig. 1 for a three-class problem: it builds three classifiers, (a) class 1 versus classes 2 and 3, (b) class 2 versus classes 1 and 3, and (c) class 3 versus classes 1 and 2. The combined OVR decision function assigns a sample to the class corresponding to the maximum value among the k binary decision functions, i.e. to the furthest positive hyperplane. [Pg.197]
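A minimal sketch of this decomposition for a three-class problem is shown below; a generic linear SVM from scikit-learn stands in for the per-class binary classifier (the original uses NPPC), and the data are invented.

# Hypothetical sketch of one-versus-rest (OVR) decomposition for k = 3 classes:
# build k binary classifiers and pick the class whose decision value is largest.
import numpy as np
from sklearn.svm import LinearSVC

X = np.array([[0.1, 0.2], [0.2, 0.1],       # class 1
              [2.0, 2.1], [2.1, 1.9],       # class 2
              [4.0, 0.1], [4.2, 0.2]])      # class 3
y = np.array([1, 1, 2, 2, 3, 3])

models = []
for k in (1, 2, 3):
    yk = np.where(y == k, 1, -1)            # class k positive, all others negative
    models.append(LinearSVC().fit(X, yk))

x_new = np.array([[2.2, 2.0]])
scores = [m.decision_function(x_new)[0] for m in models]
print(1 + int(np.argmax(scores)))           # predicted class = furthest positive hyperplane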

The SVM (SVR) maximizes the prediction accuracy of the classification (regression) model while simultaneously avoiding overfitting of the data. In SVM, the inputs are first nonlinearly mapped into a high-dimensional feature space (Φ), wherein they are classified using a linear hyperplane (Fig. 3.4). [Pg.138]
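A brief sketch of this idea: with a Gaussian (RBF) kernel, the linear hyperplane in the feature space Φ corresponds to a nonlinear boundary in the input space. The scikit-learn call and the XOR-style toy data below are illustrative assumptions, not the implementation of the cited work.

# Hypothetical sketch: non-linearly separable inputs handled by an implicit
# mapping into a high-dimensional feature space via an RBF kernel.
import numpy as np
from sklearn.svm import SVC

# XOR-like data: no single hyperplane separates it in the original input space.
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)
print(clf.predict(X))                       # the kernel map makes the classes separable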

We sometimes refer to the minimum of the margin distribution as the (functional) margin of a hyperplane (w, b) on a training set S [42]. Hence, the maximal margin classifier is obtained by determining the pair (w*, b*) that solves the following problem ... [Pg.29]
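The problem referred to above is the standard maximal margin formulation, reconstructed here in conventional notation (the excerpt itself stops short of the equations):

\max_{\mathbf{w},\, b} \;\; \gamma \quad \text{s.t.} \quad y_i \bigl( \langle \mathbf{w}, \mathbf{x}_i \rangle + b \bigr) \;\ge\; \gamma, \quad \lVert \mathbf{w} \rVert = 1, \quad i = 1, \dots, \ell,

or, equivalently, after rescaling so that the functional margin equals 1, minimize \tfrac{1}{2}\lVert \mathbf{w} \rVert^2 subject to y_i(\langle \mathbf{w}, \mathbf{x}_i \rangle + b) \ge 1.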

Figure 14 Using the linear classifier defined by the hyperplane H, the pattern is predicted to belong to the class -1.
A hyperplane w · x + b = 0 can be denoted as a pair (w, b). A training set of patterns is linearly separable if at least one linear classifier exists, defined by the pair (w, b), which correctly classifies all training patterns (see Figure 15). All patterns from class +1 are located in the space region defined by w · x + b > 0, and all patterns from class -1 are located in the space region defined by w · x + b < 0... [Pg.304]

