Big Chemical Encyclopedia


Decision plane

Harrington, P. D. Minimal neural networks—Concerted optimization of multiple decision planes. Chemom. Intell. Lab. Syst. 1993, 18, 157-170. [Pg.341]

Extended to a higher-dimensional case, say, to the r-dimensional case, a hyperplane (decision plane) is defined by the condition... [Pg.238]

In fact, b0 + bᵀxᵢ gives the signed distance of an object xᵢ to the decision plane, and for classification primarily only the sign is important (although the distance from the decision plane may be used to measure the certainty of the classification). If the two groups are linearly separable, one can find a hyperplane which gives a perfect group separation as follows ... [Pg.239]
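The signed-distance idea can be sketched in Python with NumPy; the function and variable names here are illustrative, not from the original text, and the plane parameters are an arbitrary example:

```python
import numpy as np

def signed_distance(x, w, b0):
    """Signed distance of pattern x to the plane b0 + w.x = 0,
    assuming the normal vector w is normalized to unit length."""
    return b0 + np.dot(w, x)

def classify(x, w, b0):
    """For classification only the sign matters: +1 -> group 1, -1 -> group 2."""
    return 1 if signed_distance(x, w, b0) >= 0 else -1

w = np.array([1.0, 0.0])   # unit normal of the decision plane
b0 = -1.0                  # plane: x1 = 1

print(classify(np.array([2.0, 0.5]), w, b0))  # → 1 (right of the plane)
print(classify(np.array([0.0, 0.0]), w, b0))  # → -1 (left of the plane)
```

The magnitude of `signed_distance` can serve as the certainty measure mentioned above.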

Two groups of objects can be separated by a decision surface (defined by a discriminant variable). Methods using a decision plane, and thus a linear discriminant variable (corresponding to a linear latent variable as described in Section 2.6), are LDA, PLS, and LR (Section 5.2.3). Only if linear classification methods show insufficient prediction performance should nonlinear methods be applied, such as classification trees (CART, Section 5.4), SVMs (Section 5.6), or ANNs (Section 5.5). [Pg.261]

Fig. 3.2 Typical applications using chemical multivariate data (schematically shown for 2-dimensional data): cluster analysis (a); separation of categories (b); discrimination by a decision plane and classification of unknowns (c); modelling categories and principal component analysis (d); feature selection, where X2 is not relevant for category separation (e); relationship between a continuous property Y and the features X1 and X2 (f).
In a binary classification problem one has to distinguish only between two mutually exclusive classes (e.g. class 1 contains compounds with a certain chemical substructure and class 2 contains all other compounds). If the two classes form well-separated clusters in the pattern space it is often possible to find a plane (decision plane) which separates the classes completely (Figure 3). In this case the data are said to be "linearly separable". The calculation of a suitable decision plane is often called the "training". [Pg.5]

FIGURE 3. A decision plane (a straight line in this 2-dimensional example) separates the two classes of objects and is defined by a decision vector w. [Pg.5]

The decision plane is usually defined by a decision vector (weight vector) w orthogonal to the plane and passing through the origin. The weight vector is very suitable for deciding whether a point lies on the left or right side of the plane. [Pg.5]

Many considerations in the d-dimensional hyperspace are simplified if the decision plane and the decision vector pass through the origin. Such a decision plane was possible in the special case shown in Figure 1, but an extension of the data is necessary for general cases (Figure 6). [Pg.6]

FIGURE 6. No decision plane through the origin is possible. [Pg.7]

If all pattern vectors are augmented by an additional component (with the same arbitrary value in all patterns), then a decision plane which passes through the origin is possible as shown in Figure 7. [Pg.8]
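The augmentation step can be written as a one-line NumPy operation; this is a minimal sketch, and the function name and the default value 1.0 are illustrative choices (the text only requires the same arbitrary value in all patterns):

```python
import numpy as np

def augment(X, value=1.0):
    """Append a constant component (same arbitrary value in every pattern)
    so that a separating decision plane through the origin becomes possible."""
    n = X.shape[0]
    return np.hstack([X, np.full((n, 1), value)])

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Xa = augment(X)
print(Xa.shape)  # → (2, 3): each pattern gained one component
```

After augmentation, the bias of the plane is absorbed into the last component of the weight vector, so only planes through the origin need to be considered.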

FIGURE 7. An additional pattern component (same value in all patterns) makes possible a decision plane through the origin. [Pg.8]

FIGURE 10. The problem of the n/d-ratio for a trivial case. Number of patterns n = 3, number of dimensions d = 2. All 3 pattern points may be arbitrarily assigned to class 1 or class 2; in any case a straight line ("decision plane") is obtained that separates the classes correctly. [Pg.12]

Multicategory classifications require even greater values of n/d. If n/d is less than 3 for a binary classification, the statistical significance of a decision plane is doubtful. A more detailed discussion of this problem is given in Chapter 10.4. [Pg.12]

Statistical significance of a decision plane and an objective mathematical evaluation of the classifier are essential but not necessarily sufficient for successful application of a classifier. If the relationship between features and class membership is interpreted in terms of chemical parameters, one has to be very cautious and always remember the fundamental rule: "correlation does not necessarily imply causation". [Pg.12]

If all pattern points of a certain class form a compact cluster in the pattern space, then this class can often be well represented by the centre of gravity (centroid) of the cluster. The centre of gravity is used as a prototype (template) of that class. An unknown pattern is classified into the class associated with the nearest centre of gravity. In other words, the symmetry plane between the two centres of gravity is used as a decision plane (Figure 11). [Pg.18]
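The nearest-centroid rule described above is a few lines of NumPy; the function names and the example data are illustrative:

```python
import numpy as np

def centroids(X1, X2):
    """Centres of gravity of the two training clusters."""
    return X1.mean(axis=0), X2.mean(axis=0)

def classify_nearest_centroid(x, c1, c2):
    """Assign x to the class with the nearer centre of gravity; this is
    equivalent to using the symmetry plane between c1 and c2 as the
    decision plane."""
    d1 = np.linalg.norm(x - c1)
    d2 = np.linalg.norm(x - c2)
    return 1 if d1 <= d2 else 2

X1 = np.array([[0.0, 0.0], [1.0, 1.0]])   # compact cluster of class 1
X2 = np.array([[5.0, 5.0], [6.0, 6.0]])   # compact cluster of class 2
c1, c2 = centroids(X1, X2)
print(classify_nearest_centroid(np.array([1.5, 1.0]), c1, c2))  # → 1
```

Comparing Euclidean distances to the two centroids and comparing the sign of the scalar product with the symmetry plane's normal vector give identical decisions.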

FIGURE 11. Both classes form compact clusters and can be represented by the centres of gravity c1 and c2. The unknown x is classified as belonging to class 1 because the distance to c1 is shorter than that to c2. The symmetry plane can be used as a decision plane. [Pg.18]

The convergence rate of the training is significantly increased if the initial weight vector already contains information about the clusters to be separated. A suitable way is to use the symmetry plane between the two centres of gravity at the beginning of the training (see Chapter 2.1.4, equation (16)). Additional use of the a priori probabilities of classes 1 and 2 further improves the initial position of the decision plane. [Pg.33]
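One way to obtain such an initial weight vector is sketched below, assuming the augmented-pattern convention in which the last weight component acts as the bias; this is an illustrative construction of the symmetry plane, not the book's exact equation (16):

```python
import numpy as np

def initial_weight(c1, c2):
    """Initial decision vector from the symmetry plane between the two
    class centres of gravity. The plane's normal points from c2 towards
    c1 and passes through their midpoint; the bias is appended as the
    last component (augmented-pattern convention)."""
    w = c1 - c2                            # normal direction of the symmetry plane
    b = -np.dot(w, (c1 + c2) / 2.0)        # plane passes through the midpoint
    return np.append(w, b)

c1 = np.array([0.0, 0.0])
c2 = np.array([4.0, 0.0])
w0 = initial_weight(c1, c2)
print(w0)  # → [-4.  0.  8.]  (the plane x1 = 2)
```

Patterns nearer to c1 then give a positive scalar product with w0, so the training already starts from a sensible separation.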

Widely used is a reflexion of the old decision plane at the misclassified pattern. After this correction the distance between x and the decision plane is the same as before, but x lies on the correct side of the plane. The same consideration is valid for the scalar product s before correction and s' after correction. [Pg.34]
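The reflexion correction has a closed form: subtracting twice the projection of the pattern's contribution flips the sign of the scalar product while preserving its magnitude. A minimal sketch, with illustrative names:

```python
import numpy as np

def reflect_correction(w, x):
    """Reflect the decision plane at the misclassified pattern x.
    The new scalar product s' = w'.x equals -s, so the distance of x
    to the plane is unchanged but x now lies on the correct side."""
    s = np.dot(w, x)
    return w - 2.0 * s / np.dot(x, x) * x

w = np.array([1.0, 0.0])
x = np.array([2.0, 0.0])        # misclassified pattern: s = w.x = 2, wrong sign
w_new = reflect_correction(w, x)
print(np.dot(w_new, x))         # → -2.0: same magnitude, opposite sign
```
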

Extensive studies by Jurs et al. [131] with mass spectra showed that a reflexion of the decision plane yields the highest convergence rate. Other methods of weight vector correction, and a method with a small positive feedback upon correct classification, were investigated by the same authors but did not give better results. [Pg.35]

FIGURE 18. Subset training of the learning machine. The weight vector is corrected by reflexion of the decision plane at the misclassified pattern. [Pg.36]

FIGURE 19. Unpleasant final positions of the decision plane after training with a learning machine. [Pg.38]

The randomness of the final position of a decision plane computed with the learning machine requires special caution in the evaluation. [Pg.38]

If the number n of patterns in the training set is less than 3 times the number d of dimensions, a senseless decision plane may have been obtained. [Pg.38]

If the decision plane between two clusters is given a finite thickness, then the final position of the plane will be in the middle between the clusters (Figure 20). An optimum position of the decision plane can be achieved if the thickness is enlarged stepwise until the training no longer converges. [Pg.39]

The scalar product s is used to measure the distance between a pattern x and the decision plane. Because s depends not only on the positions of x and w but also on the lengths of the vectors, the weight vector must be normalized to a fixed length (e.g. to length 1); see Chapter 2.1.7. [Pg.39]
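A minimal training loop combining both ideas — a normalized weight vector and a dead zone of thickness 2t — might look as follows. This is an illustrative sketch using a simple additive correction, not the book's exact algorithm, and all names are assumptions:

```python
import numpy as np

def normalize(w):
    """Fix the weight vector to length 1 so that s = w.x measures
    the distance of a pattern to the decision plane."""
    return w / np.linalg.norm(w)

def train_with_thickness(X, z, t, max_epochs=100):
    """Error-correction training that demands s*z > t for every augmented
    pattern, i.e. no pattern inside the dead zone of +-t around the plane.
    Returns the weight vector, or None if no decision plane of thickness
    2t is found within max_epochs."""
    w = normalize(X[0] * z[0])
    for _ in range(max_epochs):
        converged = True
        for x, target in zip(X, z):
            if np.dot(w, x) * target <= t:       # wrong side or inside dead zone
                w = normalize(w + target * x)    # simple additive correction
                converged = False
        if converged:
            return w
    return None

# augmented 1-D example: the last component is the constant 1
X = np.array([[2.0, 1.0], [-2.0, 1.0]])
z = np.array([1.0, -1.0])
w = train_with_thickness(X, z, t=0.5)
```

Enlarging `t` stepwise and retraining until `None` is returned locates the maximum usable thickness, as described above.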

FIGURE 20. Decision plane with thickness 2t. The training process tries to find a decision plane with no patterns within a dead zone of ±t from the plane. [Pg.40]

Several chemical applications of the learning machine showed that a maximized thickness of the decision plane significantly increased the performance of the classifier [123, 167, 168, 233, 319, 320]. [Pg.40]

If the classifier is applied to unknown patterns, the dead zone from the training can be used as a rejection zone: patterns which give a scalar product between -t and +t are not classified. Chemical applications showed that the predictive ability for the remaining patterns outside the rejection zone is increased by this method [320]. The distance to the decision plane may be used as a measure of confidence for classifiers with a continuous response (Chapter 2.6.1). [Pg.40]
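The rejection rule itself is a three-way sign test; a minimal sketch with illustrative names (returning 0 for rejected patterns is an arbitrary convention):

```python
import numpy as np

def classify_with_rejection(x, w, t):
    """Patterns with a scalar product between -t and +t fall into the
    rejection zone and are not classified (returned as 0)."""
    s = np.dot(w, x)
    if s > t:
        return 1    # class 1
    if s < -t:
        return -1   # class 2
    return 0        # rejected: too close to the decision plane

w = np.array([1.0, 0.0])  # unit normal of the trained decision plane
print(classify_with_rejection(np.array([0.1, 0.0]), w, t=0.5))  # → 0 (rejected)
print(classify_with_rejection(np.array([2.0, 0.0]), w, t=0.5))  # → 1
```
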

FIGURE 21. Decision plane with a "negative thickness" defines a no-decision region for linearly inseparable data. This trick enables the learning machine to converge. [Pg.41]

Linear regression analysis allows one to find a well-defined and in some sense optimal decision plane even for overlapping clusters. [Pg.42]

For simplicity, one component with a fixed value is added to all pattern vectors in the same way as for other classification methods (Chapter 1.3.). Thus the decision plane obtained passes through the origin of the coordinate system. The total number of dimensions (including the added one) is denoted by d. [Pg.44]

The parameters w of the decision function form the decision vector w which is perpendicular to the decision plane required. The scalar product of decision vector w and pattern vector x gives the classification result. By definition, positive values refer to class 1 (z = +1) and negative values to class 2 (z = -1). [Pg.44]
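The regression approach can be sketched as an ordinary least-squares fit of the targets z = +1/-1 on the augmented patterns; this is a plausible reading of the method described above, with illustrative names and example data:

```python
import numpy as np

def regression_decision_vector(X, z):
    """Least-squares decision vector for augmented patterns X and targets
    z = +1 (class 1) / -1 (class 2). Unlike error-correction training,
    this yields a well-defined plane even for overlapping clusters."""
    w, *_ = np.linalg.lstsq(X, z, rcond=None)
    return w

def classify(x, w):
    """Sign of the scalar product: positive -> class 1, negative -> class 2."""
    return 1 if np.dot(w, x) >= 0 else -1

# augmented 1-D example: the last component is the constant 1
X = np.array([[2.0, 1.0], [3.0, 1.0], [-2.0, 1.0], [-3.0, 1.0]])
z = np.array([1.0, 1.0, -1.0, -1.0])
w = regression_decision_vector(X, z)
print(classify(np.array([2.5, 1.0]), w))   # → 1
```

Because the decision vector minimizes the squared deviation of w·x from the targets rather than requiring perfect separation, it remains defined when the clusters overlap.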

