
**Linear discriminant function**

The linear discriminant function is one of the most commonly used classification techniques and is available in all the most popular statistical software packages. It should be borne in mind, however, that it is only a simplification of the Bayes classifier: it assumes that the variates are drawn from a multivariate normal distribution and that the groups have similar covariance matrices. If these conditions do not hold, the linear discriminant function should be used with care and the results subjected to careful analysis. [Pg.138]

Table 4. Standardized Linear Discriminant Function Coefficients [Pg.425]

Table 4. Discriminant scores using the linear discriminant function as classifier (a), and the resulting confusion matrix (b) [Pg.136]

Another nonparametric routine develops a linear discriminant function through an iterative least squares approach (22). The function is minimized [Pg.119]

As an approximation to the Bayes rule, the linear discriminant function provides the basis for the most common of the statistical classification schemes. [Pg.142]

Linear discriminant analysis (LDA) [41] separates two data classes of feature vectors by constructing a hyperplane defined by a linear discriminant function [Pg.222]

Calculate the centroids for each class, and hence the linear discriminant function, given by $(\bar{x}_A - \bar{x}_B) \cdot C_{AB}^{-1} \cdot x_i'$, for each object $i$. Represent this graphically. Suggest a cut-off value of this function that will discriminate most of the compounds. What is the percentage correctly classified? [Pg.265]
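The calculation in this exercise can be sketched numerically as follows. The data and class sizes below are invented for illustration, $C_{AB}$ is taken to be the pooled variance-covariance matrix of the two classes, and the midpoint-between-centroids cut-off is one plausible choice, not the one the exercise intends.

```python
import numpy as np

# Hypothetical training data: rows are objects, columns are variates.
class_a = np.array([[2.0, 3.0], [3.0, 3.5], [2.5, 4.0]])
class_b = np.array([[6.0, 1.0], [7.0, 1.5], [6.5, 0.5]])

mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)

# Pooled variance-covariance matrix C_AB (equal-covariance assumption).
n_a, n_b = len(class_a), len(class_b)
c_ab = ((n_a - 1) * np.cov(class_a, rowvar=False) +
        (n_b - 1) * np.cov(class_b, rowvar=False)) / (n_a + n_b - 2)

# Discriminant score (mean_a - mean_b) . C_AB^-1 . x_i for every object i.
w = np.linalg.solve(c_ab, mean_a - mean_b)
scores = np.vstack([class_a, class_b]) @ w

# One natural cut-off: the score of the midpoint between the two centroids.
cutoff = w @ (mean_a + mean_b) / 2.0
predicted_a = scores > cutoff
print(predicted_a)  # objects scoring above the cut-off are assigned to class A
```

With this toy data every object lands on the correct side of the cut-off, i.e. 100% correctly classified; real data would normally show some overlap.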

Fig. 33.8. Situation where principal component (PC) and linear discriminant function (DF) are essentially the same (a) and very different (b).

Least squares models, 39, 158
Linear combination, normalized, 65
Linear combination of variables, 64
Linear discriminant analysis, 134
Linear discriminant function, 132
Linear interpolation, 47
Linear regression, 156
Loadings, factor, 74
Lorentzian distribution, 14 [Pg.215]

Another routine develops a decision tree of binary choices which, taken as a whole, can classify the members of the training set. The decision tree generated implements a piecewise linear discriminant function. The tree is developed by splitting the data set into two parts in an optimal way at each node. A node is considered terminal when no advantageous split can be made; a terminal node is labelled with the pattern class most represented among the patterns present at that node. [Pg.119]
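The binary-splitting idea described above can be sketched as follows. This is a toy version, assuming exhaustive search over single-variable thresholds and plain misclassification count as the split criterion; the excerpt does not specify the routine's actual optimality measure.

```python
# Toy training set: four one-variable objects, two pattern classes.
X = [[1.0], [2.0], [8.0], [9.0]]
y = ["A", "A", "B", "B"]

def majority(labels):
    return max(set(labels), key=labels.count)

def misclassified(labels):
    """Errors made by labelling every pattern with the majority class."""
    m = majority(labels)
    return sum(1 for lab in labels if lab != m)

def grow(X, y):
    """Split each node to minimise misclassification; stop when no split helps."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [i for i, row in enumerate(X) if row[j] <= t]
            right = [i for i in range(len(X)) if i not in left]
            if not left or not right:
                continue
            err = (misclassified([y[i] for i in left]) +
                   misclassified([y[i] for i in right]))
            if best is None or err < best[0]:
                best = (err, j, t, left, right)
    if best is None or best[0] >= misclassified(y):
        return ("leaf", majority(y))   # terminal node: majority class label
    _, j, t, left, right = best
    return ("node", j, t,
            grow([X[i] for i in left], [y[i] for i in left]),
            grow([X[i] for i in right], [y[i] for i in right]))

def predict(tree, x):
    while tree[0] == "node":
        _, j, t, lo, hi = tree
        tree = lo if x[j] <= t else hi
    return tree[1]

tree = grow(X, y)
print(predict(tree, [1.5]), predict(tree, [8.5]))  # -> A B
```

Each internal node is an axis-parallel threshold, so the decision boundary the whole tree traces out is piecewise linear, as the excerpt states.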

These results show that pattern recognition can be used as an effective tool to characterize polycyclic aromatic hydrocarbon carcinogens. Using a set of only 28 molecular structure descriptors, linear discriminants can be found that correctly dichotomize 191 out of 200 randomly selected PAHs. The same set of 28 descriptors supports a linear discriminant function with an average predictive ability of over ninety percent in randomized predictive-ability tests. [Pg.122]

Instead of using raw data, it is possible to use the PCs of the data. This acts as a form of variable reduction, but also simplifies the distance measures, because the variance-covariance matrix will contain nonzero elements only on the diagonal. The expressions for the Mahalanobis distance and linear discriminant functions simplify dramatically. [Pg.242]

A further simplification can be made to the Bayes classifier if the covariance matrices of the two groups are known, or assumed, to be similar. This condition implies that the correlations between variables are independent of the group to which the objects belong. Extreme examples are illustrated in Figure 5. In such cases the groups are linearly separable and a linear discriminant function can be evaluated. [Pg.132]


© 2019 chempedia.info