Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Linear discriminant functions

Fig. 33.8. Situation where principal component (PC) and linear discriminant function (DF) are essentially the same (a) and very different (b).
Once determined, these parameters can be used to create linear discriminant functions of the form... [Pg.50]

Table 4. Standardized Linear Discriminant Function Coefficients... [Pg.425]

Initially an optimised model was constructed from the data collected as outlined above, using a principal component (PC)-fed linear discriminant analysis (LDA) model (described elsewhere) [7, 89]. The linear discriminant function was calculated for maximal group separation, and each individual spectral measurement was projected onto the model (using leave-one-out cross-validation) to obtain a score. The scores for each individual spectrum projected onto the model and colour coded for consensus pathology are shown in Fig. 13.3. The simulation experiments used this optimised model as a baseline against which to compare the performance of models with spectral perturbations applied to them. The optimised model training performance achieved 93% accuracy overall for the three groups. [Pg.324]
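The leave-one-out loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the data points are made up, and a simple nearest-centroid classifier stands in for the PC-fed LDA model.

```python
# Hypothetical two-variable data; nearest-centroid stands in for PC-fed LDA.
data = [((1.0, 2.0), "A"), ((1.2, 1.9), "A"), ((3.0, 4.1), "B"), ((3.2, 3.9), "B")]

def centroid(points):
    n = len(points)
    return tuple(sum(p[j] for p in points) / n for j in range(2))

def predict(x, train):
    # Assign x to the class with the nearest centroid (squared Euclidean).
    cents = {label: centroid([p for p, c in train if c == label])
             for label in {c for _, c in train}}
    return min(cents, key=lambda l: sum((a - b) ** 2 for a, b in zip(x, cents[l])))

# Leave-one-out cross-validation: each object is scored by a model
# trained on all the other objects.
hits = 0
for i, (x, label) in enumerate(data):
    train = data[:i] + data[i + 1:]
    hits += predict(x, train) == label
accuracy = 100.0 * hits / len(data)
```

With these well-separated toy classes every held-out object is classified correctly; real spectra would of course give a lower figure, such as the 93% reported above.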

In supervised pattern recognition, a major aim is to define the distance of an object from the centre of a class. There are two principal uses of statistical distances. The first is to obtain a measurement analogous to a score, often called the linear discriminant function, first proposed by the statistician R. A. Fisher. This differs from the distance above in that it is a single number if there are only two classes. It is analogous to the distance along line 2 in Figure 4.26, but defined by... [Pg.237]
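A two-class Fisher score of this kind can be sketched in pure Python. All numbers below are hypothetical; the score for each object is the single number (x̄_A − x̄_B)′ C⁻¹ x, using the pooled variance-covariance matrix C of the two classes.

```python
# Hypothetical two-class, two-variable training data (not from the text).
class_a = [(1.0, 2.0), (1.2, 1.8), (0.8, 2.2), (1.1, 2.1)]
class_b = [(3.0, 4.0), (3.2, 3.8), (2.8, 4.2), (3.1, 4.1)]

def mean(data):
    n = len(data)
    return [sum(x[j] for x in data) / n for j in range(len(data[0]))]

def pooled_cov(a, b):
    """Pooled variance-covariance matrix of two classes (2 variables)."""
    ma, mb = mean(a), mean(b)
    dof = len(a) + len(b) - 2
    c = [[0.0, 0.0], [0.0, 0.0]]
    for data, m in ((a, ma), (b, mb)):
        for x in data:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    c[i][j] += d[i] * d[j]
    return [[c[i][j] / dof for j in range(2)] for i in range(2)]

def inv2(m):
    """Inverse of a 2x2 matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def fisher_score(x, ma, mb, cinv):
    """Single-number score (xbar_A - xbar_B)' Cinv x for object x."""
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [d[0] * cinv[0][0] + d[1] * cinv[1][0],
         d[0] * cinv[0][1] + d[1] * cinv[1][1]]
    return w[0] * x[0] + w[1] * x[1]

ma, mb = mean(class_a), mean(class_b)
cinv = inv2(pooled_cov(class_a, class_b))
# Along this axis, every class A object scores higher than every class B object.
scores_a = [fisher_score(x, ma, mb, cinv) for x in class_a]
scores_b = [fisher_score(x, ma, mb, cinv) for x in class_b]
```

Because the score is a projection onto a single axis, two well-separated classes end up on opposite sides of a cut-off value, which is what makes it usable directly as a classifier.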

The calculation of the linear discriminant function is presented in Table 4.25 and the values are plotted in Figure 4.30. It can be seen that objects 5, 6, 12 and 18 are not easy to classify. The centroids of each class in this new dataset could be calculated using the linear discriminant function, and the distance from these values could then be calculated; however, this would result in a diagram comparable to Figure 4.27, missing the information obtained by taking two measurements. [Pg.239]

Instead of using raw data, it is possible to use the PCs of the data. This acts as a form of variable reduction, but also simplifies the distance measures, because the variance-covariance matrix will only contain nonzero elements on the diagonals. The expressions for Mahalanobis distance and linear discriminant functions simplify dramatically. [Pg.242]
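The simplification can be seen directly: when the variables are uncorrelated PC scores, the variance-covariance matrix is diagonal, and the Mahalanobis distance collapses to a sum of variance-scaled squared differences with no matrix inversion needed. The numbers below are made up for illustration.

```python
# Per-PC variances form the diagonal of the variance-covariance matrix;
# off-diagonal elements are zero because PC scores are uncorrelated.
variances = [4.0, 1.0]     # hypothetical variances of PC1 and PC2
centroid  = [0.0, 0.0]     # hypothetical class centroid in PC space
point     = [2.0, 1.0]     # object to be measured

def mahalanobis_diag(x, c, var):
    """Mahalanobis distance when the covariance matrix is diagonal."""
    return sum((xi - ci) ** 2 / v for xi, ci, v in zip(x, c, var)) ** 0.5

d = mahalanobis_diag(point, centroid, variances)
```

Here the distance is sqrt(2²/4 + 1²/1) = sqrt(2): each PC simply contributes its squared deviation divided by its own variance.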

Calculate the centroids for each class, and hence the linear discriminant function given by (x̄_A − x̄_B)·C_AB^(−1)·x_i^T for each object i. Represent this graphically. Suggest a cut-off value of this function which will discriminate most of the compounds. What is the percentage correctly classified? [Pg.265]
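One common choice of cut-off, once the scores are in hand, is the midpoint between the two class centroids on the discriminant axis; the percentage correctly classified then follows by counting. The scores below are invented purely to illustrate the arithmetic.

```python
# Hypothetical discriminant scores for objects of known class membership.
scores = {"A": [5.1, 4.8, 5.3, 4.6], "B": [2.0, 2.4, 1.9, 4.9]}

cent_a = sum(scores["A"]) / len(scores["A"])   # centroid of class A scores
cent_b = sum(scores["B"]) / len(scores["B"])   # centroid of class B scores
cutoff = (cent_a + cent_b) / 2.0               # midpoint between centroids

# Count objects falling on the correct side of the cut-off.
correct = (sum(s > cutoff for s in scores["A"])
           + sum(s <= cutoff for s in scores["B"]))
total = len(scores["A"]) + len(scores["B"])
pct = 100.0 * correct / total
```

In this toy set one class B object (score 4.9) lands on the wrong side of the cut-off, so 7 of 8 objects, or 87.5%, are correctly classified.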

Another parametric routine implements a discriminant function by the method commonly called linear discriminant function analysis. It is nearly identical to the linear Bayesian discriminant, except that instead of using the covariance matrix, the sum of cross-products matrix is used. Results obtained with the routine are ordinarily very similar to those obtained using the linear Bayes routine. The routine implemented as LDFA is a highly modified version of program BMD04M taken from the Biomedical Computer Programs Package (47). [Pg.118]

Another nonparametric routine develops a linear discriminant function through an iterative least squares approach (22). The function is minimized ... [Pg.119]

A linear discriminant function can be found using a linear programming approach (48,49). The objective function to be optimized consists of the fraction of the training set correctly classified. If two vertices have the same classification ability, then the vertex with the smaller sum of distances to misclassified points is taken as better. [Pg.119]

Another routine develops a decision tree of binary choices which, taken as a whole, can classify the members of the training set. The decision tree generated implements a piecewise linear discriminant function. The decision tree is developed by splitting the data set into two parts in an optimal way at each node. A node is considered terminal when no advantageous split can be made; a terminal node is labelled with the pattern class most represented among the patterns present at the node. [Pg.119]
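The optimal split at a single node can be sketched as an exhaustive search over candidate thresholds: try every midpoint between adjacent sorted values and keep the one that misclassifies the fewest training points when each side is labelled with its majority class. The one-variable training set below is hypothetical.

```python
# Hypothetical one-variable training set: (value, class) pairs.
train = [(0.5, "A"), (1.0, "A"), (1.4, "A"), (2.2, "B"), (2.9, "B"), (3.5, "B")]

def errors(side):
    """Misclassifications if this side is labelled with its majority class."""
    if not side:
        return 0
    majority = max(set(side), key=side.count)
    return sum(c != majority for c in side)

def best_split(data):
    """Return (threshold, error_count) for the best binary split at a node."""
    xs = sorted(v for v, _ in data)
    best = None
    for lo, hi in zip(xs, xs[1:]):          # midpoints between adjacent values
        t = (lo + hi) / 2.0
        e = (errors([c for v, c in data if v < t])
             + errors([c for v, c in data if v >= t]))
        if best is None or e < best[1]:
            best = (t, e)
    return best

threshold, err = best_split(train)
```

Applied recursively to each resulting subset, splits of this kind build up the piecewise linear boundary; here a single split at 1.8 already separates the two classes with no errors, so both children would be terminal nodes.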

These results show that pattern recognition can be used as an effective tool to characterize polycyclic aromatic hydrocarbon carcinogens. Using a set of only 28 molecular structure descriptors, linear discriminants can be found to correctly dichotomize 191 out of 200 randomly selected PAHs. This same set of 28 descriptors supports a linear discriminant function that has an average predictive ability of over ninety percent when subjected to randomized predictive ability tests. [Pg.122]

Linear discriminant analysis (LDA) [41] separates two data classes of feature vectors by constructing a hyperplane defined by a linear discriminant function ... [Pg.222]

A further simplification can be made to the Bayes classifier if the covariance matrices for both groups are known, or assumed, to be similar. This condition implies that the correlations between variables are independent of the group to which the objects belong. Extreme examples are illustrated in Figure 5. In such cases the groups are linearly separable and a linear discriminant function can be evaluated. [Pg.132]
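The reason equal covariances yield a linear function can be checked numerically: in the log-likelihood difference between the two Gaussian groups, the quadratic terms in x cancel, leaving a function linear in x. The one-dimensional means and shared variance below are made up.

```python
# Hypothetical class means and a shared (equal) variance, one dimension.
m_a, m_b, s2 = 1.0, 3.0, 0.5

def bayes_g(x):
    """Log-likelihood difference log p(x|A) - log p(x|B), equal variance.
    (The shared normalising constants cancel and are omitted.)"""
    return -((x - m_a) ** 2) / (2 * s2) + ((x - m_b) ** 2) / (2 * s2)

def linear_g(x):
    """The same discriminant after the x**2 terms cancel: linear in x."""
    return ((m_a - m_b) * x + (m_b ** 2 - m_a ** 2) / 2.0) / s2

# The two functions agree everywhere, so the decision boundary g(x) = 0
# is a single point here (a hyperplane in higher dimensions).
```

If the variances differed between the groups, the x² terms would not cancel and the boundary would be quadratic rather than linear, which is why the equal-covariance assumption matters.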

Table 4. Discriminant scores using the linear discriminant function as classifier (a), and the resulting confusion matrix (b)... [Pg.136]

The linear discriminant function is among the most commonly used classification techniques and is available in all the most popular statistical software packages. It should be borne in mind, however, that it is only a simplification of the Bayes classifier: it assumes that the variates are obtained from a multivariate normal distribution and that the groups have similar covariance matrices. If these conditions do not hold, then the linear discriminant function should be used with care and the results obtained subjected to careful analysis. [Pg.138]

As an approximation to the Bayes rule, the linear discriminant function provides the basis for the most common of the statistical classification schemes. [Pg.142]

Although the layout in Figure 13 correctly classifies the data by applying two linear discriminant functions to the pattern space, it is unable to learn from a training set and must be fully programmed before use, i.e. it must be manually set up before being employed. This situation arises because the... [Pg.149]

Least squares models, 39, 158
Linear combination, normalized, 65
Linear combination of variables, 64
Linear discriminant analysis, 134
Linear discriminant function, 132
Linear interpolation, 47
Linear regression, 156
Loadings, factor, 74
Lorentzian distribution, 14 [Pg.215]


See other pages where linear discriminant functions are mentioned: [Pg.424]    [Pg.110]    [Pg.53]    [Pg.414]    [Pg.327]    [Pg.268]    [Pg.48]    [Pg.196]    [Pg.760]    [Pg.53]    [Pg.236]    [Pg.240]    [Pg.240]    [Pg.212]    [Pg.276]    [Pg.167]    [Pg.65]    [Pg.330]    [Pg.118]    [Pg.119]    [Pg.132]    [Pg.479]    [Pg.54]    [Pg.91]    [Pg.101]    [Pg.101]
See also in source #XX -- [Pg.6]





© 2024 chempedia.info