
Linear discriminant classifier

Discriminant classifiers. The two most important discriminant classifiers for material analysis using spectroscopic imaging systems are the Fisher linear discriminant classifier (FLDC) and the quadratic discriminant classifier (QDC). Other classifiers, such as the classical linear discriminant classifier (LDC), have frequently exhibited inferior performance. [Pg.166]

Woodruff and co-workers introduced the expert system PAIRS [67], a program that is able to analyze IR spectra in the same manner as a spectroscopist would. Chalmers and co-workers [68] used an approach for the automated interpretation of Fourier transform Raman spectra of complex polymers. Andreev and Argirov developed the expert system EXPIRS [69] for the interpretation of IR spectra. EXPIRS provides a hierarchical organization of the characteristic groups that are recognized by peak detection in discrete frames. Penchev et al. [70] recently introduced a computer system that performs searches in spectral libraries and systematic analysis of mixture spectra. It is able to classify IR spectra with the aid of linear discriminant analysis, artificial neural networks, and the method of k-nearest neighbors. [Pg.530]

Most supervised pattern recognition procedures permit stepwise selection, i.e. selecting first the most important feature, then the second most important, and so on. One way to do this is by prediction using e.g. cross-validation (see next section): we first select the variable that best classifies objects of known classification that are not part of the training set, then the variable that most improves the classification already obtained with the first selected variable, etc. The results for the linear discriminant analysis of the EU/HYPER classification of Section 33.2.1 are that with all 5 or with 4 variables a selectivity of 91.4% is obtained, and with 3 or 2 variables 88.6% [2], as a measure of classification success. Selectivity is used here; it is applied in the sense of Chapter... [Pg.236]
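
The greedy procedure described above is easy to prototype. Below is a minimal sketch using scikit-learn's LDA and leave-one-out cross-validation; the function name forward_select and the stopping rule (a fixed number of features) are illustrative choices, not taken from the source.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def forward_select(X, y, n_features):
    """Greedily add the variable that most improves cross-validated accuracy.

    X is a samples-by-variables NumPy array, y the known class labels."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        def cv_acc(j):
            cols = selected + [j]
            return cross_val_score(LinearDiscriminantAnalysis(),
                                   X[:, cols], y, cv=LeaveOneOut()).mean()
        best = max(remaining, key=cv_acc)   # variable giving the best improvement
        selected.append(best)
        remaining.remove(best)
    return selected
```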

FIGURE 5.4 Linear discriminant scores d_j for group j by the Bayesian classification rule (Equation 5.2): m_j, mean vector of all objects in group j; S_p^(-1), inverse of the pooled covariance matrix (Equation 5.3); x, object vector (to be classified) defined by m variables; p_j, prior probability of group j. [Pg.214]
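
The caption's score can be written out directly. The sketch below assumes Equations 5.2 and 5.3 take the standard Bayes linear form d_j(x) = x'S_p⁻¹m_j − ½ m_j'S_p⁻¹m_j + ln p_j; since the book's equations are not reproduced here, this exact form is an assumption.

```python
import numpy as np

def pooled_covariance(groups):
    """Pooled within-group covariance S_p (assumed standard form of Eq. 5.3).

    groups is a list of samples-by-variables arrays, one per group."""
    n = sum(len(g) for g in groups)
    S = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups)
    return S / (n - len(groups))

def discriminant_score(x, m_j, Sp_inv, p_j):
    """Linear score d_j(x); assign x to the group with the largest score."""
    return x @ Sp_inv @ m_j - 0.5 * m_j @ Sp_inv @ m_j + np.log(p_j)
```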

S.J. Dixon and R.G. Brereton, Comparison of performance of five common classifiers represented as boundary methods: Euclidean distance to centroids, linear discriminant analysis, quadratic discriminant analysis, learning vector quantization and support vector machines, as dependent on data structure, Chemom. Intell. Lab. Syst., 95, 1-17 (2009). [Pg.437]

Many types of classifiers are based on linear discriminants of the form shown in (1). They differ with regard to how the weights are determined. The oldest form of linear discriminant is Fisher's linear discriminant. To compute the weights for the Fisher linear discriminant, one must estimate the correlation between all pairs of genes that were selected in the feature selection step. The study by Dudoit et al. indicated that Fisher's linear discriminant did not perform well unless the number of selected genes was small relative to the number of samples. The reason is that otherwise there are too many correlations to estimate, and the method tends to be unstable and to over-fit the data. [Pg.330]
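
A short sketch of the weight computation makes the instability argument concrete: the pooled covariance (which encodes all pairwise gene correlations) must be estimated and inverted, and with many genes and few samples it is singular. Function names are illustrative, and the pseudo-inverse below is a pragmatic stand-in, not part of Fisher's original method.

```python
import numpy as np

def fisher_weights(X0, X1):
    """Fisher weight vector w = Sw^-1 (m1 - m0) for two classes.

    X0, X1 are samples-by-genes arrays for the two classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance: every entry is a pairwise correlation
    # (scaled) that must be estimated from the training samples.
    Sw = ((len(X0) - 1) * np.cov(X0, rowvar=False)
          + (len(X1) - 1) * np.cov(X1, rowvar=False)) / (len(X0) + len(X1) - 2)
    # With more genes than samples Sw is singular; pinv regularizes the inverse.
    return np.linalg.pinv(Sw) @ (m1 - m0)
```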

... Na and K) employing different pattern recognition techniques [PCA, linear discriminant analysis (LDA) and ANNs]. The ANNs were trained by an error back-propagation algorithm and were found to be very efficient for classifying and discriminating food products. [Pg.273]

Linear discriminant analysis (LDA) is also a probabilistic classifier in the mold of Bayes algorithms, but it can be related closely to both regression and PCA techniques. A discriminant function is simply a function of the observed vector of variables (K) that leads to a classification rule. The likelihood ratio (above), for example, is an optimal discriminant for the two-class case. Hence, the classification rule can be stated as... [Pg.196]
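
For two Gaussian classes the likelihood-ratio rule can be sketched as follows; the shared-covariance assumption (which is what makes the rule linear, i.e. LDA-like) and all names are illustrative, not the book's notation.

```python
from scipy.stats import multivariate_normal

def likelihood_ratio_rule(x, m1, m2, S, p1=0.5, p2=0.5):
    """Assign x to class 1 if the prior-weighted likelihood ratio exceeds 1.

    m1, m2 are class mean vectors; S is the (shared) covariance matrix."""
    L1 = multivariate_normal.pdf(x, mean=m1, cov=S) * p1
    L2 = multivariate_normal.pdf(x, mean=m2, cov=S) * p2
    return 1 if L1 > L2 else 2
```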

The calculation of the linear discriminant function is presented in Table 4.25 and the values are plotted in Figure 4.30. It can be seen that objects 5, 6, 12 and 18 are not easy to classify. The centroids of each class in this new dataset using the linear discriminant function can be calculated, and the distance from these values could be calculated; however, this would result in a diagram comparable to Figure 4.27, missing the information obtained by taking two measurements. [Pg.239]

Calculate the centroids for each class, and hence the linear discriminant function given by (x̄_A − x̄_B) · C_AB⁻¹ · x_i for each object i. Represent this graphically. Suggest a cut-off value of this function which will discriminate most of the compounds. What is the percentage correctly classified... [Pg.265]
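
One possible reading of this exercise in code, assuming C_AB is the pooled variance-covariance matrix of the two classes and taking the midpoint between the centroid scores as the cut-off; the data names XA and XB are placeholders.

```python
import numpy as np

def discriminant_exercise(XA, XB):
    """Centroids, discriminant scores, midpoint cut-off, and % correct."""
    xA, xB = XA.mean(axis=0), XB.mean(axis=0)          # class centroids
    # Pooled variance-covariance matrix (assumed meaning of C_AB).
    CAB = ((len(XA) - 1) * np.cov(XA, rowvar=False)
           + (len(XB) - 1) * np.cov(XB, rowvar=False)) / (len(XA) + len(XB) - 2)
    w = (xA - xB) @ np.linalg.inv(CAB)                 # discriminant direction
    dA, dB = XA @ w, XB @ w                            # score for each object
    cutoff = 0.5 * (w @ xA + w @ xB)                   # midpoint between centroid scores
    correct = np.sum(dA > cutoff) + np.sum(dB <= cutoff)
    return 100.0 * correct / (len(XA) + len(XB)), cutoff
```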

The authors also combined metabolomics results with results obtained from AFP determinations. The model was created using linear discriminant analysis; principal component analysis was also carried out. With the created model, it was possible to detect metabolites of potential diagnostic value. Moreover, analysis of metabolomic profiles decreased the number of patients that were incorrectly classified with the use of the AFP marker [21]. [Pg.251]

The most popular classification methods are Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Regularized Discriminant Analysis (RDA), k-Nearest Neighbours (KNN), classification tree methods (such as CART), Soft Independent Modeling of Class Analogy (SIMCA), potential function classifiers (PFC), the Nearest Mean Classifier (NMC) and the Weighted Nearest Mean Classifier (WNMC). Moreover, several classification methods can be found among the artificial neural networks. [Pg.60]

A linear discriminant function can be found using a linear programming approach (48,49). The objective function to be optimized consists of the fraction of the training set correctly classified. If two vertices have the same classification ability, then the vertex with the smaller sum of distances to misclassified points is taken as better. [Pg.119]
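
The comparison rule for candidate weight vectors ("vertices") can be captured in a small scoring function; the exact distance normalization below is a guess at what the cited method uses, and the tuple ordering simply mirrors the tie-breaking described above.

```python
import numpy as np

def vertex_quality(w, X, y):
    """Score a candidate weight vector w; y holds labels in {-1, +1}.

    Returns (fraction correct, -sum of distances of misclassified points),
    so that ordinary tuple comparison prefers more correct classifications
    and, on ties, the smaller misclassification distance."""
    margins = y * (X @ w)                       # positive = correctly classified
    frac_correct = np.mean(margins > 0)
    miss_dist = np.abs(margins[margins <= 0]).sum() / np.linalg.norm(w)
    return (frac_correct, -miss_dist)

# Usage: best = max(candidate_vertices, key=lambda w: vertex_quality(w, X, y))
```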

Another routine develops a decision tree of binary choices which, taken as a whole, can classify the members of the training set. The decision tree generated implements a piecewise linear discriminant function. The decision tree is developed by splitting the data set into two parts in an optimal way at each node. A node is considered to be terminal when no advantageous split can be made; a terminal node is labelled to associate it with the pattern class most represented among the patterns present at the node. [Pg.119]
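
A toy version of such a routine, sketched under the assumption of a greedy Gini-based split criterion (the source does not specify the splitting measure) and integer-coded class labels:

```python
import numpy as np
from collections import Counter

def grow(X, y, min_gain=1e-3):
    """Recursively split (X, y); return either a label or (var, threshold, left, right)."""
    def impurity(labels):                       # Gini impurity of a node
        p = np.bincount(labels) / len(labels)
        return 1.0 - np.sum(p ** 2)
    best = None
    for j in range(X.shape[1]):                 # try every variable/threshold pair
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            gain = impurity(y) - (left.mean() * impurity(y[left])
                                  + (~left).mean() * impurity(y[~left]))
            if best is None or gain > best[0]:
                best = (gain, j, t, left)
    if best is None or best[0] < min_gain:      # terminal node: majority label
        return Counter(y).most_common(1)[0][0]
    _, j, t, left = best
    return (j, t, grow(X[left], y[left]), grow(X[~left], y[~left]))
```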

A further simplification can be made to the Bayes classifier if the covariance matrices for both groups are known, or assumed, to be similar. This condition implies that the correlations between variables are independent of the group to which the objects belong. Extreme examples are illustrated in Figure 5. In such cases the groups are linearly separable and a linear discriminant function can be evaluated. [Pg.132]
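
To see why equal covariance matrices make the groups linearly separable, note that the quadratic terms of the Gaussian log-likelihood ratio cancel (the standard derivation; the notation is not the source's):

$$\ln\frac{p(\mathbf{x}\mid 1)}{p(\mathbf{x}\mid 2)}
= (\mathbf{m}_1-\mathbf{m}_2)^{\mathsf T}\boldsymbol{\Sigma}^{-1}\mathbf{x}
\;-\;\tfrac{1}{2}\left(\mathbf{m}_1^{\mathsf T}\boldsymbol{\Sigma}^{-1}\mathbf{m}_1
-\mathbf{m}_2^{\mathsf T}\boldsymbol{\Sigma}^{-1}\mathbf{m}_2\right),$$

which is linear in x, so the decision boundary is a hyperplane.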

Table 4 Discriminant scores using the linear discriminant function as classifier (a), and the resulting confusion matrix (b). [Pg.136]



