
Fisher's Discriminant Analysis

Figure 5.6 visualizes the idea of Fisher discriminant analysis for two groups in the two-dimensional case. The group centers (filled symbols) are projected onto the discriminant variable, giving the projected means y1 and y2; the mean of the two, y0, is the classification threshold. [Pg.216]
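As a minimal sketch of the construction the figure describes (our own NumPy illustration, not code from the source; the synthetic groups and all variable names are assumptions), the two-group Fisher direction is w = Sw⁻¹(m1 − m2), and the threshold y0 is the midpoint of the projected group centers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic 2-D groups, stand-ins for the groups in Figure 5.6.
X1 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X2 = rng.normal([2.0, 1.5], 0.5, size=(50, 2))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Pooled within-group scatter matrix.
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Fisher direction: w proportional to Sw^{-1} (m1 - m2).
w = np.linalg.solve(Sw, m1 - m2)

# Projected group centers and the midpoint threshold y0.
y1, y2 = w @ m1, w @ m2
y0 = 0.5 * (y1 + y2)

# Classify a new point: group 1 if its projection falls on the same
# side of y0 as the projected center of group 1.
x_new = np.array([1.0, 0.5])
group = 1 if (w @ x_new - y0) * (y1 - y0) > 0 else 2
print(group)
```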

Most traditional approaches to classification in science are called discriminant analysis and are often also called forms of "hard modelling". The majority of statistically based software packages such as SAS, BMDP and SPSS contain substantial numbers of procedures, referred to by various names such as linear (or Fisher) discriminant analysis and canonical variates analysis. There is a substantial statistical literature in this area. [Pg.233]

T. Van Gestel, J. Suykens, G. Lanckriet, A. Lambrechts, B. De Moor, and J. Vandewalle. Bayesian framework for least squares support vector machine classifiers, Gaussian processes, and kernel Fisher discriminant analysis. Neural Computation, 15:1115-1148, 2002. [Pg.300]

Supervised and unsupervised classification, for example PCA, K-means and fuzzy clustering, linear discriminant analysis (LDA), partial least squares discriminant analysis (PLS-DA), Fisher discriminant analysis (FDA), and artificial neural networks (ANN). [Pg.361]

LS-SVMlab, http://www.esat.kuleuven.ac.be/sista/lssvmlab/. LS-SVMlab, by Suykens, is a MATLAB implementation of least-squares support vector machines (LS-SVMs), a reformulation of the standard SVM that leads to solving linear KKT systems. LS-SVM primal-dual formulations have been formulated for kernel PCA, kernel CCA, and kernel PLS, thereby extending the class of primal-dual kernel machines. Links between kernel versions of classic pattern recognition algorithms such as kernel Fisher discriminant analysis and extensions to unsupervised learning, recurrent networks, and control are available. [Pg.390]
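To make the "linear KKT system" remark concrete, here is a rough Python reconstruction of the standard LS-SVM classifier formulation (our own sketch, not code from LS-SVMlab; the RBF kernel choice and the `gamma` and `sigma` hyperparameters are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix between row-sample matrices A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM classifier's dual KKT system, which is linear:
    #   [ 0      y^T          ] [b]       [0]
    #   [ y   Omega + I/gamma ] [alpha] = [1]
    # where Omega_ij = y_i * y_j * K(x_i, x_j) and y_i in {-1, +1}.
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, support values alpha

def lssvm_predict(X, y, alpha, b, X_new, sigma=1.0):
    # Decision function: sign( sum_i alpha_i y_i K(x_i, x) + b ).
    K = rbf_kernel(X_new, X, sigma)
    return np.sign(K @ (alpha * y) + b)
```

Because every training point becomes a support vector, training reduces to one dense linear solve instead of the quadratic program of a standard SVM, which is the reformulation the toolbox description refers to.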

Kennedy JW, Kaiser GC, Fisher LD, et al. Multivariate discriminant analysis of the clinical and angiographic predictors of operative mortality from the Collaborative Study in Coronary Artery Surgery (CASS). J Thorac Cardiovasc Surg 1980;80:876-887. [Pg.84]

Figure 15-5 Fisher's linear discriminant analysis determines the line, plane, or hyperplane that best separates two populations, based on their mean values and the variance. In the graph, a line bisecting the two curves in the lower left corner provides this best discriminator.
Fisher suggested transforming the multivariate observations x to another coordinate system that enhances the separation of the samples belonging to each class πi [74]. Fisher's discriminant analysis (FDA) is optimal in terms of maximizing the separation among the set of classes. Suppose that there is a set of n (= n1 + n2 + ... + ng) m-dimensional (number of process variables) samples x1, ..., xn belonging to classes πi, i = 1, ..., g. The total scatter of the data points (St) consists of two types of scatter, within-class scatter Sw and between-class scatter Sb. The objective of the transformation proposed by Fisher is to maximize Sb while minimizing Sw. Fisher's approach does not require that the populations have Normal distributions, but it implicitly assumes that the population covariance matrices are equal, because a pooled estimate of the common covariance matrix is used (Eq. 3.45). [Pg.53]
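The scatter-matrix construction can be sketched in a few lines of NumPy/SciPy (a generic illustration of the standard multi-class formulation, not code from the cited source): form Sw and Sb from the class means, then solve the generalized eigenproblem Sb w = λ Sw w for the discriminant directions.

```python
import numpy as np
from scipy.linalg import eigh

def fda_directions(X, labels):
    """Fisher discriminant directions for samples X (n x m) and class labels."""
    classes = np.unique(labels)
    m_total = X.mean(axis=0)
    m = X.shape[1]
    Sw = np.zeros((m, m))  # within-class scatter
    Sb = np.zeros((m, m))  # between-class scatter
    for c in classes:
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - m_total).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    # Generalized eigenproblem Sb w = lambda Sw w (Sw must be positive
    # definite); eigh returns ascending eigenvalues, so reverse the order
    # to put the leading discriminant directions first.
    evals, evecs = eigh(Sb, Sw)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]  # at most g-1 nonzero eigenvalues
```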

If one does not wish to bias the boundaries of the NO (normal operation) region of a system, kernel density estimation (KDE) can be used to find the contours underneath the joint probability density of the PC pair, starting from the one that captures most of the information. Below, a brief review of KDE is presented first; it will be used as part of the robust monitoring technique discussed in Section 7.7. Then, the use of kernel-based methods for formulating nonlinear Fisher's discriminant analysis (FDA) is discussed. [Pg.64]
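A small illustration of the KDE step (a generic sketch, not the book's Section 7.7 procedure; the synthetic scores and the 95% contour choice are assumptions): estimate the joint density of one PC-score pair with `scipy.stats.gaussian_kde` and take a density contour as the boundary of the normal region.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Stand-in scores for one PC pair (in practice, two columns of the
# PCA score matrix from normal-operation data).
rng = np.random.default_rng(1)
scores = rng.multivariate_normal([0, 0], [[2.0, 0.6], [0.6, 1.0]], size=500)

kde = gaussian_kde(scores.T)  # gaussian_kde expects shape (dims, n)

# Density at each training point; the empirical 5th percentile gives a
# contour level enclosing roughly 95% of the normal-operation data.
dens = kde(scores.T)
level = np.percentile(dens, 5)

def inside_no_region(point):
    # True if the point lies within the estimated normal-operation contour.
    return kde(np.asarray(point, dtype=float).reshape(2, 1))[0] >= level

print(inside_no_region([0.0, 0.0]), inside_no_region([6.0, 6.0]))
```

Because the contour follows the estimated density rather than an assumed Gaussian ellipse, the boundary of the region is not biased toward any presumed distribution shape, which is the point made above.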

L.H. Chiang, M.E. Kotanchek, and A.K. Kordon. Fault diagnosis based on Fisher's discriminant analysis and support vector machines. Comput. Chem. Eng., 28(8):1389-1401, 2004. [Pg.280]

Discriminant plots were obtained for the adaptive wavelet coefficients which produced the results in Table 2. Although the classifier used in the AWA was BLDA, it was decided to supply the coefficients available upon termination of the AWA to Fisher's linear discriminant analysis, so that the spatial separation between the classes could be visualized. The discriminant plots are produced using the testing data only. There is a good deal of separation for the seagrass data (Fig. 5), while for the paraxylene data (Fig. 6) there is some overlap between the objects of classes 1 and 3. Quite clearly, the butanol data (Fig. 7) pose a challenge in discriminating between the two classes. [Pg.447]

Fig. 5 Discriminant plots for the seagrass data produced by supplying the coefficients resulting from the AWA to Fisher's linear discriminant analysis.
A more formal way of finding a decision boundary between different classes is based on linear discriminant analysis (LDA), as introduced by Fisher and Mahalanobis. The boundary or hyperplane is calculated such that the variance between the classes is maximized and the variance within the individual classes is minimized. There are several ways to arrive at the decision hyperplanes. In... [Pg.186]
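One common way to obtain such a hyperplane in practice is via an off-the-shelf LDA implementation (a minimal sketch using scikit-learn, which is our own choice of library, not one named by the source; the synthetic data are assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

lda = LinearDiscriminantAnalysis().fit(X, y)

# coef_ and intercept_ define the separating hyperplane w.x + b = 0,
# placed to maximize between-class relative to within-class variance.
print(lda.coef_, lda.intercept_)
print(lda.predict([[1.5, 1.5]]))
```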

There are a number of classification methods for analyzing data, including artificial neural networks (ANNs; see Beale and Jackson, 1990), k-nearest-neighbor (k-NN) methods, decision trees, support vector machines (SVMs), and Fisher's linear discriminant analysis (LDA). Among these methods, a decision tree is a flow-chart-like tree structure. An intermediate node denotes a test on a predictive attribute, and a branch represents an outcome of the test. A terminal node denotes class distribution. [Pg.129]
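The flow-chart structure described above can be seen directly by printing a fitted tree (a brief sketch using scikit-learn on the Iris data; the library, dataset, and depth limit are our illustrative choices, not from the source):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Each intermediate node tests one predictive attribute, each branch is
# a test outcome, and each leaf reports a class, mirroring the
# flow-chart description above.
print(export_text(tree, feature_names=list(iris.feature_names)))
```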

