
Linear discriminant function analysis

Another parametric routine implements a discriminant function by the method commonly called linear discriminant function analysis. It is nearly identical to the linear Bayesian discriminant, except that instead of using the covariance matrix, the sum of cross-products matrix is used. Results obtained with the routine are ordinarily very similar to those obtained using the linear Bayes routine. The routine implemented as LDFA is a highly modified version of program BMD04M taken from the Biomedical Computer Programs Package (47). [Pg.118]

Discriminant analysis (DA) performs sample classification with an a priori hypothesis. This hypothesis is based on a previously determined TCA or other CA protocols. DA is also called "discriminant function analysis", and its natural extension is MDA (multiple discriminant analysis), which is sometimes named "discriminant factor analysis" or CDA (canonical discriminant analysis). Among these types of analysis, linear discriminant analysis (LDA) has been widely used to bring out differences among classes of samples. Another classification method is QDA (quadratic discriminant analysis), an extension of LDA (Frank and Friedman, 1989); RDA (regularized discriminant analysis), a compromise between LDA and QDA, works better with varied class distributions and with high-dimensional data (Friedman, 1989). [Pg.94]
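The relationships among these methods can be sketched with scikit-learn; the following is a minimal illustration on synthetic data, with shrinkage LDA standing in for RDA (the data set, parameter values and that substitution are assumptions of this sketch, not taken from the cited works):

```python
# Minimal sketch comparing LDA, QDA and a shrinkage-regularized LDA
# (loosely analogous to Friedman's RDA) on synthetic, high-dimensional data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

# Hypothetical data set: 2 classes, more features than is comfortable for QDA.
X, y = make_classification(n_samples=120, n_features=30, n_informative=5,
                           random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    # reg_param stabilizes the per-class covariance estimates
    "QDA": QuadraticDiscriminantAnalysis(reg_param=0.1),
    # shrinkage LDA: a compromise between pooled and identity covariance
    "shrinkage LDA": LinearDiscriminantAnalysis(solver="lsqr",
                                                shrinkage="auto"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```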

Initially, an optimised model was constructed from the data collected as outlined above, by building a principal component (PC)-fed linear discriminant analysis (LDA) model (described elsewhere) [7, 89]. The linear discriminant function was calculated for maximal group separation, and each individual spectral measurement was projected onto the model (using leave-one-out cross-validation) to obtain a score. The scores for each individual spectrum projected onto the model, colour coded for consensus pathology, are shown in Fig. 13.3. The simulation experiments used this optimised model as a baseline against which to compare the performance of models with spectral perturbations applied. The optimised model achieved an overall training accuracy of 93% for the three groups. [Pg.324]
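A minimal sketch of such a PC-fed LDA workflow with leave-one-out cross-validation is given below; the simulated spectra, class offsets and number of principal components are illustrative assumptions and do not reproduce the cited study:

```python
# PC-fed LDA with leave-one-out cross-validation on simulated "spectra".
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 30, 200           # hypothetical sizes
X = rng.normal(size=(3 * n_per_class, n_wavenumbers))
X[:n_per_class] += 0.5                          # crude class offsets
X[n_per_class:2 * n_per_class] -= 0.5
y = np.repeat([0, 1, 2], n_per_class)           # three consensus groups

# PCA feeds a reduced set of scores into the LDA, as in the text above.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```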

Chemometrics is a branch of science and technology dealing with the extraction of useful information from multidimensional measurement data using statistics and mathematics. It is applied in numerous scientific disciplines, including the analysis of food [313-315]. The most common techniques applied to multidimensional analysis include principal components analysis (PCA), factor analysis (FA), linear discriminant analysis (LDA), canonical discriminant function analysis (DA), cluster analysis (CA) and artificial neural networks (ANN). [Pg.220]

Linear discriminant analysis (LDA) [41] separates two classes of feature vectors by constructing a hyperplane defined by a linear discriminant function ... [Pg.222]
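The defining equation is omitted from the excerpt; in standard notation (assumed here, not quoted from ref. [41]), such a hyperplane discriminant can be written as

```latex
D(\mathbf{x}) = \mathbf{w}^{\mathsf{T}}\mathbf{x} + w_0,
\qquad
\text{assign } \mathbf{x} \text{ to class 1 if } D(\mathbf{x}) > 0,
\text{ to class 2 otherwise,}
```

where w is the weight vector normal to the hyperplane and w0 is its offset.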

The linear discriminant function is one of the most commonly used classification techniques, and it is available in all the most popular statistical software packages. It should be borne in mind, however, that it is a simplification of the Bayes classifier: it assumes that the variates are drawn from a multivariate normal distribution and that the groups have similar covariance matrices. If these conditions do not hold, the linear discriminant function should be used with care and the results subjected to careful scrutiny. [Pg.138]
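Under those assumptions (multivariate normal classes sharing a pooled covariance matrix), the Bayes rule reduces to linear scores; a standard form, supplied here for clarity rather than quoted from the source, is

```latex
g_k(\mathbf{x}) = \boldsymbol{\mu}_k^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}\mathbf{x}
- \tfrac{1}{2}\,\boldsymbol{\mu}_k^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_k
+ \ln \pi_k,
```

where μk and πk are the mean and prior probability of class k, Σ is the pooled covariance matrix, and x is assigned to the class with the largest gk(x).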


Another classification method is discriminant function analysis which, instead of using regression as the mathematical framework, is based on the same principle as MANOVA. Whereas MANOVA deals with whether a number of groups differ significantly with respect to a number of dependent variables, discriminant function analysis deals with whether a linear combination of predictor variables can be used to predict group membership. [Pg.384]

The starting point of linear discriminant analysis (LDA) is to find a linear discriminant function (LDF), Y, which is a linear combination of the original variables X1, X2, etc. ... [Pg.223]
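The expression itself is not reproduced in the excerpt; the usual form (standard notation assumed) is

```latex
Y = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n,
```

with the coefficients a_i chosen so that Y separates the groups as well as possible.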

The statistical theory of discriminant analysis also defines a linear discriminant function, very similarly to the Mahalanobis distance. These functions have characteristics that are of interest to NIR spectroscopists: the linear discriminant functions are similar in form to regression equations, so that... [Pg.314]

However, the current implementations of the discriminant analysis approach to qualitative analysis via NIR spectroscopy use Mahalanobis distances rather than linear discriminant functions because of another characteristic: linear discriminant functions do not permit as straightforward a means of detecting outliers, and samples not in the training set, as Mahalanobis distances do, in addition to other advantages that will be considered further on. [Pg.315]
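The outlier-handling advantage can be illustrated with a minimal sketch (synthetic data; the threshold value and class layout are assumptions of this example, not of the source): a sample whose Mahalanobis distance to every training class exceeds a cutoff is flagged rather than forced into the nearest class.

```python
# Mahalanobis-distance classification with a simple outlier test.
import numpy as np

def mahalanobis_sq(x, mean, cov_inv):
    # squared Mahalanobis distance of x to a class centre
    d = x - mean
    return float(d @ cov_inv @ d)

def classify(x, class_means, pooled_cov_inv, threshold=9.0):
    # threshold is a hypothetical cutoff (roughly 3 "standard distances")
    d2 = {k: mahalanobis_sq(x, m, pooled_cov_inv)
          for k, m in class_means.items()}
    best = min(d2, key=d2.get)
    return ("outlier", d2[best]) if d2[best] > threshold else (best, d2[best])

rng = np.random.default_rng(1)
A = rng.normal([0, 0], 1.0, size=(50, 2))       # hypothetical class A
B = rng.normal([4, 4], 1.0, size=(50, 2))       # hypothetical class B
pooled = np.cov(np.vstack([A - A.mean(0), B - B.mean(0)]).T)
cov_inv = np.linalg.inv(pooled)
means = {"A": A.mean(0), "B": B.mean(0)}

print(classify(np.array([0.3, -0.2]), means, cov_inv))    # -> class A
print(classify(np.array([20.0, -15.0]), means, cov_inv))  # -> outlier
```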

Statistical pattern recognition is based on the statistical nature of signals, and extracted features are represented as probability density functions (Schalkoff, 1992). It therefore requires knowledge of the a priori probabilities of measurements and features. Statistical approaches include linear discriminant functions, Bayesian functions and cluster analysis, and may be unsupervised or supervised. Supervised classifiers require a set of exemplars for each class to be recognized; these exemplars are used to train the system. Unsupervised learning, on the other hand, does not require an exemplar set. [Pg.90]

Discriminant analysis is a supervised learning technique which uses classified dependent data. Here, the dependent data (y values) are not on a continuous scale but are divided into distinct classes. There are often just two classes (e.g. active/inactive, soluble/not soluble, yes/no), but more than two is also possible (e.g. high/medium/low, 1/2/3/4). The simplest situation involves two variables and two classes, and the aim is to find a straight line that best separates the data into its classes (Figure 12.37). With more than two variables, the line becomes a hyperplane in the multidimensional variable space. Discriminant analysis is characterised by a discriminant function, which in the particular case of linear discriminant analysis (the most popular variant) is written as a linear combination of the independent variables ... [Pg.719]
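For the two-variable, two-class case described above, the separating direction can be computed directly from the class means and the pooled within-class scatter; the following minimal sketch uses simulated data (class names, positions and sizes are hypothetical):

```python
# Fisher's two-class linear discriminant: w = Sw^{-1} (mu1 - mu2); the
# separating straight line is the set of points with a constant score.
import numpy as np

rng = np.random.default_rng(2)
active = rng.normal([1.0, 2.0], 0.5, size=(40, 2))     # hypothetical class 1
inactive = rng.normal([3.0, 0.5], 0.5, size=(40, 2))   # hypothetical class 2

mu1, mu2 = active.mean(0), inactive.mean(0)
# pooled within-class scatter matrix
Sw = (np.cov(active.T) * (len(active) - 1)
      + np.cov(inactive.T) * (len(inactive) - 1))
w = np.linalg.solve(Sw, mu1 - mu2)        # discriminant weights
c = w @ (mu1 + mu2) / 2                   # cutoff at the midpoint of the means

truth = np.array(["active"] * 40 + ["inactive"] * 40)
pred = np.where(np.vstack([active, inactive]) @ w > c, "active", "inactive")
print(f"training accuracy: {(pred == truth).mean():.2f}")
```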

We also make a distinction between parametric and non-parametric techniques. In parametric techniques such as linear discriminant analysis, UNEQ and SIMCA, statistical parameters of the distribution of the objects are used in the derivation of the decision function (almost always a multivariate normal distribution is assumed). [Pg.212]

In the method of linear discriminant analysis, one therefore seeks a linear function of the variables, D, which maximizes the ratio between the two variances. Geometrically, this means that we look for a line through the cloud of points such that the projections of the points of the two groups onto it are separated as much as possible. The approach is comparable to that of principal components analysis, where one seeks the line that best explains the variation in the data (see Chapter 17). The principal component line and the discriminant function often more or less coincide (as is the case in Fig. 33.8a), but this is not necessarily so, as shown in Fig. 33.8b. [Pg.216]
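Formally, writing S_B and S_W for the between-group and within-group scatter matrices, the sought direction w maximizes the Fisher criterion (standard notation, supplied here for clarity):

```latex
J(\mathbf{w}) =
\frac{\mathbf{w}^{\mathsf{T}}\mathbf{S}_B\,\mathbf{w}}
     {\mathbf{w}^{\mathsf{T}}\mathbf{S}_W\,\mathbf{w}}.
```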

Linear discriminant analysis (LDA) is also a probabilistic classifier in the mold of Bayes algorithms, but it can be related closely to both regression and PCA techniques. A discriminant function is simply a function of the observed vector of variables (K) that leads to a classification rule. The likelihood ratio (above), for example, is an optimal discriminant for the two-class case. Hence, the classification rule can be stated as... [Pg.196]
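The rule itself is not shown in the excerpt; in the usual notation (assumed here), the two-class likelihood-ratio rule reads

```latex
\Lambda(\mathbf{x}) =
\frac{p(\mathbf{x}\mid\omega_1)}{p(\mathbf{x}\mid\omega_2)},
\qquad
\text{assign } \mathbf{x} \text{ to } \omega_1
\text{ if } \Lambda(\mathbf{x}) > \pi_2/\pi_1,
\text{ else to } \omega_2,
```

where π1 and π2 are the prior class probabilities.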

Multivariate statistical analysis using classes of variables and calculating discriminant functions as linear combinations of the variables that maximize the inter-class variance and minimize the intra-class variance. Volume 2(2). [Pg.387]

In discriminant analysis, in a manner similar to factor analysis, new synthetic features are created as linear combinations of the original features; these combinations should best reflect the differences between the classes, in contrast to the variances within the classes. The new features are called discriminant functions. Discriminant analysis is based on the same matrices B and W as above. The groups or classes of data tested above are modeled with the aim of reclassifying the given objects with a low error risk and of classifying ("discriminating") other objects using the model functions. [Pg.184]
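In standard notation (supplied here; the excerpt does not show the source's formulas), the discriminant functions are obtained from the eigenvectors of W⁻¹B:

```latex
\mathbf{W}^{-1}\mathbf{B}\,\mathbf{a}_j = \lambda_j\,\mathbf{a}_j,
\qquad
D_j(\mathbf{x}) = \mathbf{a}_j^{\mathsf{T}}\mathbf{x},
```

with the eigenvalues λj ranking the discriminant functions by their separating power.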

The result from cluster analysis presented in Fig. 9-2 is subjected to MVDA (for the mathematical fundamentals see Section 5.6 or [AHRENS and LAUTER, 1981]). The principle of MVDA is the separation of predefined classes of objects (sampling points). With simultaneous consideration of all the features observed (heavy metal content), the variance of the discriminant functions is maximized between the classes and minimized within them. The classification of new objects into the a priori classes, or the reclassification of the learning data set, is carried out using the values of the discriminant function. These values represent linear combinations of the optimum separating set of the original features. The result of the reclassification is presented as follows ... [Pg.323]
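How such a reclassification table can be produced is sketched below with simulated data (the study's heavy-metal measurements and results are not reproduced; class sizes and feature count are hypothetical):

```python
# Reclassifying a learning set with LDA and tabulating the result as a
# confusion matrix, in the spirit of MVDA reclassification.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
# three hypothetical sampling-point classes, four "heavy metal" features
X = np.vstack([rng.normal(m, 1.0, size=(25, 4)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 25)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(confusion_matrix(y, lda.predict(X)))  # rows: true class, cols: assigned
```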

The principle of multivariate analysis of variance and discriminant analysis (MVDA) consists of testing the differences between a priori classes (MANOVA) and achieving their maximum separation by modeling (MDA). The variance between the classes is maximized and the variance within the classes is minimized by simultaneous consideration of all observed features. The classification of new objects into the a priori classes, i.e. the reclassification of the learning data set of objects, takes place according to the values of the discriminant functions. These discriminant functions are linear combinations of the optimum set of original features for class separation. The mathematical fundamentals of MVDA are explained in Section 5.6. [Pg.332]

