
Linear discriminant score

Maximizing the posterior probabilities in the case of multivariate normal densities results in quadratic or linear discriminant rules. However, the rules are linear if we use the additional assumption that the covariance matrices of all groups are equal, i.e., Σ₁ = Σ₂ = … = Σₖ = Σ. In this case, the classification rule is based on linear discriminant scores dj for groups j... [Pg.212]
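The excerpt above breaks off before the formula. Under the equal-covariance assumption, the linear discriminant score commonly takes the following form, written here as a sketch in the notation of Figure 5.4 below (mean vector mj, pooled covariance matrix Sp, prior probability pj); the exact statement of Equation 5.2 in the source may differ:

```latex
d_j(\mathbf{x}) = \ln p_j + \mathbf{m}_j^{\top}\mathbf{S}_p^{-1}\mathbf{x}
                - \tfrac{1}{2}\,\mathbf{m}_j^{\top}\mathbf{S}_p^{-1}\mathbf{m}_j
```

An object x is then assigned to the group with the largest score dj(x).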

FIGURE 5.4 Linear discriminant scores dj for group j by the Bayesian classification rule (Equation 5.2): mj, mean vector of all objects in group j; Sp⁻¹, inverse of the pooled covariance matrix (Equation 5.3); x, object vector (to be classified) defined by m variables; pj, prior probability of group j. [Pg.214]

Fig. 13.3. Scatter plot of linear discriminant scores for each spectrum in the biopsy targeting model when tested against all the others, colour coded for consensus pathology...
Since the second and third terms are independent of i, they are the same for all di(x) and can be ignored in classification. Since the remaining terms consist of a constant for each i (ln pᵢ − ½ μᵢ′Σ⁻¹μᵢ) and a linear combination of the components of x, a linear discriminant score is defined as... [Pg.53]
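As a concrete illustration, here is a minimal NumPy sketch of these linear discriminant scores; the toy data, group means, pooled covariance estimate, and priors are all placeholder assumptions, not taken from the source:

```python
import numpy as np

# Toy two-group data (placeholder values, not from the source)
rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0], 1.0, size=(20, 2))   # group 1 objects
X2 = rng.normal([2.0, 1.0], 1.0, size=(25, 2))   # group 2 objects
groups = [X1, X2]
n_total = sum(len(g) for g in groups)

# Pooled covariance matrix (equal-covariance assumption) and its inverse
Sp = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups) / (n_total - len(groups))
Sp_inv = np.linalg.inv(Sp)

# Prior probabilities estimated from group sizes
priors = [len(g) / n_total for g in groups]

def discriminant_scores(x):
    """d_i(x) = ln p_i + m_i' Sp^-1 x - 0.5 m_i' Sp^-1 m_i for each group i."""
    scores = []
    for g, p in zip(groups, priors):
        m = g.mean(axis=0)
        scores.append(np.log(p) + m @ Sp_inv @ x - 0.5 * m @ Sp_inv @ m)
    return np.array(scores)

x_new = np.array([1.0, 0.5])                      # object to classify
d = discriminant_scores(x_new)
print(d, "-> assigned to group", d.argmax() + 1)  # largest score wins
```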

Initially an optimised model was constructed using the data collected as outlined above by constructing a principal component (PC)-fed linear discriminant analysis (LDA) model (described elsewhere) [7, 89], The linear discriminant function was calculated for maximal group separation and each individual spectral measurement was projected onto the model (using leave-one-out cross-validation) to obtain a score. The scores for each individual spectrum projected onto the model and colour coded for consensus pathology are shown in Fig. 13.3. The simulation experiments used this optimised model as a baseline to compare performance of models with spectral perturbations applied to them. The optimised model training performance achieved 93% accuracy overall for the three groups. [Pg.324]
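A minimal sketch of such a PC-fed LDA with leave-one-out cross-validation, using scikit-learn; the synthetic "spectra", the three-group labels, and the choice of ten principal components are placeholder assumptions, not the published model:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic "spectra": 60 samples x 200 wavelengths, three groups (placeholders)
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))
X[20:40] += 0.5          # shift group 2
X[40:] -= 0.5            # shift group 3
y = np.repeat([0, 1, 2], 20)

# PC-fed LDA: spectra are compressed to 10 PCs, then LDA separates the groups
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())

# Leave-one-out cross-validation: each spectrum is scored by a model
# trained on all the others, mirroring the procedure described above
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"LOO-CV accuracy: {acc:.2%}")
```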

A partial least squares (PLS) regression model describes the dependences between two blocks of variables, e.g. sensor responses and time variables. If the X matrix represents the sensor responses and the Y matrix represents time, the X and Y matrices can each be approximated by a few orthogonal score vectors. These components are then rotated in order to get as good a prediction of the y variables as possible [25]. Linear discriminant analysis (LDA) is among the most used classification techniques. The method maximises the variance between... [Pg.759]
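To illustrate the two-block idea, a minimal PLS sketch with scikit-learn; the simulated sensor-response matrix and time vector are placeholders, not data from the source:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Simulated sensor responses X (50 samples x 8 sensors) drifting with time t
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 50)                    # time block (y)
X = np.outer(t, rng.normal(size=8)) + rng.normal(scale=0.3, size=(50, 8))

# Two latent components approximate the X block by orthogonal score vectors
pls = PLSRegression(n_components=2).fit(X, t)
scores = pls.transform(X)                         # X-block scores, shape (50, 2)
print("R^2 for predicting time:", pls.score(X, t))
```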

In supervised pattern recognition, a major aim is to define the distance of an object from the centre of a class. There are two principal uses of statistical distances. The first is to obtain a measurement analogous to a score, often called the linear discriminant function, first proposed by the statistician R. A. Fisher. This differs from the distance above in that it is a single number if there are only two classes. It is analogous to the distance along line 2 in Figure 4.26, but defined by... [Pg.237]
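For two classes, Fisher's linear discriminant function reduces each object to a single number. A minimal sketch on placeholder data, taking the weight vector in the usual textbook form w = Sp⁻¹(mA − mB); the data and cutoff rule are illustrative assumptions:

```python
import numpy as np

# Placeholder two-class data
rng = np.random.default_rng(3)
A = rng.normal([0.0, 0.0], 1.0, size=(30, 2))
B = rng.normal([2.5, 1.0], 1.0, size=(30, 2))

mA, mB = A.mean(axis=0), B.mean(axis=0)
n = len(A) + len(B)
Sp = ((len(A) - 1) * np.cov(A, rowvar=False) +
      (len(B) - 1) * np.cov(B, rowvar=False)) / (n - 2)

# Fisher direction: w = Sp^-1 (mA - mB); the score w'x is a single number
w = np.linalg.solve(Sp, mA - mB)
x = np.array([1.0, 0.2])
score = w @ x
cutoff = w @ (mA + mB) / 2          # midpoint between the projected means
print("class A" if score > cutoff else "class B")
```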

Table 4 Discriminant scores using the linear discriminant function as classifier (a), and the resulting confusion matrix (b)

The adaptive least squares (ALS) method [396, 585–588] is a modification of discriminant analysis which separates several activity classes (e.g. data ordered by a rating score) by a single discriminant function. The method has been compared with ordinary regression analysis, linear discriminant analysis, and other multivariate statistical approaches; in most cases the ALS approach was found to be superior in categorizing any number of classes of ordered data. ORMUCS (ordered multicategorial classification using simplex technique) [589] is an ALS-related approach which... [Pg.100]

The procedure is continued until all discriminant functions needed to solve the discrimination problem are found. By plotting pairs of discriminant functions against each other, the best separation of objects into groups after the linear transformation of the initial features can be visualized (cf. Figure 5.25). The projection of a particular object onto the separating line or hyperplane is called its score on the linear discriminant function. [Pg.189]
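A minimal sketch of this visualization with scikit-learn: three placeholder groups yield at most two discriminant functions, and each object's projections (its scores) on them can be plotted against each other. The data and group means are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Three placeholder groups in four variables
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 1.0, size=(30, 4)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)

# Three groups give at most two discriminant functions
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
scores = lda.transform(X)            # each row: object's scores on DF1, DF2

plt.scatter(scores[:, 0], scores[:, 1], c=y)
plt.xlabel("Discriminant function 1")
plt.ylabel("Discriminant function 2")
plt.show()
```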

Two studies have suggested that the IR spectra of synovial fluid specimens provide the basis to diagnose arthritis and to differentiate among its variants. A NIR study demonstrated that osteoarthritis, rheumatoid arthritis, and spondyloarthropathy could be distinguished on the basis of the synovial fluid absorption patterns in the range 2000–2400 nm. In that case, the pool of synovial fluid spectra was subjected to principal component analysis, and eight principal component scores for each spectrum were employed as the basis for linear discriminant analysis (LDA). On that basis, the optimal LDA classifier matched 105 of the 109 spectra to the correct clinical designation (see Table 7). [Pg.17]

The value of the discriminant score is calculated from a linear combination of the recorded values of the variables describing the objects, each suitably weighted to provide optimum discriminatory power. For two variables... [Pg.587]
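The excerpt breaks off at the formula. A hedged sketch of the standard two-variable form, with weights w1 and w2 chosen for optimum discriminatory power (the symbol names are mine, not the source's):

```latex
D = w_1 x_1 + w_2 x_2
```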

The discriminant function is linear: all the terms are added together to give a single number, the discriminant score. [Pg.587]

Figure 10.13 Principal components analysis scores plots: (a) using all three example variables (with the first two principal components, discrimination of the varieties, particularly sample Le2, would be very difficult); (b) using the best two variables, unweighted w selected [equation (10.31) in text] (discrimination of the varieties is now possible using only the first principal component); (c) discarding the best variable, unweighted w selected (linear discrimination of the varieties would not appear to be possible from this chart). Details of the variables are given in Table 10.3.
The diagnosis of colorectal cancer has been the focus of several studies [232-234]. Researchers used the first and second overtone C-H stretching regions to discriminate between cancerous and normal tissue. The use of linear discriminant analysis, artificial neural networks, and clustering analysis was compared, and very similar results were obtained. While the former results were obtained on resected samples, Shao et al. implemented an endoscope-based detection method to identify hyperplastic and adenomatous polyps in vivo [235]. Using a simple linear discriminant analysis based on principal component analysis scores, very good diagnostic sensitivities and specificities were obtained. [Pg.137]

From the probability density distributions of the discriminant scores u for classes A and B, an optimal decision threshold u0 can be determined. If an unknown has to be classified, it is assigned to class A if its discriminant variable is lower than u0, and otherwise to class B. In the case of a substantial overlap of the probability density distributions, an interval may be defined within which classifications are rejected. The decision vector (a linear combination of the features), together with the rule for assigning the classes, is called a classifier. [Pg.353]
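A minimal sketch of this decision rule in Python; the threshold u0 and the half-width of the rejection interval are placeholder assumptions:

```python
def classify(u, u0=0.0, reject_halfwidth=0.5):
    """Assign class A if the discriminant score u is below the threshold u0,
    class B if above; scores inside the overlap interval are rejected."""
    if abs(u - u0) < reject_halfwidth:
        return "rejected"            # substantial-overlap region: no decision
    return "A" if u < u0 else "B"

for u in (-1.2, -0.1, 0.3, 2.0):
    print(u, "->", classify(u))
```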

