Big Chemical Encyclopedia


Discriminant regularized

J.H. Friedman, Regularized discriminant analysis. J. Am. Stat. Assoc., 84 (1989) 165-175. [Pg.239]

W. Wu, Y. Mallet, B. Walczak, W. Penninckx, D.L. Massart, S. Heuerding and F. Erni, Comparison of regularized discriminant analysis, linear discriminant analysis and quadratic discriminant analysis, applied to NIR data. Anal. Chim. Acta, 329 (1996) 257-265. [Pg.240]

Discriminant analysis (DA) classifies samples on the basis of an a priori hypothesis, based on a previously determined TCA or other CA protocol. DA is also called "discriminant function analysis", and its natural extension is MDA (multiple discriminant analysis), sometimes named "discriminant factor analysis" or CDA (canonical discriminant analysis). Among these types of analysis, linear discriminant analysis (LDA) has been widely used to emphasize differences among classes of samples. Two further classification methods are QDA (quadratic discriminant analysis), an extension of LDA (Frank and Friedman, 1989), and RDA (regularized discriminant analysis), a compromise between LDA and QDA that performs better with varied class distributions and with high-dimensional data (Friedman, 1989). [Pg.94]
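The LDA-QDA compromise that RDA strikes can be sketched in a few lines of NumPy: each class scatter is blended with the pooled scatter (parameter lam, with lam=0 giving QDA-like and lam=1 giving LDA-like covariances) and then shrunk toward a scaled identity (parameter gamma), in the spirit of Friedman (1989). Function and parameter names are illustrative, not taken from any particular library.

```python
import numpy as np

def rda_fit(X, y, lam=0.5, gamma=0.1):
    """Fit a regularized discriminant model (sketch after Friedman, 1989)."""
    classes = np.unique(y)
    n, p = X.shape
    means, scatters, counts = {}, {}, {}
    for k in classes:
        Xk = X[y == k]
        counts[k] = len(Xk)
        means[k] = Xk.mean(axis=0)
        D = Xk - means[k]
        scatters[k] = D.T @ D              # class scatter (sum of squares)
    S_pooled = sum(scatters.values())      # pooled scatter over all classes
    params = {}
    for k in classes:
        # blend class scatter with pooled scatter, then normalize
        denom = (1 - lam) * counts[k] + lam * n
        S = ((1 - lam) * scatters[k] + lam * S_pooled) / denom
        # shrink toward a scaled identity; stabilizes S when p is large
        S = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
        _, logdet = np.linalg.slogdet(S)
        params[k] = (means[k], np.linalg.inv(S), logdet, np.log(counts[k] / n))
    return params

def rda_predict(X, params):
    labels = list(params)
    scores = np.empty((len(labels), len(X)))
    for i, k in enumerate(labels):
        mu, Sinv, logdet, logprior = params[k]
        D = X - mu
        # quadratic discriminant score; smaller means more likely class k
        scores[i] = np.einsum('ij,jk,ik->i', D, Sinv, D) + logdet - 2 * logprior
    return np.array(labels)[np.argmin(scores, axis=0)]
```

With lam and gamma chosen by cross-validation, this single family spans the whole LDA-QDA continuum, which is exactly the "compromise" role the excerpt describes.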

With regular cleansers, a procedure like FCAT (Forearm Controlled Application Test) provides good sensitivity for discriminating products based on their drying potential. Comparing a soap with a syndet bar, three clear trends appear in Figure 31.13: an increase in the visible appearance of dry skin over time, a concomitant decrease in the equilibrium hydration state of the skin, and an increase in the disruption of the moisture barrier, evidenced as an increase in TEWL. In all three measures, the syndet is seen as milder and less drying. [Pg.422]

Friedman and Frank [75] have shown that SIMCA is similar in form to quadratic discriminant analysis. The maximum-likelihood estimate of the inverse of the covariance matrix, which conveys information about the size, shape, and orientation of the data cloud for each class, is replaced by a principal component estimate. Because of the success of SIMCA, statisticians have recently investigated methods other than maximum likelihood to estimate the inverse of the covariance matrix, e.g., regularized discriminant analysis [76]. For this reason, SIMCA is often viewed as the first successful attempt by scientists to develop robust procedures for carrying out statistical discriminant analysis on data sets where maximum-likelihood estimates fail because there are more features than samples in the data set. [Pg.354]
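The replacement of the maximum-likelihood covariance by a principal-component estimate can be illustrated with a minimal SIMCA-style classifier: a separate PCA model is fitted per class, and a new sample is assigned to the class whose principal-component subspace leaves the smallest residual. This is only a sketch; full SIMCA also applies statistical tests on the residual variance (e.g., F-tests), which are omitted here.

```python
import numpy as np

def simca_fit(X, y, n_components=2):
    """One PCA model per class: class mean plus leading principal axes."""
    models = {}
    for k in np.unique(y):
        Xk = X[y == k]
        mu = Xk.mean(axis=0)
        # principal axes of the centred class data via SVD
        _, _, Vt = np.linalg.svd(Xk - mu, full_matrices=False)
        models[k] = (mu, Vt[:n_components])
    return models

def simca_residual(X, model):
    """Orthogonal distance: the part of x not captured by the class subspace."""
    mu, P = model
    Xc = X - mu
    recon = Xc @ P.T @ P
    return np.linalg.norm(Xc - recon, axis=1)

def simca_predict(X, models):
    keys = list(models)
    res = np.stack([simca_residual(X, models[k]) for k in keys])
    return np.array(keys)[np.argmin(res, axis=0)]
```

Because each class model uses only a few principal components, the procedure stays well-defined even when there are more features than samples, which is the situation where the maximum-likelihood covariance estimate fails.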

Koene, R. A. and Takane, Y. (1999). Discriminant component pruning: regularization and interpretation of multi-layered back-propagation networks. Neural Comput. 11, 783-802. [Pg.150]

Active sites in zeolites are located in a regular network of channels of uniform dimensions, whose shapes and sizes are characteristic of the individual zeolite. The diameters of the pore openings are of similar magnitude to the molecular dimensions of smaller organic compounds. Shape selectivity is the first and best-known consequence of this: it can be defined as the capacity of zeolites to discriminate between reactants, products and intermediates on the basis of their sizes and shapes (see also p 77). [Pg.278]

Therefore, CPCA uses exactly the same objective function as PCA: it tries to best explain the overall variance of the X matrix. The analysis, however, is made on two levels: the block level, which considers each of the probes, and the super level, which expresses the consensus of all blocks. CPCA provides a solution on the super level that is identical to the solution found by regular PCA, i.e., the same T and P matrices are obtained. Additionally, the method produces block scores and block loadings for each of the probes, and a weight matrix which gives the contribution of each block to the overall scores. The block scores represent the particular point of view of the model given by a certain probe and provide unique information not present in regular PCA. Distances between objects in the block scores can be used to assess the relative importance of the different probes for their discrimination. [Pg.59]
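The two-level structure can be sketched in NumPy. Exploiting the equivalence noted above, the super level is computed as ordinary PCA (via SVD) on the concatenated blocks; block loadings, block scores, and block weights are then read off per block, so that the super scores come out exactly as the weighted sums of the block scores. Function and variable names here are illustrative.

```python
import numpy as np

def cpca(blocks, n_components=2):
    """Consensus-PCA sketch: super level = ordinary PCA on the joined blocks."""
    X = np.hstack(blocks)
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T_super = U[:, :n_components] * s[:n_components]   # super scores (= PCA scores)
    edges = np.cumsum([0] + [b.shape[1] for b in blocks])
    block_scores, block_loadings, weights = [], [], []
    for b, i, j in zip(blocks, edges[:-1], edges[1:]):
        Pb = Vt[:n_components, i:j].T        # this block's slice of the loadings
        w = np.linalg.norm(Pb, axis=0)       # block weight per component
        Pb_n = Pb / w                        # unit-length block loadings
        Tb = (b - b.mean(axis=0)) @ Pb_n     # block scores: this probe's view
        block_scores.append(Tb)
        block_loadings.append(Pb_n)
        weights.append(w)
    return T_super, block_scores, block_loadings, np.array(weights)
```

By construction, each super-score column is the weight-combined sum of the corresponding block-score columns, which is the "contribution of each block to the overall scores" that the weight matrix expresses.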

The most popular classification methods are Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Regularized Discriminant Analysis (RDA), K-Nearest Neighbours (KNN), classification tree methods (such as CART), Soft Independent Modeling of Class Analogy (SIMCA), potential function classifiers (PFC), the Nearest Mean Classifier (NMC) and the Weighted Nearest Mean Classifier (WNMC). Moreover, several classification methods can be found among the artificial neural networks. [Pg.60]
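Several of the listed methods are available in scikit-learn, so a small cross-validated comparison on synthetic data takes only a few lines. The dataset and settings below are arbitrary illustrations; note that sklearn's `QuadraticDiscriminantAnalysis(reg_param=...)` shrinks each class covariance toward the identity, an RDA-like regularization.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# synthetic 3-class data; parameters chosen only for illustration
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    # reg_param blends class covariances toward the identity (RDA-like)
    "QDA (regularized)": QuadraticDiscriminantAnalysis(reg_param=0.1),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```

Setting `reg_param=0` recovers plain QDA; in practice the regularization strength, like lam and gamma in RDA, would be tuned by cross-validation.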





© 2024 chempedia.info