Big Chemical Encyclopedia


Parametric classification

Friedman, J. H. (1977) A recursive partitioning decision rule for nonparametric classification. IEEE Trans. Comput. C-26, 404-408. [Pg.299]

Quadratic discriminant analysis (QDA) is a probabilistic parametric classification technique that represents an evolution of LDA for nonlinear class separations. Like LDA, QDA is based on the hypothesis that the probability density distributions are multivariate normal, but in this case the dispersion is not the same for all of the categories. It follows that the categories differ both in the position of their centroids and in their variance-covariance matrices (different location and dispersion), as represented in Fig. 2.16A. Consequently, the ellipses of different categories differ not only in their position in the plane but also in eccentricity and axis orientation (Geisser, 1964). By connecting the intersection points of each pair of corresponding ellipses (at the same Mahalanobis distance from the respective centroids), a parabolic delimiter is identified (see Fig. 2.16B). The name quadratic discriminant analysis derives from this feature. [Pg.88]
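The per-class covariance idea above can be sketched in a few lines of pure Python. This is a minimal two-class QDA on invented 2-D data (equal priors assumed): each class gets its own centroid and 2x2 variance-covariance matrix, and a sample is assigned to the class with the highest quadratic discriminant score.

```python
# Minimal two-class QDA sketch (invented data, equal priors assumed).
# Each class keeps its own mean vector and 2x2 covariance matrix, which is
# exactly what makes the decision boundary quadratic rather than linear.
import math

def mean(rows):
    return [sum(r[i] for r in rows) / len(rows) for i in range(2)]

def cov(rows, mu):
    n = len(rows)
    c = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - mu[0], r[1] - mu[1]]
        for i in range(2):
            for j in range(2):
                c[i][j] += d[i] * d[j] / (n - 1)
    return c

def inv2(m):
    # Determinant and inverse of a 2x2 matrix.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return det, [[m[1][1] / det, -m[0][1] / det],
                 [-m[1][0] / det, m[0][0] / det]]

def qda_score(x, mu, sigma):
    # g(x) = -0.5 ln|Sigma| - 0.5 (x - mu)^T Sigma^-1 (x - mu)
    det, si = inv2(sigma)
    d = [x[0] - mu[0], x[1] - mu[1]]
    maha = sum(d[i] * si[i][j] * d[j] for i in range(2) for j in range(2))
    return -0.5 * math.log(det) - 0.5 * maha

class_a = [(1.0, 1.0), (1.5, 0.8), (0.8, 1.4), (1.2, 1.1)]
class_b = [(4.0, 4.2), (5.0, 3.5), (4.5, 5.0), (3.8, 4.4)]
models = {}
for name, rows in (("A", class_a), ("B", class_b)):
    mu = mean(rows)
    models[name] = (mu, cov(rows, mu))

def classify(x):
    return max(models, key=lambda k: qda_score(x, *models[k]))

print(classify((1.1, 1.0)))   # sample near the class A centroid
print(classify((4.4, 4.3)))   # sample near the class B centroid
```

Because each class contributes its own |Sigma| and Mahalanobis term, categories with different dispersion get the curved (parabolic) delimiters described above.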

In Table 1 the main differences between the parametric classification-modelling techniques are summarized. The large number of techniques proposed in recent years and their use in the classification of food samples demonstrate the interest in...

Table 1. Comparison of the parametric classification-modelling techniques...
A non-parametric classification technique was recently proposed by Schatzki and Vandercook (172) to evaluate orange juice. The system considers the total sugars, reactive phenols, total amino acids, arginine and γ-aminobutyric acid. With the para-... [Pg.414]

The Bayes rule simply states that a sample or object should be assigned to that group having the highest conditional probability and application of this rule to parametric classification schemes provides optimum discriminating capability. An explanation of the term conditional probability is perhaps in order here. [Pg.127]
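The Bayes rule stated above reduces to a short calculation. The numbers below are invented for illustration: given a prior probability for each group and the class-conditional likelihood p(x | group) at the observed measurement, the posterior (conditional) probability follows from Bayes' equation and the sample goes to the group with the highest posterior.

```python
# Illustrative numbers only: two groups with assumed priors and assumed
# class-conditional likelihoods p(x | group) at the observed x.
priors = {"group1": 0.6, "group2": 0.4}
likelihood = {"group1": 0.02, "group2": 0.09}   # p(x | group)

# Bayes' rule: P(group | x) = P(group) p(x | group) / p(x)
evidence = sum(priors[g] * likelihood[g] for g in priors)
posterior = {g: priors[g] * likelihood[g] / evidence for g in priors}

# Assign the sample to the group with the highest conditional probability.
assigned = max(posterior, key=posterior.get)
print(assigned, round(posterior[assigned], 3))
```

Here the smaller prior of group2 is outweighed by its much larger likelihood, so the rule assigns the sample to group2 with posterior 0.75.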

Theory. SIMCA is a parametric classification method introduced by Wold (29), which supposes that the objects of a given class are normally distributed. The particularity of this PCA-based method is that one model is built for each class separately, that is, disjoint class modeling is performed. The algorithm starts by determining the optimal number of PCs for each individual model with CV. The resulting PCs are then used to define a hypervolume for each class. The boundary around one group of objects is then the confidence limit for the residuals of all objects determined by a statistical T-test (30, 31). The direction of the PCs and the limits established for these PCs define the model of a class (Fig. 13.13). [Pg.312]
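The disjoint class modeling described above can be illustrated with a deliberately simplified sketch on invented 2-D data: one single-component PCA model is fitted per class (leading PC found by power iteration), and a new sample is compared to each class model through its residual distance from that class's principal axis. A real SIMCA implementation would choose the number of PCs per class by cross-validation and set a statistical confidence limit on the residuals; both steps are omitted here for brevity.

```python
# Simplified SIMCA-style sketch: one 1-PC model per class (disjoint modeling),
# classification by smallest residual from the class's principal axis.
# Data are invented; CV-based PC selection and residual confidence limits
# from the real method are intentionally left out.
import math

def centroid(rows):
    return [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]

def first_pc(rows, mu, iters=200):
    # Power iteration on X^T X to find the leading principal component.
    d = len(mu)
    X = [[r[i] - mu[i] for i in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        s = [sum(x[i] * v[i] for i in range(d)) for x in X]          # scores
        w = [sum(s[k] * X[k][i] for k in range(len(X))) for i in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

def residual(x, mu, pc):
    # Distance from x to its projection onto the class's PC line.
    d = [x[i] - mu[i] for i in range(len(mu))]
    t = sum(d[i] * pc[i] for i in range(len(mu)))
    return math.sqrt(sum((d[i] - t * pc[i]) ** 2 for i in range(len(mu))))

class_a = [(1.0, 1.1), (2.0, 2.1), (3.0, 2.9), (4.0, 4.1)]   # lies near y = x
class_b = [(1.0, 5.0), (2.0, 4.1), (3.0, 2.9), (4.0, 2.0)]   # lies near y = 6 - x
models = {}
for name, rows in (("A", class_a), ("B", class_b)):
    mu = centroid(rows)
    models[name] = (mu, first_pc(rows, mu))

def classify(x):
    return min(models, key=lambda k: residual(x, *models[k]))

print(classify((2.5, 2.5)))   # close to class A's axis
print(classify((1.5, 4.5)))   # close to class B's axis
```

Because each class is modeled separately, a sample far from every class hypervolume could in principle be rejected by all models — the distinctive feature of disjoint class modeling.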

PCA is not only used as a method on its own but also as part of other mathematical techniques such as SIMCA classification (see section on parametric classification methods), principal component regression analysis (PCRA) and partial least-squares modelling with latent variables (PLS). Instead of original descriptor variables (x-variables), PCs extracted from a matrix of x-variables (descriptor matrix X) are used in PCRA and PLS as independent variables in a regression model. These PCs are called latent variables in this context. [Pg.61]
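The PCRA idea — regressing on latent variables instead of the original x-variables — can be sketched briefly. This toy example uses invented data with two collinear descriptors: the score on the first PC (found by power iteration) serves as the single latent variable in an ordinary least-squares fit. A real PCR analysis would retain several PCs chosen by cross-validation.

```python
# Toy principal component regression: regress y on the first-PC score
# (the latent variable) instead of the original, collinear x-variables.
# Data invented; real PCR would select the number of PCs by cross-validation.
import math

X = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1), (4.0, 8.0), (5.0, 9.9)]  # x2 ~ 2*x1
y = [3.1, 6.0, 9.2, 11.9, 15.1]                                   # y ~ 3*x1

mx = [sum(r[i] for r in X) / len(X) for i in range(2)]
my = sum(y) / len(y)
Xc = [[r[i] - mx[i] for i in range(2)] for r in X]                # center X

# First PC by power iteration on X^T X.
v = [1.0, 1.0]
for _ in range(200):
    s = [row[0] * v[0] + row[1] * v[1] for row in Xc]
    w = [sum(s[k] * Xc[k][i] for k in range(len(Xc))) for i in range(2)]
    norm = math.sqrt(w[0] ** 2 + w[1] ** 2)
    v = [w[0] / norm, w[1] / norm]

t = [row[0] * v[0] + row[1] * v[1] for row in Xc]   # latent-variable scores
# Univariate least squares of centered y on the scores t.
b = sum(ti * (yi - my) for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)

def predict(x):
    score = (x[0] - mx[0]) * v[0] + (x[1] - mx[1]) * v[1]
    return my + b * score

print(round(predict((3.0, 6.0)), 2))
```

Because the two descriptors are nearly collinear, one latent variable carries almost all the information, which is exactly the situation where PCR is preferred over regression on the raw x-variables.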

The probability density functions cannot be stored point by point because they depend on many (d) variables. Therefore several parametric classification methods assume Gaussian distributions, and the estimated parameters of these distributions are used to specify a decision function. Another common assumption of parametric classifiers is statistically independent pattern features. [Pg.78]

A statistically meaningful estimation of probability densities requires very large data sets. Therefore, chemical applications of parametric classification methods always include assumptions which are often not fulfilled or cannot be verified. A severe assumption is the statistical independence of the pattern components, which is certainly often not satisfied. Generation of new independent features is usually too laborious (Chapter 10). [Pg.87]

The Bayesian approach is one of the central probabilistic parametric classification methods; it is based on the consistent application of the classic Bayes equation for conditional probability [34] (also known as the naive Bayes classifier) to construct a decision rule; a modified algorithm is explained in references [105, 109, 121]. In this approach, the object of recognition is a chemical compound C, which can be specified by a set of probability features (c1, ..., cm) whose random values are distributed through all classes of objects. The features are interpreted as independent random variables of an m-dimensional random variable. The classification metric is the a posteriori probability that the object in question belongs to class k. Compound C is assigned to the class for which the probability of membership is the highest. [Pg.384]
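A minimal Gaussian naive Bayes sketch makes the mechanics concrete. Class labels, feature values, and the normality assumption for each feature are all invented for illustration: each feature c_i is modeled as an independent normal variable per class, so the log-posterior is a simple sum, and the compound goes to the class with the highest value.

```python
# Minimal Gaussian naive Bayes sketch (invented training data).
# Feature independence turns the log-posterior into a sum of per-feature terms.
import math
from statistics import mean, stdev

train = {
    "active":   [(2.1, 0.9), (2.4, 1.1), (1.9, 1.0), (2.2, 0.8)],
    "inactive": [(0.5, 3.0), (0.7, 2.8), (0.4, 3.2), (0.6, 2.9)],
}

# Per class and per feature: estimated mean and standard deviation.
params = {cls: [(mean(col), stdev(col)) for col in zip(*rows)]
          for cls, rows in train.items()}
priors = {cls: len(rows) / sum(len(r) for r in train.values())
          for cls, rows in train.items()}

def log_gauss(x, mu, sd):
    return -math.log(sd * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * sd ** 2)

def classify(features):
    # log P(class) + sum_i log p(c_i | class); assign to the maximum.
    def log_post(cls):
        return math.log(priors[cls]) + sum(
            log_gauss(x, mu, sd) for x, (mu, sd) in zip(features, params[cls]))
    return max(train, key=log_post)

print(classify((2.0, 1.0)))
print(classify((0.6, 3.0)))
```

Working in log space avoids underflow when many features are multiplied, which matters once m grows beyond a handful of descriptors.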

Classification by status Abandoned well Suspended well Observation well Injection well Oil producer Gas producer Exhausted producer Production well Observation well Injection well Abandoned well Suspended well Geological research well - oil - gas Production well (active) - oil - gas Production well (inactive) Parametric well Injection well Abandoned well - dry - exhausted - technical reasons Well being drilled Abandoned well Suspended well Oil producing well Gas producing well Gas and oil producing well Injection well Well under observation Producing well depleted... [Pg.29]

Classification by objective Exploration Exploration OOOO well Appraisal well OO Delineation well Q New-structure test Q New-pool test Q Deeper-pool test Q Shallower-pool test Q New-licence block QQ appraisal test Fault block O extension test Appraisal/outpost 0 test (delineation) Stratigraphic test O0 New-field wildcat 0 New-pool wildcat 0 Deeper-pool test 0 Shallower-pool test 0 Outpost or 00 extension test Key well 00 - Group one - Group two Parametric well O0 Core well 00 Prospecting well 00 Exploratory well 00 Exploration O000 well Appraisal well 0O... [Pg.30]

Classification by status Abandoned well Suspended well Observation well Injection well Oil producing well Gas producing well Depleted producing well Production well Observation well Injection well Abandoned well Suspended well Geological research well - oil - gas Production well (active) - oil - gas Production well (inactive) Parameter-study well Injection well Abandoned well - dry - depleted - technical reasons Well being drilled Abandoned well Suspended well Oil producing well Gas well Gas and oil well Injection well Observation well Depleted producing well [Pg.60]

LDA was the first classification technique introduced into multivariate analysis, by Fisher (1936). It is a probabilistic parametric technique; that is, it is based on the estimation of multivariate probability density functions, which are entirely described by a minimum number of parameters: means, variances, and covariances, as in the case of the well-known univariate normal distribution. LDA is based on the hypotheses that the probability density distributions are multivariate normal and that the dispersion is the same for all the categories. This means that the variance-covariance matrix is the same for all of the categories, while the centroids are different (different location). In the case of two variables, the probability density function is bell-shaped, and its elliptic section lines correspond to equal probability density values and to the same Mahalanobis distance from the centroid (see Fig. 2.15A). [Pg.86]
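The equal-dispersion hypothesis above leads directly to a compact classifier. In this pure-Python sketch on invented 2-D data, both classes share one pooled variance-covariance matrix, so a sample is simply assigned to the class whose centroid is nearest in Mahalanobis distance — which, with equal priors, is equivalent to the linear discriminant rule.

```python
# LDA sketch on invented 2-D data: a single pooled covariance matrix for all
# classes (the LDA hypothesis), classification by smallest Mahalanobis
# distance to a class centroid. Equal priors assumed.
import math

class_a = [(1.0, 1.0), (1.4, 0.8), (0.8, 1.3), (1.2, 1.1)]
class_b = [(3.0, 3.2), (3.4, 3.0), (2.8, 3.5), (3.2, 3.3)]

def centroid(rows):
    return [sum(r[i] for r in rows) / len(rows) for i in range(2)]

mu = {"A": centroid(class_a), "B": centroid(class_b)}

# Pooled covariance: squared deviations summed over both classes, / (n - g).
pooled = [[0.0, 0.0], [0.0, 0.0]]
for cls, rows in (("A", class_a), ("B", class_b)):
    for r in rows:
        d = [r[0] - mu[cls][0], r[1] - mu[cls][1]]
        for i in range(2):
            for j in range(2):
                pooled[i][j] += d[i] * d[j]
n_minus_g = len(class_a) + len(class_b) - 2
pooled = [[v / n_minus_g for v in row] for row in pooled]

det = pooled[0][0] * pooled[1][1] - pooled[0][1] * pooled[1][0]
inv = [[pooled[1][1] / det, -pooled[0][1] / det],
       [-pooled[1][0] / det, pooled[0][0] / det]]

def mahalanobis(x, m):
    d = [x[0] - m[0], x[1] - m[1]]
    return math.sqrt(sum(d[i] * inv[i][j] * d[j]
                         for i in range(2) for j in range(2)))

def classify(x):
    return min(mu, key=lambda cls: mahalanobis(x, mu[cls]))

print(classify((1.1, 1.0)))
print(classify((3.1, 3.2)))
```

Sharing one covariance matrix is what makes the boundary a straight line; letting each class keep its own matrix turns this into the QDA case with curved delimiters.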

The aim of supervised classification is to create rules based on a set of training samples belonging to a priori known classes. The resulting rules are then used to classify new samples into none, one, or several of the classes. Supervised pattern recognition methods can be classified as parametric or nonparametric and linear or nonlinear. The term parametric means that the method makes an assumption about the distribution of the data, for instance, a Gaussian distribution. Frequently used parametric methods are LDA, QDA, PLSDA, and SIMCA. In contrast, kNN and CART make no assumption about the distribution of the data, so these procedures are considered nonparametric. Another distinction between the classification techniques concerns the... [Pg.303]

Theory. LDA, a popular method for supervised classification, was introduced by Fisher in 1936 (21). The goal of this method is to classify the samples by establishing a linear function of the variables x_i (where i ranges from 1 to n, the number of variables considered) that separates the classes existing in the training set (Fig. 13.8). Classification is based on the interclass discrimination (22). It is a parametric method because it assumes that the distribution of the samples within the classes is Gaussian. [Pg.304]

QDA is similar to LDA, but this method is based on a quadratic classification curve instead of a straight line. The data must be normally distributed, as for the LDA method. QDA is thus a nonlinear parametric method. [Pg.305]
