
K-nearest neighbours method

In a more sophisticated version of this technique, called the k-nearest neighbour method (k-NN method), one selects the k nearest objects to u and applies a majority rule: u is classified in the group to which the majority of the k objects belong. Figure 33.12 gives an example of a 3-NN method. One selects the three nearest neighbours (A, B and C) to the unknown u. Since A and B belong to L, one... [Pg.223]
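The majority rule described above can be written down in a few lines. The following is a minimal Python sketch, assuming Euclidean distance; the two classes (L and K), the feature values and the choice k = 3 are illustrative stand-ins for the figure, not data taken from it.

```python
# Minimal k-NN majority rule: classify the unknown u into the class that is
# most common among its k nearest training objects.
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, u, k=3):
    """Assign u to the majority class among its k nearest neighbours."""
    distances = np.linalg.norm(np.asarray(X_train) - np.asarray(u), axis=1)  # Euclidean distances to u
    nearest = np.argsort(distances)[:k]             # indices of the k closest objects
    votes = Counter(y_train[i] for i in nearest)    # class membership of those objects
    return votes.most_common(1)[0][0]               # majority class

# Illustrative data: A and B belong to class L, C to class K; u is assigned to L.
X_train = np.array([[1.0, 1.1], [0.9, 1.3], [2.5, 2.4]])
y_train = ["L", "L", "K"]
print(knn_classify(X_train, y_train, [1.0, 1.2], k=3))  # -> "L"
```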

We have employed a variety of unsupervised and supervised pattern recognition methods, such as principal component analysis, cluster analysis, the k-nearest neighbour method, linear discriminant analysis, and logistic regression analysis, to study such reactivity spaces. We have published a more detailed description of these investigations. As a result, functions could be developed that use the values of the chemical effects calculated by the methods mentioned in this paper. These functions allow the calculation of the reactivity of each individual bond of a molecule. [Pg.354]

Successful applications of the K-nearest neighbour method have been reported by... [Pg.153]

A first distinction which is often made is that between methods focusing on discrimination and those that are directed towards modelling classes. Most methods explicitly or implicitly try to find a boundary between classes. Some methods, such as linear discriminant analysis (LDA, Sections 33.2.2 and 33.2.3), are designed to find explicit boundaries between classes, while the k-nearest neighbours (k-NN, Section 33.2.4) method does this implicitly. Methods such as SIMCA (Section 33.2.7) put the emphasis more on similarity within a class than on discrimination between classes. Such methods are sometimes called disjoint class modelling methods. While the discrimination-oriented methods build models based on all the classes concerned in the discrimination, the disjoint class modelling methods model each class separately. [Pg.208]
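To make the distinction concrete, here is a deliberately simple Python sketch under assumed stand-ins: the "discriminant" rule uses all classes at once to place an object, whereas the "disjoint" test looks at one class in isolation. Neither function is the actual LDA or SIMCA algorithm; they only illustrate the joint-versus-per-class structure described above.

```python
# Discrimination vs. disjoint class modelling, in caricature form.
import numpy as np

def discriminant_rule(X, y, u):
    """Assign u to the class whose centroid is nearest: a rule built from all classes together."""
    X, y, u = np.asarray(X), np.asarray(y), np.asarray(u)
    classes = np.unique(y)
    return min(classes, key=lambda c: np.linalg.norm(u - X[y == c].mean(axis=0)))

def class_fits(X_class, u, n_std=3.0):
    """Disjoint-style test: does u lie within n_std standard deviations of this class,
    judged from this class's own data only? (A crude stand-in for a SIMCA class model.)"""
    X_class, u = np.asarray(X_class), np.asarray(u)
    mean, std = X_class.mean(axis=0), X_class.std(axis=0)
    return bool(np.all(np.abs(u - mean) <= n_std * std))
```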

D. Coomans and D.L. Massart, Alternative K-nearest neighbour rules in supervised pattern recognition. Part 2. Probabilistic classification on the basis of the kNN method modified for direct density estimation. Anal. Chim. Acta, 138 (1982) 153-165. [Pg.240]

The Jarvis-Patrick method involves the use of a list of the top K nearest neighbours for each molecule in a dataset, i.e., the K molecules that are most similar to it. Once these lists have been produced for each molecule in the dataset that is to be processed, two molecules are clustered together if they are nearest neighbours of each other and if they additionally have some... [Pg.120]
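A Python sketch of this clustering criterion is given below. The excerpt leaves the required number of shared neighbours open ("some..."), so K = 4 and k_min = 2 here are purely illustrative, as is the use of Euclidean distance to rank neighbours.

```python
# Jarvis-Patrick-style clustering: link two objects if each appears in the
# other's top-K neighbour list and they share at least k_min neighbours;
# clusters are the connected components of these links.
import numpy as np

def jarvis_patrick(X, K=4, k_min=2):
    X = np.asarray(X, dtype=float)
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)                          # an object is not its own neighbour
    nn = [set(np.argsort(dist[i])[:K]) for i in range(n)]   # top-K list per object

    parent = list(range(n))                                 # union-find over linked pairs
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            mutual = j in nn[i] and i in nn[j]               # nearest neighbours of each other
            shared = len(nn[i] & nn[j]) >= k_min             # enough common neighbours
            if mutual and shared:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]                       # cluster label per object
```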

The discriminant analysis techniques discussed above rely for their effective use on a priori knowledge of the underlying parent distribution function of the variates. In analytical chemistry, the assumption of a multivariate normal distribution may not be valid. A wide variety of techniques for pattern recognition not requiring any assumption regarding the distribution of the data have been proposed and employed in analytical spectroscopy. These methods are referred to as non-parametric methods. Most of these schemes are based on attempts to estimate P(x|g) and include histogram techniques, kernel estimates and expansion methods. One of the most common techniques is that of K-nearest neighbours. [Pg.138]
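As an illustration of the k-NN route to estimating P(x|g), the Python sketch below uses the standard estimate k / (n V), where V is the volume of the smallest hypersphere centred on x that contains the k nearest objects of group g. The value k = 5 is an arbitrary choice for the sketch.

```python
# k-NN density estimate of P(x|g) from the objects of a single group.
import math
import numpy as np

def knn_density(x, X_group, k=5):
    """Estimate the density at x as k / (n * V), V being the volume of the
    d-dimensional sphere reaching the k-th nearest neighbour of x."""
    X_group, x = np.asarray(X_group, dtype=float), np.asarray(x, dtype=float)
    n, d = X_group.shape
    r_k = np.sort(np.linalg.norm(X_group - x, axis=1))[k - 1]   # radius to k-th neighbour
    volume = math.pi ** (d / 2) / math.gamma(d / 2 + 1) * r_k ** d
    return k / (n * volume)                                     # assumes r_k > 0
```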

The choice of value for k is somewhat empirical and, for overlapping classes, k = 3 or 5 have been proposed to provide good classification. In general, however, k = 1 is the most widely used case and is referred to as the 1-NN method or, simply, the nearest-neighbour method. [Pg.140]
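One common way of making this empirical choice is to compare leave-one-out error rates for a few candidate values of k. The sketch below reuses the knn_classify function from the earlier example; X and y stand for a feature matrix and its class labels and are not specified here.

```python
# Leave-one-out comparison of k = 1, 3 and 5 for a k-NN classifier.
import numpy as np

def loo_error(X, y, k):
    """Leave-one-out misclassification rate of k-NN on (X, y)."""
    X, y = np.asarray(X), np.asarray(y)
    wrong = 0
    for i in range(len(X)):
        X_rest = np.delete(X, i, axis=0)        # leave object i out
        y_rest = np.delete(y, i)
        if knn_classify(X_rest, y_rest, X[i], k=k) != y[i]:
            wrong += 1
    return wrong / len(X)

# for k in (1, 3, 5):
#     print(k, loo_error(X, y, k))
```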

B.K. Alsberg, R. Goodacre, J.J. Rowland and D.B. Kell, Classification of pyrolysis mass spectra by fuzzy multivariate rule induction: comparison with regression, K-nearest neighbour, neural and decision-tree methods. Analytica Chimica Acta, 348(1-3) (1997) 389-407. [Pg.408]

TABLE 13. K-nearest neighbour classification of 13 chemical classes from binary encoded infrared spectra tested with the leave-one-out method. The Tanimoto distance was used because it gave slightly better results than the Hamming distance. P... [Pg.161]
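The two distance measures named in the table caption are standard for binary-encoded spectra and can be sketched as follows; the spectra themselves are not reproduced in this excerpt.

```python
# Tanimoto and Hamming distances between binary-encoded spectra (0/1 arrays).
import numpy as np

def tanimoto_distance(x, y):
    """1 - c/(a + b - c), with a, b the number of bits set in x and y and
    c the number of bits set in both (assumes at least one bit is set)."""
    x, y = np.asarray(x), np.asarray(y)
    a, b = x.sum(), y.sum()
    c = np.logical_and(x, y).sum()
    return 1.0 - c / (a + b - c)

def hamming_distance(x, y):
    """Number of positions at which the two binary spectra differ."""
    return int(np.sum(np.asarray(x) != np.asarray(y)))
```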

Alsberg, B.K., Goodacre, R., Rowland, J.J. and Kell, D.B. (1997) Classification of pyrolysis mass spectra by fuzzy multivariate rule induction: comparison with regression, K-nearest neighbour, neural and decision-tree methods. Analytica Chimica Acta, in press. [Pg.369]

A number of different methods may be described as looking for nearest neighbours, e.g., cluster analysis (see Section 5.4), but in this book the term is applied to just one approach, k-nearest-neighbour. The starting point for the k-nearest-neighbour technique (KNN) is the calculation of a distance matrix as required for non-linear mapping. Various distance measures may be used to express the similarity between compounds but the Euclidean distance, as defined in eqn (4.2) (reproduced below), is probably most common ... [Pg.90]
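Equation (4.2) is not reproduced in this excerpt, so the sketch below simply assumes the standard Euclidean definition, d_ij = sqrt(sum_k (x_ik - x_jk)^2), and computes the full distance matrix from a feature matrix X with one row per compound.

```python
# Pairwise Euclidean distance matrix: the starting point for KNN (and for
# non-linear mapping) as described above.
import numpy as np

def euclidean_distance_matrix(X):
    """Return the n x n matrix of Euclidean distances between the rows of X."""
    X = np.asarray(X, dtype=float)
    diff = X[:, None, :] - X[None, :, :]          # all pairwise differences
    return np.sqrt((diff ** 2).sum(axis=2))
```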

The method shows its ability to identify the correct fault classes and to classify the subsequent test data with a low misclassification rate compared with other fault diagnosis methods such as the K-Nearest Neighbour (KNN) method. [Pg.900]

