Big Chemical Encyclopedia


Nearest neighbour classification

D. Coomans and D.L. Massart, Alternative K-nearest neighbour rules in supervised pattern recognition. Part 1. K-nearest neighbour classification by using alternative voting rules. Anal. Chim. Acta, 136, 15-27 (1982). [Pg.486]

The basic idea underlying nearest-neighbour methods is conceptually very simple, and in practice it is mathematically simple to implement. The general method is based on applying the so-called K-nearest neighbour classification rule, usually referred to as K-NN. The distance between the pattern vector of... [Pg.138]

This is a common form of the nearest-neighbour classification rule and assigns a new, unclassified object to the group that contains the majority of its nearest neighbours. [Pg.140]
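The majority-vote rule described above can be sketched in a few lines. This is a minimal, illustrative implementation; the function and variable names are assumptions for this sketch, not taken from the cited sources, and Euclidean distance is assumed as the distance measure.

```python
# Hypothetical sketch of the K-NN majority-vote classification rule.
# Names and the Euclidean distance choice are illustrative assumptions.
from collections import Counter
import math

def knn_classify(train, new_vector, k=3):
    """Assign new_vector to the class holding the majority among its
    k nearest neighbours.

    train: list of (pattern_vector, class_label) pairs.
    """
    # Euclidean distance between two pattern vectors
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Rank the training objects by distance to the unclassified object
    neighbours = sorted(train, key=lambda item: dist(item[0], new_vector))[:k]
    # Majority vote among the k nearest neighbours
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

For example, with training objects `[((0, 0), 'A'), ((0, 1), 'A'), ((5, 5), 'B'), ((6, 5), 'B')]` and `k=3`, the unclassified vector `(1, 0)` is assigned to class `'A'`, since two of its three nearest neighbours belong to that group.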

Nearest neighbours: classification, 138; clustering, 103, 107. Neural networks, 147. Noise, 31. [Pg.215]

The preservation of the local distances can be evaluated by using the K-nearest neighbour classification (Chapter 3) and comparing the results for the d-dimensional patterns and for the 2-dimensional representations. [Pg.97]
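The comparison described above can be sketched as follows. This is an illustrative sketch, not the cited procedure: the helper name is assumed, and leave-one-out 1-NN accuracy with Euclidean distance stands in for the general K-NN evaluation. Similar accuracies in the d-dimensional and 2-dimensional spaces suggest that local distances were preserved by the mapping.

```python
# Illustrative sketch (names assumed): compare leave-one-out 1-NN
# classification accuracy in the original d-dimensional space and in a
# 2-dimensional representation of the same objects.
import math

def one_nn_loo_accuracy(vectors, labels):
    """Leave-one-out 1-nearest-neighbour classification accuracy."""
    correct = 0
    for i, v in enumerate(vectors):
        # Find the nearest *other* object (leave-one-out)
        j = min((j for j in range(len(vectors)) if j != i),
                key=lambda j: math.dist(v, vectors[j]))
        correct += labels[j] == labels[i]
    return correct / len(vectors)
```

One would call `one_nn_loo_accuracy` once with the d-dimensional pattern vectors and once with their 2-dimensional coordinates, comparing the two accuracies.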

K-nearest neighbour classification, linear regression, simplex optimization. [Pg.153]

TABLE 13. K-nearest neighbour classification of 13 chemical classes from binary-encoded infrared spectra, tested with the leave-one-out method. The Tanimoto distance was used because it gave slightly better results than the Hamming distance. ... [Pg.161]

Table 5.2 Nearest-neighbour classification for NMR (12 features) data set (from Kowalski and Bender 1972, copyright (1972) American Chemical Society)...
Table 5.5 Nearest-neighbour classification of glycosides (from Goux and Weber 1993, with permission of Elsevier Science)...
In order to tackle the issues of non-linearity and scaling, a second version of DAISY was recoded from scratch and implemented (Figure 7.1). This new version (NNC/NVD DAISY) was based on nearest-neighbour classification (NNC), a simple, yet very powerful classification scheme first... [Pg.103]

The number of occurrences of a certain substructure in the hitlist is compared with the corresponding number for the library, and a probability is derived for the presence of that substructure in the unknown. This classification method is a variant of the well-known k-nearest neighbour classification. Each mass spectrum is considered as a point in a multidimensional space; the neighbours nearest to the spectrum of the unknown correspond to the most similar reference spectra in a library search. If the majority of the k neighbours (k is typically between 1 and 10) contain a certain substructure, then that substructure is predicted to be present in the unknown. A drawback of this approach is the high computational effort necessary for classifying an unknown, because a full library search is required. The performance has been described by Stein (1995, see Further reading section) as sufficient to recommend it for routine use as a first step in structure elucidation. [Pg.241]
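The substructure-prediction variant described above can be sketched as follows. This is a hedged illustration, not the cited implementation: the spectrum representation (tuples of peak intensities), the Euclidean similarity measure, and all names are assumptions made for the sketch.

```python
# Hypothetical sketch of substructure prediction by k-NN library search.
# Spectrum encoding, distance measure, and names are illustrative assumptions.
import math

def predict_substructures(unknown, library, k=5):
    """Predict which substructures are present in an unknown spectrum.

    unknown: spectrum as a tuple of peak intensities.
    library: list of (spectrum, set_of_substructures) reference entries.
    A substructure is predicted present if a majority of the k most
    similar reference spectra contain it.
    """
    # Full library search: rank all references by distance to the unknown
    hits = sorted(library, key=lambda entry: math.dist(entry[0], unknown))[:k]
    # Count how often each substructure occurs in the hitlist
    counts = {}
    for _, substructures in hits:
        for s in substructures:
            counts[s] = counts.get(s, 0) + 1
    # Majority vote over the k nearest reference spectra
    return {s for s, c in counts.items() if c > k / 2}
```

The `sorted` call over the whole library reflects the drawback noted above: every classification requires a full library search.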


See other pages where Nearest neighbour classification is mentioned: [Pg.40]    [Pg.464]    [Pg.14]    [Pg.144]    [Pg.146]    [Pg.62]    [Pg.91]    [Pg.104]    [Pg.416]   
See also in source [Pg.110]








© 2024 chempedia.info