Big Chemical Encyclopedia



K-nearest neighbor method

Figure 2.7 Results of image segmentation by the K-nearest neighbor method (hard clustering approach) for a Raman emulsion image.
Hoffman, B., Cho, S.J., Zheng, W., Wyrick, S.D., Nichols, D.E., Mailman, R.B. and Tropsha, A. (1999) Quantitative structure-activity relationship modeling of dopamine D-1 antagonists using comparative molecular field analysis, genetic algorithms-partial least-squares, and K nearest neighbor methods. J. Med. Chem., 42, 3217-3226. [Pg.1068]

Then the next step consists of applying multivariate statistical methods to find key features relating molecules, descriptors, and anticancer activity. The methods include principal component analysis (PCA), hierarchical cluster analysis (HCA), the K-nearest neighbor method (KNN), soft independent modeling of class analogy (SIMCA), and stepwise discriminant analysis (SDA). The analyses were performed on a data matrix of dimension 25 rows (molecules) x 1700 columns (descriptors), not shown for brevity. For further study of the methodology applied, standard books are available such as (Varmuza & Filzmoser, 2009) and (Manly, 2004). [Pg.188]
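As a hedged illustration of the preprocessing and PCA step described above: columns of a data matrix are typically autoscaled (mean-centered and scaled to unit variance) before the principal components are extracted. The sketch below uses a tiny invented 4 x 3 matrix (the study above used 25 x 1700) and finds the first principal component by power iteration on the correlation matrix; it is a minimal didactic version, not the authors' actual procedure.

```python
# Minimal PCA sketch: autoscale the columns, then extract the first
# principal component by power iteration. Data matrix is invented.
import math

def autoscale(X):
    # Center each column to mean 0 and scale to unit standard deviation
    n, m = len(X), len(X[0])
    out = [[0.0] * m for _ in range(n)]
    for j in range(m):
        col = [row[j] for row in X]
        mean = sum(col) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in col) / (n - 1))
        for i in range(n):
            out[i][j] = (X[i][j] - mean) / sd
    return out

def first_pc(X, iters=200):
    # Power iteration: repeatedly apply the covariance matrix to a vector;
    # the vector converges to the leading eigenvector (the PC1 loadings).
    n, m = len(X), len(X[0])
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
            for b in range(m)] for a in range(m)]
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(m)) for a in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Columns 1 and 2 are strongly correlated; column 3 is near-constant noise,
# so PC1 loads heavily on the first two descriptors.
X = [[1.0, 2.0, 0.5], [2.0, 4.1, 0.4], [3.0, 6.2, 0.6], [4.0, 7.9, 0.5]]
loadings = first_pc(autoscale(X))
```

On real descriptor matrices one would of course use a library implementation (e.g. SVD-based PCA) rather than power iteration, but the structure of the computation is the same.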

The supervised pattern recognition methods include K nearest neighbor method (KNN), principal component analysis (PCA), Fisher... [Pg.191]

Ajmani, S., Jadhav, K., & Kulkarni, S. A. (2006). Three-dimensional QSAR using the k-nearest neighbor method and its interpretation. J. Chem. Inf. Model., 46, 24. [Pg.1336]

Medina-Franco, J. L., Golbraikh, A., Oloff, S., Castillo, R., Tropsha, A. (2005). Quantitative structure-activity relationship analysis of pyridinone HIV-1 reverse transcriptase inhibitors using the k nearest neighbor method and QSAR-based database mining. J. Comput.-Aided Mol. Des., 19, 229. [Pg.1339]

In the original kNN method, an unknown object (molecule) is classified according to the majority of the class memberships of its K nearest neighbors in the training set (Fig. 13.4). The nearness is measured by an appropriate distance metric (a molecular similarity measure as applied to the classification of molecular structures). It is implemented simply as follows ... [Pg.314]
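The majority-vote rule described above can be sketched in a few lines. The descriptor vectors and class labels below are invented for illustration, and Euclidean distance stands in for whatever molecular similarity measure is actually used.

```python
# Minimal kNN classifier sketch (toy data; Euclidean distance metric).
# The unknown is assigned the majority class of its K nearest neighbors.
import math
from collections import Counter

def knn_classify(unknown, training, k=3):
    """training: list of (descriptor_vector, class_label) pairs."""
    # Rank training objects by distance to the unknown object
    dists = sorted((math.dist(unknown, x), label) for x, label in training)
    # Majority vote among the K nearest neighbors
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy example: two well-separated classes in a 2D descriptor space
training = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
            ((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((1.1, 0.9), "B")]
print(knn_classify((0.15, 0.1), training, k=3))  # -> A
```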

When applied to QSAR studies, the activity of molecule u is calculated simply as the average activity of the K nearest neighbors of molecule u. An optimal K value is selected by optimizing the classification of a test set of samples, or by leave-one-out cross-validation. Many variations of the kNN method have been proposed in the past, and new and fast algorithms have continued to appear in recent years. The automated variable selection kNN QSAR technique optimizes the selection of descriptors to obtain the best models [20]. [Pg.315]
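A minimal sketch of this kNN-QSAR scheme, with leave-one-out cross-validation used to pick K, might look as follows. The one-descriptor data set and activity values are invented for the example; a real study would use many descriptors and a proper variable-selection step.

```python
# Sketch of kNN-QSAR: predicted activity is the average activity of the
# K nearest neighbors; K is chosen by leave-one-out cross-validation.
import math

def knn_predict(query, train_X, train_y, k):
    # Rank training molecules by distance, average the K nearest activities
    order = sorted(range(len(train_X)),
                   key=lambda i: math.dist(query, train_X[i]))
    return sum(train_y[i] for i in order[:k]) / k

def loo_rmse(X, y, k):
    # Leave-one-out: predict each molecule from all the others
    errs = []
    for i in range(len(X)):
        Xi, yi = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        errs.append((knn_predict(X[i], Xi, yi, k) - y[i]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

# Toy data: one descriptor, roughly linear activity trend
X = [(1.0,), (2.0,), (3.0,), (4.0,), (5.0,), (6.0,)]
y = [1.0, 1.1, 1.3, 1.4, 1.6, 1.7]
best_k = min(range(1, 5), key=lambda k: loo_rmse(X, y, k))
```

The `min` over candidate K values implements the "optimal K by cross-validation" idea from the passage above in its simplest possible form.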

Next, supervised-learning pattern recognition methods were applied to the data set. The 111 bonds from these 28 molecules were classified as either breakable (36) or non-breakable (75), and a stepwise discriminant analysis showed that three variables, out of the six mentioned above, were particularly significant: resonance effect, R; bond polarity, Qa; and bond dissociation energy, BDE. With these three variables 97.3% of the non-breakable bonds, and 86.1% of the breakable bonds, could be correctly classified. This says that chemical reactivity as given by the ease of heterolysis of a bond is well defined in the space determined by just those three parameters. The same conclusion can be drawn from the results of a K-nearest neighbor analysis: with k assuming any value between one and ten, 87 to 92% of the bonds could be correctly classified. [Pg.273]

Similarity Distance In the case of a nonlinear method such as the k Nearest Neighbor (kNN) QSAR [41], since the models are based on chemical similarity calculations, a large similarity distance could signal query compounds that are too dissimilar to the... [Pg.442]

Shen, M., Letiran, A., Xiao, Y., Golbraikh, A., Kohn, H., Tropsha, A. Quantitative structure-activity relationship analysis of functionalized amino acid anticonvulsant agents using k nearest neighbor and simulated annealing PLS methods. J. Med. Chem. 2002, 45, 2811-2823. [Pg.455]

In the following discussion, three types of air pollutant analytical data will be examined using principal component analysis and the K-Nearest Neighbor (KNN) procedure. A set of interlaboratory comparison data from X-ray emission trace element analysis, data from a comparison of two methods for determining lead in gasoline, and results from gas chromatography/mass spectrometry analysis for volatile organic compounds in ambient air will be used as illustrations. [Pg.108]

In the distance methods, class assignment is based on the distance of the unknown to its k-nearest neighbors. Since the distances of the training set objects from each other are known, one can also determine whether an unknown is not a member of any class in the training set. [Pg.249]
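One simple way to make this membership check concrete is sketched below: compare the unknown's mean distance to its k nearest training objects against the distribution of such distances within the training set itself. The data, the k value, and the z-score threshold are all invented for illustration; real applicability-domain criteria vary.

```python
# Hedged sketch of a distance-based membership check: an unknown whose
# mean k-nearest-neighbor distance is unusually large relative to the
# training set's own neighbor distances is flagged as a non-member.
import math

def mean_knn_dist(point, pool, k):
    # Mean distance from `point` to its k nearest objects in `pool`
    dists = sorted(math.dist(point, x) for x in pool if x != point)
    return sum(dists[:k]) / k

def in_domain(unknown, training, k=3, z=2.0):
    # Distribution of neighbor distances within the training set itself
    d_train = [mean_knn_dist(x, training, k) for x in training]
    mean = sum(d_train) / len(d_train)
    sd = math.sqrt(sum((d - mean) ** 2 for d in d_train)
                   / (len(d_train) - 1))
    # In-domain if the unknown's neighbor distance is not unusually large
    return mean_knn_dist(unknown, training, k) <= mean + z * sd

training = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (0.3, 0.2), (0.2, 0.4)]
print(in_domain((0.15, 0.2), training))  # near the training cloud -> True
print(in_domain((5.0, 5.0), training))   # far outside it -> False
```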

Supervised methods rely on some prior training of the system with objects known to belong to the class they define. Such methods can be of the discriminant or modeling types.11 Discriminant methods split the pattern space into as many regions as the classes encompassed by the training set and establish bounds that are shared by the spaces. These methods always classify an unknown sample as a specific class. The most common discriminant methods include discriminant analysis (DA),12 the K-nearest neighbor... [Pg.366]

Supervised learning methods - multivariate analysis of variance and discriminant analysis (MVDA), k nearest neighbors (kNN), linear learning machine (LLM), BAYES classification, soft independent modeling of class analogy (SIMCA), UNEQ classification - allow quantitative demarcation of a priori classes and relationships between class properties and variables. [Pg.7]

K-nearest neighbors (KNN) is the name of a classification method that requires no explicit training step: class membership is not used to fit a model, but is consulted directly when predictions of class membership are made. The principle behind KNN is an appealingly common-sense one: objects that are close together in the descriptor space are expected to show similar behavior in their response. Figure 7.5 shows a two-dimensional representation of the way that KNN operates. [Pg.171]


See other pages where K-nearest neighbor method is mentioned: [Pg.366]    [Pg.57]    [Pg.61]    [Pg.617]    [Pg.150]    [Pg.58]    [Pg.351]    [Pg.193]    [Pg.168]    [Pg.177]    [Pg.1097]    [Pg.195]    [Pg.450]    [Pg.324]    [Pg.47]    [Pg.106]    [Pg.107]    [Pg.112]    [Pg.113]    [Pg.116]    [Pg.91]    [Pg.403]    [Pg.197]    [Pg.337]    [Pg.205]    [Pg.13]    [Pg.199]    [Pg.93]
See also in sourсe #XX -- [ Pg.176 , Pg.255 ]

See also in sourсe #XX -- [ Pg.193 ]

See also in sourсe #XX -- [ Pg.340 ]

See also in sourсe #XX -- [ Pg.57 ]





© 2024 chempedia.info