
K nearest-neighbor, KNN

A similarity-related approach is k-nearest neighbor (KNN) analysis, based on the premise that similar compounds have similar properties. Compounds are distributed in multidimensional space according to their values for a number of selected properties; the toxicity of a compound of interest is then taken as the mean of the toxicities of a number (k) of nearest neighbors. Cronin et al. [65] used KNN to model the toxicity of 91 heterogeneous organic chemicals to the alga Chlorella vulgaris, but found it no better than MLR. [Pg.481]
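
A minimal sketch of this prediction rule, assuming a numeric descriptor matrix and Euclidean distance (the function name, the choice k = 3, and the data layout are illustrative assumptions, not taken from ref. [65]):

import numpy as np

def knn_predict_toxicity(X_train, y_train, x_query, k=3):
    # X_train: (n, d) array of selected property values; y_train: (n,) toxicities.
    # Distance from the query compound to every training compound
    # in the selected-property space.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]    # indices of the k most similar compounds
    return y_train[nearest].mean()     # toxicity = mean of the neighbors' toxicities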

Similarity Distance: In the case of a nonlinear method such as the k Nearest Neighbor (kNN) QSAR [41], since the models are based on chemical similarity calculations, a large similarity distance could signal query compounds that are too dissimilar to the... [Pg.442]
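
The excerpt is cut off, but the check it describes, flagging query compounds whose similarity distance to the training set is too large, can be sketched as follows (the threshold value and all names are assumptions for illustration, not part of ref. [41]):

import numpy as np

def outside_domain(X_train, x_query, k=3, threshold=1.5):
    # Mean Euclidean distance from the query to its k nearest training
    # compounds; a large value suggests the query lies outside the
    # model's applicability domain and its prediction may be unreliable.
    dists = np.sort(np.linalg.norm(X_train - x_query, axis=1))[:k]
    return dists.mean() > threshold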

In the following discussion, three types of air pollutant analytical data will be examined using principal component analysis and the K-Nearest Neighbor (KNN) procedure. A set of interlaboratory comparison data from X-ray emission trace element analysis, data from a comparison of two methods for determining lead in gasoline, and results from gas chromatography/mass spectrometry analysis for volatile organic compounds in ambient air will be used as illustrations. [Pg.108]
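
A combined PCA/KNN workflow of this kind can be sketched with scikit-learn. The random data below are placeholders standing in for the interlaboratory measurements, and every parameter value is an assumption:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))       # placeholder, e.g. trace-element concentrations
y = rng.integers(0, 2, size=30)    # placeholder class labels (e.g. laboratory A/B)

# Autoscale, project onto the leading principal components,
# then classify by majority vote among the 3 nearest neighbors.
model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print(model.predict(X[:5]))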

The nearest-neighbor (NN) approach relates the property of a query compound, Yq, to the properties of the k nearest-neighbor (kNN) compounds selected from a database. The general model is... [Pg.21]
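
The excerpt breaks off before the equation itself. A commonly used form of this general model, stated here as an assumption rather than the source's exact expression, is

\hat{Y}_q = \sum_{i=1}^{k} w_i Y_i , \qquad \sum_{i=1}^{k} w_i = 1 ,

where Y_i is the property value of the i-th nearest neighbor and w_i is its weight (w_i = 1/k gives a simple average; distance-dependent weights give closer neighbors more influence).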

Supervised learning methods:
- multivariate analysis of variance and discriminant analysis (MVDA)
- k nearest neighbors (kNN)
- linear learning machine (LLM)
- BAYES classification
- soft independent modeling of class analogy (SIMCA)
- UNEQ classification
Purpose: quantitative demarcation of a priori classes, relationships between class properties and variables [Pg.7]

K-nearest neighbors (KNN) is a classification method that is unsupervised only in the sense that class membership is not used to train the technique (there is no explicit fitting step), yet it makes predictions of class membership. The principle behind KNN is an appealingly commonsense one: objects that are close together in the descriptor space are expected to show similar behavior in their response. Figure 7.5 shows a two-dimensional representation of the way that KNN operates. [Pg.171]

For illustration, we shall consider here one of the nonlinear variable selection methods that adopts a k-Nearest Neighbor (kNN) principle to QSAR [kNN-QSAR (49)]. Formally, this method implements the active analog principle that lies at the foundation of modern medicinal chemistry. The kNN-QSAR method employs multiple topological (2D) or topographical (3D) descriptors of chemical structures and predicts the biological activity of any compound as the average activity of its k most similar molecules. This method can be used to analyze the structure-activity relationships (SAR) of a large number of compounds where a nonlinear SAR may predominate. [Pg.62]

The k-Nearest Neighbors (kNN) algorithm predicts the class of a point in a feature space from the known attributes of its k nearest neighboring points in that space. For instance, say we want to predict tissue status based on a set of independent bioimpedance features, such as the Cole parameters. Using a dataset of measurements with known tissue states, the kNN algorithm first constructs class-labeled vectors in a multidimensional space with one dimension for each feature. The class of a new measurement is then predicted by the majority of class memberships among the k nearest neighbors, ranked by their distance (usually Euclidean) to the new point. [Pg.386]
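
A minimal sketch of this majority-vote procedure, assuming numeric feature vectors and Euclidean distance (all names and the choice k = 5 are illustrative; a real bioimpedance study would use measured Cole parameters as the features):

import numpy as np
from collections import Counter

def knn_classify(X_train, labels, x_new, k=5):
    # Euclidean distance from the new measurement to every
    # class-labeled vector in the feature space.
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    # Majority vote among the k nearest neighbors.
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]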

A) The k-nearest neighbor (KNN) method
B) SIMCA method
C) Quadratic Bayesian classification
D) Linear Bayesian classification
E) ALLOC method [Pg.57]

In the study of Moss et al. [34], several machine learning methods, including K-nearest-neighbor (KNN) regression, single-layer... [Pg.359]

Classification by the Fisher method is also clear-cut, but both the leave-one-out (LOO) cross-validation of the Fisher method and the results of the K-Nearest Neighbor (KNN) methods (k = 1, 3, or 5) produce some misclassifications, whereas the LOO cross-validation of SVC (linear kernel, C = 100) shows a 100% rate of correctness. [Pg.226]
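
A LOO comparison of this kind can be sketched with scikit-learn. The synthetic data below are placeholders (the original study's dataset and its Fisher-method implementation are not reproduced here), so the printed accuracies will not match the reported results:

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))      # placeholder descriptor matrix
y = (X[:, 0] > 0).astype(int)     # placeholder class labels

models = [("kNN, k=1", KNeighborsClassifier(n_neighbors=1)),
          ("kNN, k=3", KNeighborsClassifier(n_neighbors=3)),
          ("kNN, k=5", KNeighborsClassifier(n_neighbors=5)),
          ("SVC, linear kernel, C=100", SVC(kernel="linear", C=100))]

for name, clf in models:
    # Each sample is left out once and predicted from all the others.
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: LOO accuracy = {acc:.2f}")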

