Big Chemical Encyclopedia


Nearest neighbors methods

In this study, a machine learning model system was developed to classify cell line chemosensitivity exclusively on the basis of proteomic profiling. Using reverse-phase protein lysate microarrays, protein expression levels were measured with 52 antibodies in a panel of 60 human cancer cell lines (NCI-60). The model system combined several well-known algorithms, including random forests, Relief, and nearest neighbor methods, to construct the protein expression-based chemosensitivity classifiers. [Pg.293]
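The two-stage design described above — score features for relevance, then classify on the selected features — can be sketched in pure Python. The simplified Relief scorer, the toy "expression" profiles, and the class labels below are illustrative assumptions, not the study's actual data or pipeline:

```python
import math

def dist(a, b, feats):
    """Euclidean distance restricted to the feature indices in `feats`."""
    return math.sqrt(sum((a[f] - b[f]) ** 2 for f in feats))

def relief_scores(X, y):
    """Simplified Relief: for each sample, find its nearest same-class
    neighbor (hit) and nearest other-class neighbor (miss); a feature
    scores well when it differs more from the miss than from the hit."""
    n_feat = len(X[0])
    feats = range(n_feat)
    W = [0.0] * n_feat
    for i, (xi, yi) in enumerate(zip(X, y)):
        hits = [j for j in range(len(X)) if j != i and y[j] == yi]
        misses = [j for j in range(len(X)) if y[j] != yi]
        h = min(hits, key=lambda j: dist(xi, X[j], feats))
        m = min(misses, key=lambda j: dist(xi, X[j], feats))
        for f in feats:
            W[f] += abs(xi[f] - X[m][f]) - abs(xi[f] - X[h][f])
    return W

def knn_predict(x, X, y, feats, k=1):
    """k-nearest-neighbor majority vote on the selected features."""
    order = sorted(range(len(X)), key=lambda j: dist(x, X[j], feats))
    votes = [y[j] for j in order[:k]]
    return max(set(votes), key=votes.count)

# Toy profiles: feature 0 tracks the class, feature 1 is noise
X = [(0.1, 5.0), (0.2, 1.0), (0.9, 4.8), (1.0, 1.2)]
y = ["sensitive", "sensitive", "resistant", "resistant"]
W = relief_scores(X, y)
top = [max(range(len(W)), key=lambda f: W[f])]  # keep the best-scoring feature
print(top, knn_predict((0.15, 3.0), X, y, feats=top))
```

On this toy data Relief assigns a positive weight to the informative feature and a negative weight to the noise feature, so the classifier uses only the former.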

Medina-Franco, J. L., Golbraikh, A., Oloff, S., Castillo, R., Tropsha, A. (2005) Quantitative structure-activity relationship analysis of pyridinone HIV-1 reverse transcriptase inhibitors using the nearest neighbor method and QSAR-based database mining. J Comput Aided Mol Des 19, 229-242. [Pg.131]

There are also disadvantages of the nearest-neighbor methods. First, they (like neural networks) do not simplify the distribution of objects in parameter... [Pg.197]

Yi TM, Lander ES. Protein secondary structure prediction using nearest neighbor methods. J. Mol. Biol. 1993;232:1117-1129. de Dios AC, Pearson JG, Oldfield E. Secondary and tertiary structural effects on protein NMR chemical shifts: an ab initio approach. Science 1993;260:1491-1496. [Pg.27]

Figure 2.7 Results of image segmentation by the K-nearest neighbor method (hard clustering approach) for a Raman emulsion image.
NMR spectra were also characterized for → similarity/diversity analysis [Zuperl, Pristovsek et al., 2007] by → graph invariants obtained using the sequential nearest neighbor method proposed to characterize → proteomics maps [Randic, Novic et al., 2005]. Moreover, selected chemical shifts were directly used as molecular descriptors for modeling lipophilicity of alcohols [Khadikar, Sharma et al., 2005b]. [Pg.714]

Hoffman, B., Cho, S.J., Zheng, W., Wyrick, S.D., Nichols, D.E., Mailman, R.B. and Tropsha, A. (1999) Quantitative structure-activity relationship modeling of dopamine D-1 antagonists using comparative molecular field analysis, genetic algorithms-partial least-squares, and K nearest neighbor methods. J. Med. Chem., 42, 3217-3226. [Pg.1068]

The next step consists of applying multivariate statistical methods to find key features relating molecules, descriptors, and anticancer activity. The methods include principal component analysis (PCA), hierarchical cluster analysis (HCA), the K-nearest neighbor method (KNN), soft independent modeling of class analogy (SIMCA), and stepwise discriminant analysis (SDA). The analyses were performed on a data matrix of dimension 25 rows (molecules) × 1700 columns (descriptors), not shown for brevity. For a further study of the methodology applied, standard books are available, such as (Varmuza & Filzmoser, 2009) and (Manly, 2004). [Pg.188]

A simple nonparametric classification method is the nearest neighbor method, introduced by Fix and Hodges in 1951. To classify an unknown object, its distance to every object in the training set, usually the Euclidean distance, is computed. The minimum distance is selected, and the unknown object is assigned to the class of that nearest training object.
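The classification rule above fits in a few lines of Python. The toy two-class descriptor data is an illustrative assumption:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nn_classify(unknown, training, labels):
    """Assign the unknown object the class of its single nearest neighbor."""
    distances = [euclidean(unknown, obj) for obj in training]
    nearest = min(range(len(training)), key=lambda i: distances[i])
    return labels[nearest]

# Two small classes in a 2-D descriptor space
training = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 1.2)]
labels   = ["A", "A", "B", "B"]
print(nn_classify((0.1, 0.2), training, labels))  # -> A
```

Extending this to k > 1 neighbors simply replaces the single minimum with a majority vote over the k smallest distances.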

A great deal of interest exists in methods for variable selection as well as for model evaluation, which are actually two sides of the same coin. Bayesian neural networks include a procedure called automatic relevance determination (ARD), allowing for the identification of important variables. A k-nearest neighbor method for variable selection has been applied successfully to problems of biological activity and metabolic stability. Other...
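One simple way to couple variable selection to a nearest neighbor model, shown here as an illustrative sketch rather than the published method, is greedy forward selection scored by leave-one-out 1-NN accuracy:

```python
def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of a 1-NN classifier restricted to `feats`."""
    correct = 0
    for i in range(len(X)):
        others = [j for j in range(len(X)) if j != i]
        nearest = min(others,
                      key=lambda j: sum((X[i][f] - X[j][f]) ** 2 for f in feats))
        correct += y[nearest] == y[i]
    return correct / len(X)

def forward_select(X, y, n_keep):
    """Greedily add the variable that most improves leave-one-out accuracy."""
    chosen = []
    remaining = list(range(len(X[0])))
    while remaining and len(chosen) < n_keep:
        best = max(remaining, key=lambda f: loo_accuracy(X, y, chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy data: feature 0 is informative, feature 1 is random noise
X = [(0.0, 0.7), (0.1, 0.2), (1.0, 0.6), (0.9, 0.1)]
y = [0, 0, 1, 1]
print(forward_select(X, y, n_keep=1))  # -> [0]
```

This illustrates the "two sides of the same coin" point: the same leave-one-out evaluation that measures model quality also drives the variable selection.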

Since nearest neighbor methods are based on similarity measured by some distance metric, variable scaling and the units used to characterize the data can influence the results. Variables with the largest amount of scatter (greatest variance) contribute most strongly to the Euclidean distance, so in practice it may be advisable to standardize the variables before performing classification analysis. [Pg.586]
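The scaling effect is easy to demonstrate. In the small made-up data matrix below, the first variable spans thousands of units and swamps the Euclidean distance until both columns are standardized (autoscaled):

```python
import math
import statistics

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def autoscale(X):
    """Column-wise standardization: subtract the mean, divide by the std dev."""
    cols = list(zip(*X))
    means = [statistics.mean(c) for c in cols]
    stds = [statistics.stdev(c) for c in cols]
    return [[(x - m) / s for x, m, s in zip(row, means, stds)] for row in X]

# Variable 1 spans thousands of units, variable 2 spans about one unit
X = [[1000.0, 0.1], [2000.0, 0.9], [3000.0, 0.2], [4000.0, 0.8]]
raw_d = euclidean(X[0], X[1])     # dominated entirely by the first variable
Z = autoscale(X)
scaled_d = euclidean(Z[0], Z[1])  # both variables now contribute comparably
print(round(raw_d, 1), round(scaled_d, 2))
```

Before scaling, the second variable's 0.8-unit difference is invisible next to the 1000-unit difference in the first; after autoscaling, it actually contributes the larger share of the distance.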

Due to the variety of case specifications, the two retrieval techniques are combined. First, using the inductive method, a set of appropriate cases is retrieved, and only the cases in this set are considered further. Next, the cases in the set are ranked according to their similarity to the target case using the nearest neighbor method. [Pg.114]

The similarity between the target case and all the source cases is calculated using the nearest neighbor method. The evaluation of the global similarity between the target and a source case is based on the computation of local similarities. A local similarity deals with a single attribute and takes a value in the interval [0, 1]. From the local similarities, the global similarity can then be derived as ... [Pg.115]
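The aggregation formula itself is not shown above; a common choice, used here purely as an assumption, is a weighted average of the local similarities, with a 1 − |a − b|/range local similarity for numeric attributes:

```python
def local_sim(a, b, attr_range):
    """Local similarity for one numeric attribute: 1 - |a-b|/range, in [0, 1]."""
    return max(0.0, 1.0 - abs(a - b) / attr_range)

def global_sim(target, source, ranges, weights):
    """Global similarity as a weighted average of local similarities
    (one common aggregation; the exact formula in the source is elided)."""
    sims = [local_sim(t, s, r) for t, s, r in zip(target, source, ranges)]
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

# Hypothetical target case and two source cases with two attributes each
target  = (10.0, 0.5)
sources = [(12.0, 0.4), (30.0, 0.9)]
ranges  = (40.0, 1.0)   # observed range of each attribute
weights = (1.0, 1.0)    # equal attribute importance
ranked = sorted(sources,
                key=lambda s: global_sim(target, s, ranges, weights),
                reverse=True)
print(ranked[0])  # -> (12.0, 0.4)
```

Sorting the source cases by this global similarity gives exactly the nearest-neighbor ranking described in the retrieval step above.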










© 2024 chempedia.info