Tree methods feature trees

Descriptor requirements present a significant difference between MP and decision tree methods such as RP. Whereas two-state descriptors are not suitable for MP, these types of descriptors are typically required for decision tree algorithms because at each branch the presence or absence of specific feature(s) must be detected in order to recursively divide a molecular dataset. [Pg.298]
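To make the recursive-division idea concrete, the following minimal sketch (with an invented toy dataset) grows a small tree over two-state fingerprints: at each node it picks the single bit whose presence or absence best separates actives from inactives, and splits the set on that feature.

```python
# Minimal sketch of recursive partitioning on two-state (presence/absence)
# descriptors. Molecules are represented as binary fingerprints; at each node
# the bit that best separates actives from inactives is chosen and the set is
# split on presence vs. absence of that feature. All data here are illustrative.

def gini(labels):
    """Gini impurity of a list of 0/1 activity labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(fingerprints, labels):
    """Return the bit whose presence/absence split gives the lowest weighted impurity."""
    n_bits = len(fingerprints[0])
    best_bit, best_score = None, float("inf")
    for bit in range(n_bits):
        present = [y for fp, y in zip(fingerprints, labels) if fp[bit]]
        absent  = [y for fp, y in zip(fingerprints, labels) if not fp[bit]]
        if not present or not absent:
            continue  # the split must actually divide the set
        score = (len(present) * gini(present) + len(absent) * gini(absent)) / len(labels)
        if score < best_score:
            best_bit, best_score = bit, score
    return best_bit

def grow_tree(fingerprints, labels, depth=0, max_depth=3):
    """Recursively divide the dataset on presence/absence of single features."""
    bit = best_split(fingerprints, labels)
    if bit is None or depth >= max_depth or len(set(labels)) == 1:
        return {"leaf": True, "p_active": sum(labels) / len(labels)}
    left  = [(fp, y) for fp, y in zip(fingerprints, labels) if fp[bit]]
    right = [(fp, y) for fp, y in zip(fingerprints, labels) if not fp[bit]]
    return {
        "leaf": False,
        "bit": bit,
        "present": grow_tree(*zip(*left), depth + 1, max_depth),
        "absent":  grow_tree(*zip(*right), depth + 1, max_depth),
    }

# Toy dataset: 4-bit structural-key fingerprints with activity labels.
fps    = [(1, 0, 1, 0), (1, 1, 0, 0), (0, 0, 1, 1), (0, 1, 0, 1)]
active = [1, 1, 0, 0]
print(grow_tree(fps, active))
```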

This subset was further investigated using the slower modeling methods to try to identify potential actives, known as plausible hits. An example of a molecule selected from the results of a docking experiment is shown in Fig. 4.8. This molecule had a similarity score of 0.93 to an active and is shown docked with the typical kinase inhibitor binding pattern. Neither the active nor the plausible hit is drug-like from a medicinal chemistry perspective, but this example demonstrates well how the Feature Tree descriptor captures similarity between two molecules. [Pg.95]

The Feature Tree fragment space screening concept was exemplified for a number of known actives across different target classes in the paper introducing the general concept [9]. Starting from a single active compound as the query structure, the ability of the search method to construct and identify so-called... [Pg.105]

This phase classifies chemicals passing from the previous phase into active and inactive categories. Three structural alerts (Section IV.B), seven pharmacophore queries (Section IV.C), and the Decision Tree classification model (Section IV.D) were used in parallel to discriminate active from inactive chemicals. To ensure the lowest false negative rate in this phase, a chemical predicted to be active by any of these 11 models is subsequently evaluated in Phase III, whereas only those predicted to be inactive by all these models are eliminated from further evaluation. Since the structural alert, pharmacophore, and Decision Tree methods incorporate and weight differently the various structural features that endow a chemical with the ability to bind the ER, the combined outputs derived... [Pg.312]
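The voting logic described above can be summarized in a short sketch. The model functions below are placeholders invented for illustration, not the published structural alerts, pharmacophore queries, or Decision Tree model; only the OR-combination of the 11 outputs follows the text.

```python
# Sketch of the Phase II logic described above: a chemical advances to Phase III
# if any of the 11 models (3 structural alerts, 7 pharmacophore queries, 1
# decision tree) predicts it to be active; it is eliminated only if all 11 agree
# it is inactive. The model functions here are stand-ins, not the published models.

def run_phase_two(chemical, structural_alerts, pharmacophore_queries, decision_tree):
    """Return 'phase III' if any model flags the chemical as active, else 'eliminated'."""
    models = list(structural_alerts) + list(pharmacophore_queries) + [decision_tree]
    assert len(models) == 11, "3 alerts + 7 pharmacophores + 1 decision tree"
    # OR-combination keeps the false-negative rate as low as possible:
    # one positive vote is enough to retain the chemical for further evaluation.
    if any(model(chemical) for model in models):
        return "phase III"
    return "eliminated"

# Illustrative stand-in models: each maps a chemical name to True (active) / False (inactive).
alerts         = [lambda c, kw=kw: kw in c for kw in ("phenol", "stilbene", "steroid")]
pharmacophores = [lambda c, i=i: len(c) % 7 == i for i in range(7)]   # dummy queries
tree_model     = lambda c: c.endswith("ol")                            # dummy classifier

print(run_phase_two("diethylstilbestrol", alerts, pharmacophores, tree_model))
```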

Several methods are suited for navigation within such huge chemical spaces [56-59]. An extension of the above-mentioned feature tree descriptor makes it possible to search large virtual combinatorial libraries without enumeration [38]. [Pg.74]

The so-called feature tree fragment space (FTree-FS) method uses dynamic programming and can identify the molecule in the fragment space with the highest similarity to the query. The efficiency of searches in fragment spaces, compared with searches in the corresponding enumerated product spaces, can be easily shown. [Pg.74]
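A back-of-the-envelope comparison illustrates the point. The fragment counts below are invented and not taken from the cited work; the sketch only contrasts the multiplicative size of an enumerated product space with the additive number of fragments a fragment-space search has to touch.

```python
# Back-of-the-envelope illustration of why searching a fragment space directly
# is cheaper than enumerating the corresponding product space. The fragment
# counts are invented for illustration.

fragments_per_position = [1_000, 1_000, 1_000]   # e.g. three attachment-point positions

# Enumerated product space: every combination of one fragment per position.
product_space_size = 1
for n in fragments_per_position:
    product_space_size *= n

# A fragment-space search such as FTree-FS only ever compares against the
# individual fragments (plus bookkeeping from the dynamic program), so the
# number of building blocks it touches grows additively, not multiplicatively.
fragments_touched = sum(fragments_per_position)

print(f"products to enumerate : {product_space_size:,}")   # 1,000,000,000
print(f"fragments to search   : {fragments_touched:,}")    # 3,000
```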

A compromise between 2D structural and 3D pharmacophore descriptors is offered by techniques that implement reduced graph methods, whereby pharmacophoric elements are encoded in the 2D structure, thus gaining the benefit of a description that is more biologically relevant, whilst not adding the complication and noise of 3D conformations. Examples are the CATS, Reduced Graph, and Feature Trees approaches. [Pg.370]
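As a rough illustration of the reduced-graph idea, the sketch below (assuming RDKit is available) labels atoms of a 2D structure with coarse pharmacophoric types via SMARTS matching. The patterns are deliberately simplified and are not the definitions used by CATS, Reduced Graphs, or Feature Trees.

```python
# Minimal sketch of the reduced-graph idea: atoms of the 2D structure are mapped
# onto pharmacophoric node types (H-bond donor/acceptor, aromatic, ...) instead
# of element types. Assumes RDKit is installed; the SMARTS patterns are
# simplified illustrations only.
from rdkit import Chem

PHARMACOPHORE_SMARTS = {
    "donor":    "[#7,#8;H1,H2]",                 # N/O bearing at least one hydrogen
    "acceptor": "[#7,#8;!$([N+]);!$([O+])]",     # neutral N/O
    "aromatic": "a",                             # aromatic atom
}

def pharmacophore_nodes(smiles):
    """Label atoms of a molecule with coarse pharmacophoric types."""
    mol = Chem.MolFromSmiles(smiles)
    nodes = {}
    for label, smarts in PHARMACOPHORE_SMARTS.items():
        pattern = Chem.MolFromSmarts(smarts)
        for match in mol.GetSubstructMatches(pattern):
            for atom_idx in match:
                nodes.setdefault(atom_idx, set()).add(label)
    return nodes

# Example: aspirin. Each matched atom index carries one or more pharmacophore labels;
# a full reduced-graph method would additionally collapse rings and connected groups
# into single nodes while keeping the 2D connectivity between them.
print(pharmacophore_nodes("CC(=O)Oc1ccccc1C(=O)O"))
```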

The key part of the ASF is the design of a partial-synthesis function that can take any feature combination and map it onto the chosen acoustic space. The most common way of doing this is to use the decision-tree method, in more or less the same way as in HMM approaches (see Section 15.1.9). Since context accounts for a significant level of variance within a phone model, using separate phone models for each possible context greatly reduces the overall variance of the models. The problem faced, however, is that many of the required models have few or no observations in the training data, yet their parameters still have to be estimated. The similarity to our problem can now be seen if... [Pg.494]
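A minimal sketch of the decision-tree mechanism described here: yes/no questions about the phonetic context route any context, including ones never seen in training, to a leaf whose parameters are shared by all contexts pooled there. The questions and contexts below are invented for illustration.

```python
# Sketch of decision-tree clustering of context-dependent phone models: yes/no
# questions about the context route ANY context, including ones with few or no
# training observations, to a leaf whose parameters are estimated from all the
# contexts pooled there. Questions and contexts are invented for illustration.

TREE = {
    "question": lambda ctx: ctx["left"] in {"m", "n", "ng"},   # "is the left neighbour nasal?"
    "yes": {
        "question": lambda ctx: ctx["right"] in {"i", "e"},    # "is the right neighbour a front vowel?"
        "yes": {"leaf": "model_A"},
        "no":  {"leaf": "model_B"},
    },
    "no": {"leaf": "model_C"},
}

def select_model(tree, context):
    """Descend the tree until a leaf (a shared, tied model) is reached."""
    node = tree
    while "leaf" not in node:
        node = node["yes"] if node["question"](context) else node["no"]
    return node["leaf"]

# Even a context never seen in training falls into some leaf and so inherits
# parameters estimated from the data pooled at that leaf.
print(select_model(TREE, {"left": "n", "right": "i"}))   # model_A
print(select_model(TREE, {"left": "zh", "right": "u"}))  # model_C (unseen context)
```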

Extending feature trees with attributes, as described by Benavides [26], increases the expressivity of feature models (see Fig. 17.5). Attributes provide a way to model additional information within the feature model, and with appropriate reasoning methods they can be used in the analysis process and the visualization. Examples of such attributes are cost, weight, and speed. Reasoning procedures exist for analyzing attributed feature trees and for optimizing with respect to the specified attributes. [Pg.500]
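A minimal sketch of an attributed feature tree, using cost and weight as example attributes; the feature names, structure, and values are invented, and the aggregation shown is only one simple form of reasoning over attributes.

```python
# Minimal sketch of an attributed feature tree: each feature node can carry
# attributes (here cost and weight), and a simple reasoning step aggregates them
# over a chosen configuration. Feature names and attribute values are invented.

from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    cost: float = 0.0        # attribute: cost
    weight: float = 0.0      # attribute: weight
    children: list = field(default_factory=list)

def total(feature, selected, attr):
    """Sum an attribute over the selected features in the (sub)tree."""
    value = getattr(feature, attr) if feature.name in selected else 0.0
    return value + sum(total(child, selected, attr) for child in feature.children)

# Example feature model with attributed (optional) child features.
root = Feature("car", children=[
    Feature("engine", cost=5000, weight=150, children=[
        Feature("turbo", cost=1200, weight=20),
    ]),
    Feature("sunroof", cost=800, weight=15),
])

config = {"car", "engine", "turbo"}                # one valid configuration
print("cost  :", total(root, config, "cost"))      # 6200.0
print("weight:", total(root, config, "weight"))    # 170.0

# Optimization over attributes (e.g. minimizing cost subject to constraints) can
# then be phrased over all valid configurations of the feature model.
```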

