Partitioning simulated annealing

Blower P, Fligner M, Verducci J, Bjoraker J. On combining recursive partitioning and simulated annealing to detect groups of biologically active compounds. J Chem Inf Comput Sci 2002;42:393-404. [Pg.373]

This approach to organizing HTS data has evolved, with novel methods being developed to identify larger associations of substructures with activity, using recursive partitioning and simulated annealing to further optimize the possible associations (34). [Pg.92]

The simulated annealing algorithm (37) for partitional clustering, as required in this work, was designed based on the following problem formulation. [Pg.47]
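The problem formulation itself is not reproduced in this excerpt. As a rough illustration of the general approach (a sketch, not the published algorithm), the Python fragment below anneals a partition by moving a randomly chosen object to a randomly chosen cluster and accepting worsening moves with a temperature-dependent probability; the `within_cluster_distance` criterion and all parameter defaults are assumptions chosen for illustration only.

```python
import math
import random

def within_cluster_distance(points, labels):
    """Illustrative criterion: sum of pairwise squared distances inside each cluster."""
    total = 0.0
    for c in set(labels):
        members = [p for p, l in zip(points, labels) if l == c]
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                total += sum((a - b) ** 2 for a, b in zip(members[i], members[j]))
    return total

def anneal_partition(points, k, n_steps=20000, t0=1.0, cooling=0.999, seed=0):
    """Simulated annealing over partitions of `points` into k clusters (illustrative sketch)."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]
    cost = within_cluster_distance(points, labels)
    best_labels, best_cost = labels[:], cost
    t = t0
    for _ in range(n_steps):
        i = rng.randrange(len(points))
        new_label = rng.randrange(k)
        if new_label != labels[i]:
            candidate = labels[:]
            candidate[i] = new_label
            new_cost = within_cluster_distance(points, candidate)
            # Always accept improvements; accept worsening moves with Boltzmann probability.
            if new_cost < cost or rng.random() < math.exp(-(new_cost - cost) / max(t, 1e-12)):
                labels, cost = candidate, new_cost
                if cost < best_cost:
                    best_labels, best_cost = labels[:], cost
        t *= cooling
    return best_labels, best_cost
```

Recomputing the full criterion after every move is the simplest correct choice for a sketch; a practical implementation would update the criterion incrementally as objects move between clusters.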

Many different methods have been developed for compound selection. They include clustering, dissimilarity-based compound selection, partitioning a collection of compounds into a low-dimensional space, and the use of optimization methods such as simulated annealing and genetic algorithms. Filtering techniques are often employed prior to compound selection to remove undesirable compounds. [Pg.351]
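None of these selection methods is specified in detail here. As one hedged example, the sketch below implements dissimilarity-based compound selection with the widely used MaxMin strategy: greedily add the compound whose minimum distance to the already-selected set is largest. The set-of-on-bits fingerprint representation and the `tanimoto_distance` helper are assumptions for illustration.

```python
def tanimoto_distance(a, b):
    """1 - Tanimoto similarity for two fingerprints given as sets of 'on' bit indices."""
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 0.0)

def maxmin_select(fingerprints, n_select, distance=tanimoto_distance, seed_index=0):
    """Dissimilarity-based compound selection (MaxMin): at each step pick the compound
    farthest (by its minimum distance) from everything already selected."""
    selected = [seed_index]
    remaining = set(range(len(fingerprints))) - {seed_index}
    while len(selected) < n_select and remaining:
        best, best_score = None, -1.0
        for i in remaining:
            d_min = min(distance(fingerprints[i], fingerprints[j]) for j in selected)
            if d_min > best_score:
                best, best_score = i, d_min
        selected.append(best)
        remaining.remove(best)
    return selected

# Example usage on toy fingerprints (sets of on-bits):
fps = [{1, 2, 3}, {1, 2, 4}, {7, 8, 9}, {7, 8, 10}, {20, 21}]
print(maxmin_select(fps, n_select=3))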

Partitioning is most appropriate when one is only interested in the subsets or clusters, while hierarchical decomposition is most applicable when one seeks to show similarity relationships between clusters. Section 2.1 formalizes the combinatorics of the partitional strategy and Section 2.2 does the same for hierarchical methods. The formulations we derive here provide the basis for the application of the simulated annealing algorithm to the underlying optimization problem, as we show in Section 3. [Pg.136]
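The formal treatment in Sections 2.1 and 2.2 is not reproduced in this excerpt. The standard combinatorial fact motivating a stochastic search such as simulated annealing is that the number of partitions of n objects into k non-empty clusters is the Stirling number of the second kind, which grows far too quickly for exhaustive enumeration. A minimal sketch, using the sizes of the smallest data sets described later in this entry (150 objects, 20 clusters):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Number of ways to partition n labelled objects into k non-empty clusters."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Even a modest data set admits an astronomical number of partitions.
print(stirling2(150, 20))   # far beyond what exhaustive search could cover
```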

To conduct this comparison, we need a formal method for evaluation. The next subsection describes the Jaccard and Rand scores as external criteria for evaluating clustering performance. Following that, we provide results from applying simulated annealing to both partitional and hierarchical clustering of the data from this example domain...
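The definitions of the two scores are not reproduced in this excerpt; the usual pair-counting forms of both external criteria are sketched below, where a, b, c, and d count object pairs that are grouped together or apart in the predicted and true clusterings (the function names are illustrative).

```python
from itertools import combinations

def pair_counts(labels_pred, labels_true):
    """Count object pairs: a = together in both clusterings, b = together only in the
    predicted clustering, c = together only in the true clustering, d = apart in both."""
    a = b = c = d = 0
    for i, j in combinations(range(len(labels_pred)), 2):
        same_pred = labels_pred[i] == labels_pred[j]
        same_true = labels_true[i] == labels_true[j]
        if same_pred and same_true:
            a += 1
        elif same_pred:
            b += 1
        elif same_true:
            c += 1
        else:
            d += 1
    return a, b, c, d

def jaccard_score(labels_pred, labels_true):
    a, b, c, _ = pair_counts(labels_pred, labels_true)
    return a / (a + b + c) if (a + b + c) else 1.0

def rand_score(labels_pred, labels_true):
    a, b, c, d = pair_counts(labels_pred, labels_true)
    return (a + d) / (a + b + c + d)
```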

To compare the two criteria we examined 32 data sets, each with between 150 and 600 objects split into 20 true clusters on average. We applied simulated annealing partitional clustering five times to each data set with both B(p) and W(p), and then compared the best test results for W(p) and B(p) over the 32 data sets. Table 1 shows, for each data set, the best Jaccard score for each criterion. In addition, for W(p) it shows the best number-of-clusters parameter k in {18, 19, 20, 21, 22}. Similarly, for B(p) the table shows the best median distance parameter v in {2.0, 2.5, 3.0, 3.5, 4.0} and the associated number of clusters in the final partitioning. At the bottom of the table are the minimum, maximum, and mean of each column. [Pg.149]
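The exact definitions of W(p) and B(p) are not given in this excerpt, so only the experimental protocol is sketched here: a hedged outline of the sweep described above, which reuses the annealing and Jaccard routines sketched earlier in this entry by passing them in as callables.

```python
def best_external_score(points, true_labels, cluster_fn, score_fn,
                        k_values=(18, 19, 20, 21, 22), runs=5):
    """Best external score over the grid of k values and repeated annealing runs.
    `cluster_fn(points, k=..., seed=...)` returns (labels, cost); `score_fn(pred, true)`
    returns an external criterion such as the Jaccard score sketched above."""
    best = 0.0
    for k in k_values:
        for seed in range(runs):
            pred, _ = cluster_fn(points, k=k, seed=seed)
            best = max(best, score_fn(pred, true_labels))
    return best
```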

We applied three hierarchical clustering methods to the same 32 data sets used in the previous section for partitional clustering: (1) simulated annealing with the criterion in equation (11), (2) simulated annealing with the criterion in equation (12), and (3) Ward's algorithm. Each method generated a separate dendrogram for each data set. [Pg.151]
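Equations (11) and (12) are not reproduced in the excerpt. For the third method, a minimal SciPy-based sketch of Ward's algorithm is shown below, cutting the resulting dendrogram into a flat partition so it can be scored with the external criteria above; the function name and the flat-cut step are illustrative choices, not part of the cited study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def ward_partition(points, n_clusters):
    """Ward hierarchical clustering, then cut the dendrogram into n_clusters flat clusters."""
    X = np.asarray(points, dtype=float)
    Z = linkage(X, method="ward")                      # agglomerative merges minimizing the increase in within-cluster variance
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

The labels returned by `ward_partition` can be compared against the true cluster labels with the Jaccard routine sketched earlier.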

These results show clearly the importance of the optimization criterion to clustering. The computationally simple Ward's method performs better than the simulated annealing approach with a simplistic criterion. However, a criterion that more correctly accounts for the hierarchy, by minimizing the sum of squared error at each level, performs much better. As with partitional clustering, the application of simulated annealing to hierarchical clustering requires careful selection of the internal clustering criterion. [Pg.151]

Lastly, we demonstrated the use of simulated annealing on examples from multi-sensor data fusion. These examples showed the effectiveness of simulated annealing in performing both hierarchical and partitional clustering. They also showed the importance of the internal criterion to the results obtained. [Pg.153]

Our results also demonstrated how simulated annealing can help choose the most appropriate internal criterion. In both the partitional and hierarchical cases, clustering performance was dramatically affected by this choice. In the partitional case, our testing results showed that Barker's criterion outperformed within-cluster distance. In fact, the worst Jaccard score for Barker's criterion was better than the average Jaccard score for within-cluster distance. [Pg.153]

Constraints can be placed on the area-cost and pinout-cost of each block, and on the overall latency. Two partitioning algorithms are supported: one using simulated annealing, and one based on the Kernighan-Lin algorithm. The cost functions for these algorithms consider the communication costs, latency, area, and pinout. [Pg.129]
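The cost functions themselves are not given in the excerpt. The sketch below is a hedged illustration of how such a scalar objective might combine the four named terms, with penalty terms for constraint violations; the weights, the penalty form, and the `PartitionCosts` container are assumptions, not the tool's actual cost functions. An annealer like the one sketched earlier in this entry could minimize this objective over block assignments.

```python
from dataclasses import dataclass

@dataclass
class PartitionCosts:
    communication: float   # estimated inter-block communication cost
    latency: float         # estimated overall latency
    area: float            # area of the most heavily loaded block
    pinout: float          # pin count of the most heavily loaded block

def weighted_cost(costs, area_limit, pin_limit, latency_limit,
                  w_comm=1.0, w_lat=1.0, penalty=1e6):
    """Illustrative scalar objective: minimize communication and latency, and heavily
    penalize any violation of the area, pinout, or latency constraints."""
    cost = w_comm * costs.communication + w_lat * costs.latency
    if costs.area > area_limit:
        cost += penalty * (costs.area - area_limit)
    if costs.pinout > pin_limit:
        cost += penalty * (costs.pinout - pin_limit)
    if costs.latency > latency_limit:
        cost += penalty * (costs.latency - latency_limit)
    return cost
```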

