
Agglomerative cluster analysis methods

The principle of unsupervised learning consists in partitioning a data set into small groups that reflect groupings unknown in advance [YARMUZA, 1980] (see also Section 5.3). The results obtained with methods of hierarchical agglomerative cluster analysis (see also [HENRION et al., 1987]) were representative of the large palette of mathematical algorithms available for cluster analysis. [Pg.256]

One of the cluster analysis methods is the agglomerative method (Everitt et al. 2001). In this approach, a distance threshold parameter is continuously increased. At the first step, the two nearest objects are identified; at this stage, only their distance lies below the threshold. These two objects are united, and the location of the unified object is the arithmetic mean of their coordinates. As the threshold is raised further and further, objects continue to be united until only one object remains. The similarity of the objects is indicated by the order of the aggregations, as illustrated in the sketch below. [Pg.328]
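The following Python sketch illustrates the merging scheme just described: at each step the two nearest objects are united, the unified object is placed at the arithmetic mean of its members' coordinates, and the order of the aggregations is recorded. The function name centroid_agglomeration and the example coordinates are invented for this illustration; it is a minimal didactic sketch, not an optimized implementation.

```python
import numpy as np

def centroid_agglomeration(points):
    """Repeatedly unite the two nearest objects, place the unified object
    at the arithmetic mean of its members' coordinates, and record the
    order of the aggregations."""
    points = np.asarray(points, dtype=float)
    # Each active cluster: (list of original object indices, current centre)
    active = [([i], points[i]) for i in range(len(points))]
    merge_order = []
    while len(active) > 1:
        # Find the pair of active clusters whose centres are closest
        best = None
        for i in range(len(active)):
            for j in range(i + 1, len(active)):
                d = np.linalg.norm(active[i][1] - active[j][1])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        members = active[i][0] + active[j][0]
        centre = points[members].mean(axis=0)   # arithmetic mean of coordinates
        merge_order.append((active[i][0], active[j][0], d))
        # Replace the two united clusters by the new one
        active = [c for k, c in enumerate(active) if k not in (i, j)]
        active.append((members, centre))
    return merge_order

# The order of the aggregations indicates the similarity of the objects
data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0], [5.0, 5.0]]
for left, right, dist in centroid_agglomeration(data):
    print(left, "+", right, "united at distance", round(dist, 3))
```

Objects that are united early (at small distances) are the most similar; the last fusion joins the two most dissimilar groups.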


There are two main types of clustering techniques: hierarchical and nonhierarchical. Hierarchical cluster analysis may follow either an agglomerative or a divisive scheme. Agglomerative techniques start with as many clusters as objects and, by means of repeated similarity-based fusion steps, reach a final situation with a unique cluster containing all of the objects. Divisive methods follow exactly the opposite procedure: they start from an all-inclusive cluster and then perform a number of consecutive partitions until there is a bijective correspondence between clusters and objects (see Fig. 2.12). In both cases, the number of clusters is defined by the similarity level selected, as the sketch below illustrates. [Pg.82]
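A compact way to see how the selected similarity (or distance) level fixes the number of clusters is to build the full agglomerative hierarchy and then cut it at different thresholds. The sketch below uses SciPy's linkage and fcluster routines with made-up data; the linkage method and the threshold values are assumptions chosen only to illustrate the scheme described above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative data: five objects described by two variables (values made up)
X = np.array([[1.0, 2.0], [1.2, 1.9], [5.0, 6.1], [5.2, 5.9], [9.0, 0.5]])

# Agglomerative scheme: start from 5 singleton clusters and fuse
# repeatedly until a single all-inclusive cluster remains.
Z = linkage(X, method="average", metric="euclidean")

# Cutting the hierarchy at different (dis)similarity levels fixes
# the number of clusters obtained.
for level in (1.0, 3.0, 10.0):
    labels = fcluster(Z, t=level, criterion="distance")
    print(f"threshold {level}: {len(set(labels))} clusters -> {labels}")
```

A low threshold leaves many small clusters; a sufficiently high threshold returns the single all-inclusive cluster.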

The outcome of agglomerative hierarchical cluster analysis is a crisp cluster membership function, which can take only the values 0 (no membership) or 1 (membership). Other non-hierarchical clustering techniques such as k-means cluster (KMC) analysis still follow this concept, whereas fuzzy C-means (FCM) clustering returns fuzzy class memberships. The latter method thus departs from the classical (0 or 1) two-valued logic and uses soft linguistic system variables, i.e. degrees of class membership values varying between 0 and 1. [Pg.211]
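To make the contrast between crisp and fuzzy memberships concrete, the following sketch assigns crisp (0 or 1) labels by nearest centre, as hierarchical or k-means clustering would, and computes fuzzy membership degrees from the standard fuzzy C-means rule u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)). The function name fuzzy_memberships, the fuzzifier value m = 2 and the example data are assumptions made for this illustration.

```python
import numpy as np

def fuzzy_memberships(X, centres, m=2.0):
    """Fuzzy C-means membership rule (fuzzifier m > 1): each object gets a
    degree of membership in [0, 1] for every cluster, summing to 1."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)  # (n, c)
    d = np.fmax(d, 1e-12)                       # avoid division by zero
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.5, 0.5]])
centres = np.array([[0.0, 0.0], [1.0, 1.0]])

# Crisp (0/1) membership: each object belongs to exactly one cluster
crisp = np.argmin(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2), axis=1)
print("crisp labels:", crisp)

# Fuzzy memberships vary continuously between 0 and 1
print("fuzzy memberships:\n", fuzzy_memberships(X, centres).round(2))
```

The object at (0.5, 0.5), equidistant from both centres, receives a fuzzy membership of 0.5 in each cluster, whereas the crisp assignment is forced to pick one of them.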

One of the most intuitive ways to describe how cluster analysis works in practice is by referring to the agglomerative hierarchical cluster analysis (HCA) method. Besides the common preliminary steps already discussed, that is, the definition of the metric (Euclidean, Mahalanobis, Manhattan distance, etc.) and the calculation of the distance matrix and the corresponding similarity matrix, the analysis continues according to a recursive procedure such as... [Pg.133]
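A hedged sketch of those preliminary steps and of the recursive agglomeration, using SciPy, is given below: a metric is chosen, the distance matrix is computed, a simple similarity matrix is derived from it, and the hierarchy is then built by the linkage routine. The random data matrix and the rescaling used to obtain similarities are assumptions made for the example; the Manhattan and Mahalanobis metrics mentioned above are available in pdist as "cityblock" and "mahalanobis".

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

# Hypothetical data matrix: 6 objects described by 3 variables
X = np.random.default_rng(0).normal(size=(6, 3))

# Step 1: define the metric and compute the distance matrix
d = pdist(X, metric="euclidean")

# Step 2: derive a corresponding similarity matrix, here by rescaling
# distances to [0, 1] and taking the complement (one possible choice)
similarity = 1.0 - squareform(d) / d.max()
print(np.round(similarity, 2))

# Step 3: recursive agglomeration - fuse the two most similar clusters,
# update the distances, and repeat until one cluster remains
Z = linkage(d, method="complete")
print(np.round(Z, 2))   # each row: clusters fused, fusion distance, cluster size
```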

Agglomerative hierarchical clustering is one of the most utilized clustering methods for the analysis of biological data (Quackenbush, 2001; D'Haeseleer,... [Pg.104]

