Big Chemical Encyclopedia


Clustering fuzzy

As with K-means clustering, the fuzzy K-means technique is iterative and seeks to minimize the within-cluster sum of squares. Our data matrix is defined by the elements x_ij, and we seek K clusters, not by hard partitioning of the variable space, but by fuzzy partitions, each of which has a cluster centre or prototype value for cluster k (k = 1, ..., K). [Pg.123]

The algorithm starts with a pre-selected number of clusters, K. In addition, an initial fuzzy partition of the objects is supplied such that there are no empty clusters. [Pg.123]

The algorithm proceeds by calculating the membership-weighted means to determine cluster centres. [Pg.124]

New fuzzy partitions are then defined by a new set of membership functions. [Pg.124]

From this new partitioning, new cluster centres are calculated by applying Equation 4.35, and the process is repeated until the total change in the values of the membership functions is less than some pre-selected threshold, or a set number of iterations has been reached. [Pg.124]
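The iterative scheme described above (an initial fuzzy partition, membership-weighted cluster centres, updated membership functions, and a stopping rule on the total change in memberships) can be sketched as follows. This is a minimal illustration, not the book's own code; the function name, the fuzzifier exponent m = 2, and the synthetic data are all assumptions.

```python
import numpy as np

def fuzzy_kmeans(X, K, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Minimal fuzzy K-means sketch: alternate between membership-weighted
    cluster centres and updated membership functions, stopping when the
    total change in memberships falls below tol (or after max_iter loops)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Initial fuzzy partition: random memberships, each row summing to 1,
    # so that no cluster starts out empty.
    U = rng.random((n, K))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        # Cluster centres as membership-weighted means of the objects.
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        # New membership functions from inverse distances to the centres.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                     # guard against d = 0
        U_new = 1.0 / d ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).sum() < tol:         # total change in memberships
            U = U_new
            break
        U = U_new
    return U, centres

# Two tight, well-separated 2-D groups of objects.
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (20, 2))])
U, centres = fuzzy_kmeans(X, K=2)
```

For well-separated groups like these, the converged memberships are close to crisp (near 0 or 1), while objects between clusters would receive intermediate values.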

As is well known, Cluster Analysis involves the classification of objects into categories. Since most categories have vague boundaries, and may even overlap, the necessity of introducing fuzzy sets is obvious. A discussion of Fuzzy Clustering must refer to the following issues: [Pg.273]

Input data are obtained from measurements on the objects that are to be recognized. Each object is represented as a vector of measured values x = (x_1, x_2, ..., x_s), where x_i is a particular characteristic of the object. [Pg.273]

Due to the large number of characteristics, there is a need to extract the most relevant characteristics from the input data, so that the amount of information lost is minimal, and the classification realized with the projected data set is relevant with respect to the original data. In order to achieve this feature extraction, different statistical techniques, as well as the fuzzy clustering algorithms outlined here, may be used. [Pg.273]
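One common statistical technique for the feature-extraction step just described is principal component analysis, which projects the measured vectors onto the directions of largest variance so that little information is lost. The helper below is a hypothetical sketch under that assumption, not code from the text.

```python
import numpy as np

def pca_project(X, n_components):
    """Project the data onto the leading principal components, keeping the
    directions of largest variance so the information lost is minimal."""
    Xc = X - X.mean(axis=0)                  # centre each characteristic
    # SVD of the centred data matrix; rows of Vt are the principal axes.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T        # coordinates in the reduced space
    explained = s[:n_components] ** 2 / (s ** 2).sum()
    return scores, explained

# Synthetic example: 50 objects with 6 characteristics that in fact
# only span a 2-dimensional subspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 6))
scores, explained = pca_project(X, 2)
```

Here two components capture essentially all of the variance, so clustering the projected scores is equivalent, for practical purposes, to clustering the original data.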

Pattern recognition techniques based on fuzzy objective function minimization use objective functions particular to different cluster shapes. Ways to approach the problem of correctly identifying the cluster's shape include the use of adaptive distances in a second run, which change the shapes of the produced clusters so that all are unit spheres, and adaptive algorithms that dynamically change the local metrics during the iterative procedure in the original run, without the need for a second run. [Pg.274]

Another problem with such algorithms is that of determining the optimal number of classes that corresponds to the cluster substructure of the data set. There are two approaches: the use of validity functionals, which is a post-factum method, and the use of hierarchical algorithms, which produce not only the optimal number of classes (based on the needed granularity) but also a binary hierarchy that shows the existing relationships between the classes. [Pg.274]

The principal aim of performing a cluster analysis is to permit the identification of similar samples according to their measured properties. Hierarchical techniques, as we have seen, achieve this by linking objects according to some formal rule set. The K-means method, on the other hand, seeks to partition the pattern space containing the objects into an optimal, predefined number of clusters. [Pg.115]

The degree or extent to which an object, i, belongs to a specific cluster, k, is referred to as that object's membership function. [Pg.117]


New developments which have still to be checked for their usability in data evaluation of depth profiles are artificial neural networks [2.16, 2.21-2.25], fuzzy clustering [2.26, 2.27] and genetic algorithms [2.28]. [Pg.21]

All these methods and the methods of the preceding section have one characteristic in common: an object may be part of only one cluster. Fuzzy clustering applies a different principle: it permits objects to be part of more than one cluster. This leads to results such as those illustrated by Fig. 30.16. Each object i is given a value... [Pg.80]

Kavuri, S. N., and Venkatasubramanian, V., Using fuzzy clustering with ellipsoidal units in neural networks for robust fault classification, Comput. Chem. Eng. 17(8), 765 (1993). [Pg.99]

Partitioning methods make a crisp or hard assignment of each object to exactly one cluster. In contrast, fuzzy clustering allows for a fuzzy assignment, meaning that an observation is not assigned exclusively to one cluster but, in some part, to all clusters. This fuzzy assignment is expressed by membership coefficients m_ij for each... [Pg.280]

FIGURE 6.13 Fuzzy clustering uses membership coefficients to assign each object with varying probabilities to all k clusters. [Pg.280]

FIGURE 6.14 Example with two groups connected by a bridge of some objects (left) and resulting membership coefficients from fuzzy clustering for the left-hand side cluster. [Pg.281]

The solution for model-based clustering is based on the Expectation Maximization (EM) algorithm. It uses the likelihood function and iterates between the expectation step (where the group memberships are estimated) and the maximization step (where the parameters are estimated). As a result, each object receives a membership to each cluster, as in fuzzy clustering. The overall cluster result can be evaluated by the value of the negative log-likelihood function, which should be as small as possible. This allows judging which model for the clusters is best suited (spherical clusters, elliptical clusters) and which number of clusters, k, is most appropriate. [Pg.282]
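The E/M alternation described above can be sketched for the simplest case, a one-dimensional mixture of k Gaussians. The responsibilities computed in the E-step play the role of the fuzzy membership coefficients, and the negative log-likelihood is the quantity used to compare cluster models. Everything below (function name, initialization from the data range, the synthetic mixture) is an illustrative assumption, not the book's implementation.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50):
    """Bare-bones EM for a 1-D Gaussian mixture: the E-step estimates the
    group memberships (responsibilities), the M-step re-estimates the
    component weights, means, and standard deviations."""
    mu = np.linspace(x.min(), x.max(), k)   # spread initial means over the data
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: membership of each observation in each cluster.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
                 / (sigma * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: membership-weighted parameter updates.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    nll = -np.log(dens.sum(axis=1)).sum()   # negative log-likelihood
    return resp, mu, nll

# Two well-separated 1-D groups of 100 observations each.
x = np.concatenate([np.random.default_rng(1).normal(0.0, 0.5, 100),
                    np.random.default_rng(2).normal(5.0, 0.5, 100)])
resp, mu, nll = em_gmm_1d(x)
```

Fitting the same data with different k (or with different covariance constraints, in the multivariate case) and comparing the resulting negative log-likelihoods is the model-selection step the text refers to.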

FIGURE 6.20 Cluster validities for the Hyptis data for two to nine clusters analyzed with the methods k-means clustering, fuzzy clustering, and model-based clustering. For the left plot the original data were used, for the right plot the data were autoscaled. [Pg.288]


FIGURE 6.23 Fuzzy clustering with six clusters for the original (left) and scaled (right) Hyptis data. The plot symbols correspond to the found clusters with the largest membership coefficient, and their size is proportional to this coefficient. The results are presented in the PCA projection obtained from the original data (Figure 6.19, left). [Pg.291]

Fuzzy clustering gives, in addition to the group membership for each object, a probability of belonging to each of the found clusters; also for this method, the number of clusters has to be defined in advance. [Pg.294]

Rassokhin, D., Lobanov, V. S., and Agrafiotis, D. K. (2000) Nonlinear mapping of massive data sets by fuzzy clustering and neural networks. J. Comput. Chem. 21, 1-14. [Pg.49]

Fuzzy clustering methods that have recently become popular are distinct from traditional clustering techniques in that molecules are permitted to belong to multiple clusters or have fractional membership in all clusters. A potential advantage of such classification schemes is that more than one similarity relationship can be established by cluster analysis. [Pg.13]

Technische Universitat Wien. Does latent class analysis, short-time Fourier transform, fuzzy clustering, support vector machines, shortest path computation, bagged clustering, naive Bayes classifier, etc. (http://cran.r-project.org/web/packages/e1071/index.html) [Pg.24]

Comput. Sci., 36(6), 1195 (1996). Algorithm 5: A Technique for Fuzzy Clustering of Chemical Inventories. [Pg.37]

The partition coefficient is a measure of the quality of a fuzzy partition. The closer C(P) is to 1, the better the fuzzy partition P will be. The outputs of a fuzzy clustering algorithm for several different values of n may be compared by means of the partition coefficient. The best partition (and the best n) is that associated with the highest partition coefficient value. [Pg.338]
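The partition coefficient described above is simply the mean of the squared membership coefficients: it equals 1 for a crisp partition and drops to 1/n for a maximally fuzzy one in which every membership equals 1/n. A minimal sketch (the function name and toy membership matrices are illustrative):

```python
import numpy as np

def partition_coefficient(U):
    """Partition coefficient C(P): the mean over objects of the summed
    squared membership coefficients. Equals 1 for a crisp partition and
    1/n_clusters when every membership equals 1/n_clusters."""
    return (U ** 2).sum() / U.shape[0]

# A crisp partition of three objects into two clusters: C(P) = 1.
crisp = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])

# A maximally fuzzy partition: every membership is 0.5, so C(P) = 0.5.
vague = np.full((3, 2), 0.5)
```

Running a fuzzy clustering algorithm for several candidate numbers of clusters and keeping the partition with the highest C(P) implements the selection rule stated in the text.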

Bandemer considered the role of fuzzy set theory in analytical chemistry. The applications described focused on pattern recognition problems, the calibration of analytical methods, quality control, and component identification and mixture evaluation. Gordon and Somorjai applied a fuzzy clustering technique to the detection of similarities among protein substructures. A molecular dynamics trajectory of a protein fragment was analyzed. In the following subsections, some applications based on the hierarchical fuzzy clustering techniques presented in this chapter are reviewed. [Pg.348]




