Big Chemical Encyclopedia


Kohonen networks

The Kohonen network or self-organizing map (SOM) was developed by Teuvo Kohonen [11]. It can be used to classify a set of input vectors according to their similarity. The result of such a network is usually a two-dimensional map. Thus, the Kohonen network is a method for projecting objects from a multidimensional space into a two-dimensional space. This projection keeps the topology of the multidimensional space, i.e., points which are close to one another in the multidimensional space are neighbors in the two-dimensional space as well. An advantage of this method is that the results of such a mapping can easily be visualized. [Pg.456]
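The projection described above can be sketched with a minimal self-organizing map in Python. This is an illustrative implementation only, not Kohonen's original published algorithm; the grid size, the Gaussian neighbourhood, and the linear decay schedules for the learning rate and neighbourhood width are assumptions chosen for brevity.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal SOM on `data` of shape (n_samples, m)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    m = data.shape[1]
    # one m-dimensional weight vector per neuron on the 2-D grid
    w = rng.random((rows, cols, m))
    # grid coordinates, used to measure neighbourhood distances on the map
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighbourhood
        for x in data:
            # winning neuron: closest weight vector (Euclidean distance)
            d = np.linalg.norm(w - x, axis=-1)
            win = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the winner on the 2-D grid
            g = np.exp(-np.linalg.norm(coords - np.array(win), axis=-1) ** 2
                       / (2 * sigma ** 2))
            # move the winner and its neighbours towards the input vector
            w += lr * g[..., None] * (x - w)
    return w

def project(w, x):
    """Map an input vector to the (row, col) of its winning neuron."""
    d = np.linalg.norm(w - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, `project` maps each multidimensional object to a position on the two-dimensional grid; objects with similar input vectors land on the same or neighbouring neurons, which is the topology preservation described above.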


Next, the architecture of the Kohonen network had to be chosen. With seven descriptors for each reaction, a network of neurons with seven weights had to be... [Pg.194]

Reactions belonging to the same reaction type are projected into coherent areas on the Kohonen map; this shows that the assignment of reaction types by a chemist is also perceived by the Kohonen network on the basis of the electronic descriptors. This attests to the power of this approach. [Pg.196]

There are finer details to be extracted from such Kohonen maps that directly reflect chemical information and have chemical significance. A more extensive discussion of the chemical implications of the mapping of the entire dataset can be found in the original publication [28]. Clearly, such a map can now be used for the assignment of a reaction to a certain reaction type. Calculating the physicochemical descriptors of a reaction allows it to be input into this trained Kohonen network. If this reaction is mapped, say, in the area of Friedel-Crafts reactions, it can safely be classified as a feasible Friedel-Crafts reaction. [Pg.196]

Kohonen networks, also known as self-organizing maps (SOMs), belong to the large group of methods called artificial neural networks. Artificial neural networks (ANNs) are techniques which process information in a way that is motivated by the functionality of biological nervous systems. For a more detailed description see Section 9.5. [Pg.441]

Kohonen network, conceptual clustering, Principal Component Analysis (PCA), decision trees, Partial Least Squares (PLS), Multiple Linear Regression (MLR), counter-propagation networks, back-propagation networks, genetic algorithms (GA)... [Pg.442]

The Kohonen network is a neural network which uses an unsupervised learning strategy. See Section 9.5.3 for a more detailed description. [Pg.455]

Besides the artificial neural networks mentioned above, there are various other types of neural networks. This chapter, however, will confine itself to the three most important types used in chemoinformatics: Kohonen networks, counter-propagation networks, and back-propagation networks. [Pg.455]

In this illustration, a Kohonen network has a cubic structure where the neurons are columns arranged in a two-dimensional system, e.g., in a square of n x l neurons. The number of weights of each neuron corresponds to the dimension of the input data. If the input for the network is a set of m-dimensional vectors, the architecture of the network is n x l x m-dimensional. Figure 9-18 shows the architecture of a Kohonen network. [Pg.456]

The training of a Kohonen network is performed in three steps. [Pg.456]

The Kohonen network adapts its values only with respect to the input values and thus reflects the input data. This approach is unsupervised learning as the adaptation is done with respect merely to the data describing the individual objects. [Pg.458]

Tutorial Application of a Kohonen Network for the Classification of Olive Oils using ELECTRAS [9]... [Pg.458]

In this example, a Kohonen network is used to classify Italian olive oils on the basis of the concentrations of fatty acids they contain. [Pg.458]

Select the network type "Kohonen network". Transfer the selection by pressing the "submission" button. [Pg.458]

Click the "Select network parameter" button and choose the Kohonen network parameters: topology, width and height of the network, neuron dimension, the index of the class identifier, and the number of training cycles. [Pg.458]

The architecture of a counter-propagation network resembles that of a Kohonen network, but in addition to the cubic Kohonen layer (input layer) it has an additional layer, the output layer. Thus, an input object consists of two parts: the m-dimensional input vector (just as for a Kohonen network) plus a second k-dimensional vector with the properties for the object. [Pg.459]

During training the input layer is adapted as in a regular Kohonen network, i.e., the winning neuron is determined only on the basis of the input values. But in contrast to the training of a Kohonen network, the output layer is also adapted, which gives an opportunity to use the network for prediction. [Pg.460]
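The training scheme just described can be sketched as follows. This is a simplified illustration, not a reference implementation; the grid size, neighbourhood function, and decay schedules are assumptions, and the key point is that the winner is found from the input part only, while both layers are updated.

```python
import numpy as np

def train_cpn(x_data, y_data, grid=(8, 8), epochs=40, lr0=0.4,
              sigma0=2.5, seed=0):
    """Sketch of a counter-propagation network: Kohonen input layer
    plus an output layer adapted alongside it."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    m, k = x_data.shape[1], y_data.shape[1]
    w_in = rng.random((rows, cols, m))   # Kohonen (input) layer
    w_out = np.zeros((rows, cols, k))    # output (property) layer
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x, y in zip(x_data, y_data):
            # the winning neuron is determined from the INPUT values only
            d = np.linalg.norm(w_in - x, axis=-1)
            win = np.unravel_index(np.argmin(d), d.shape)
            g = np.exp(-np.linalg.norm(coords - np.array(win), axis=-1) ** 2
                       / (2 * sigma ** 2))[..., None]
            w_in += lr * g * (x - w_in)    # adapt input layer as usual
            w_out += lr * g * (y - w_out)  # adapt output layer as well
    return w_in, w_out

def predict(w_in, w_out, x):
    """Find the winner from the input layer, read the answer from the
    output layer at the same grid position."""
    d = np.linalg.norm(w_in - x, axis=-1)
    win = np.unravel_index(np.argmin(d), d.shape)
    return w_out[win]
```

Because the output layer is pulled towards the property vectors of the objects that win (or neighbour the winner) at each position, a trained network can predict the properties of a new object from its input vector alone.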

It provides unsupervised (Kohonen network) and supervised (counter-propagation network) learning techniques with planar and toroidal topology of the network. [Pg.461]

The left-hand side gives the Kohonen network, which can be investigated by clicking on the neuron. The contents of the neuron, here the chemical structures, are shown in an additional window plotted on the right-hand side of the figure. [Pg.461]

The usage of a neural network varies depending on the aim and especially on the network type. This tutorial covers two applications: on the one hand, the use of a Kohonen network for classification, and on the other hand, the prediction of object properties with a counter-propagation network. [Pg.463]

Kohonen network, counter-propagation, back-propagation... [Pg.465]

The GA was then applied to select those descriptors which give the best classification of the structures when a Kohonen network is used. The objective function was based on the quality of the classification done by a neural network for the reduced descriptors. [Pg.472]

In clustering, data vectors are grouped together into clusters on the basis of intrinsic similarities between these vectors. In contrast to classification, no classes are defined beforehand. A commonly used method is the application of Kohonen networks (cf. Section 9.5.3). [Pg.473]

One application of clustering could, for example, be the comparison of compound libraries. A training set is chosen which contains members of both libraries. After the structures are coded (cf. Chapter 8), a Kohonen network (cf. Section 9.5.3) is trained and arranges the structures within the Kohonen map in relation to their structural similarity. Thus, the overlap between the two different libraries of compounds can be determined. [Pg.473]
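Once each compound has been assigned to its winning neuron on the trained map, the overlap between two libraries can be quantified. The source does not specify a metric; the Jaccard index over occupied neurons used here is one plausible choice, and the neuron assignments in the example are hypothetical.

```python
def library_overlap(neurons_a, neurons_b):
    """Estimate the overlap of two compound libraries from the winning
    neurons of their members on a trained Kohonen map, as the Jaccard
    index of the sets of occupied neurons."""
    occ_a, occ_b = set(neurons_a), set(neurons_b)
    return len(occ_a & occ_b) / len(occ_a | occ_b)

# hypothetical winning-neuron assignments for two small libraries
lib_a = [(0, 0), (0, 1), (1, 1)]
lib_b = [(1, 1), (2, 2)]
overlap = library_overlap(lib_a, lib_b)   # shared neuron (1,1) out of 4
```

A high overlap indicates that the two libraries cover similar regions of structural space; a low overlap indicates complementary libraries.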

This reaction data set of 626 reactions was used as a training data set to produce a knowledge base. Before this data set is used as input to a Kohonen neural network, each reaction must be coded in the form of a vector characterizing the reaction event. Six physicochemical effects were calculated for each of five bonds at the reaction center of the starting materials by the PETRA (see Section 7.1.4) program system. As shown in Figure 10.3-3 with an example, the physicochemical effects of the two regioisomeric products are different. [Pg.546]
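The coding described above (six effects for each of five bonds) yields a 30-dimensional reaction vector. The sketch below only illustrates how such a vector is assembled; the numerical values are placeholders, since the real effects would come from a program such as PETRA.

```python
import numpy as np

# Hypothetical values: six physicochemical effects (rows) calculated
# for five bonds at the reaction centre (columns).  Placeholder data,
# not actual PETRA output.
effects = np.arange(30, dtype=float).reshape(6, 5)

# flatten the 6 x 5 table into one 30-dimensional reaction descriptor,
# the form in which a reaction enters the Kohonen network
reaction_vector = effects.flatten()
```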

One of the reactions is projected in that part of the Kohonen network where mostly reactions leading to the preferred regioisomer pyrazole were projected. The other reaction was projected in neuron (7,7), which lies in a region where reactions with low yield are projected. [Pg.548]

Fig. 44.22. Three commonly used Kohonen network structures, (a) One-dimensional array, (b) two-dimensional rectangular network (each unit, apart from the borderline units, has 8 neighbours) and (c) two-dimensional hexagonal network (each unit, apart from the borderline units, has 6 neighbours). (Reprinted with permission from Ref. [70]).
The training process of a Kohonen network consists of a competitive learning procedure and can be summarized as follows ... [Pg.688]

Fig. 44.23. Some common neighbourhood functions used in Kohonen networks: (a) a block function, (b) a triangular function, (c) a Gaussian-bell function and (d) a Mexican-hat shaped function. In each of the diagrams the winning unit is situated at the centre of the abscissa. The horizontal axis represents the distance, r, to the winning unit. The vertical axis represents the value of the neighbourhood function. (Reprinted with permission from Ref. [70]).
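The four neighbourhood functions of Fig. 44.23 can be written down directly as functions of the distance r to the winning unit. The widths and the particular Mexican-hat parameterization (second derivative of a Gaussian) are illustrative choices, not taken from the source.

```python
import numpy as np

def block_fn(r, width=2.0):
    """Block: full update inside the neighbourhood, none outside."""
    return np.where(np.abs(r) <= width, 1.0, 0.0)

def triangular_fn(r, width=2.0):
    """Triangular: update decreases linearly with distance to the winner."""
    return np.maximum(0.0, 1.0 - np.abs(r) / width)

def gaussian_fn(r, sigma=1.0):
    """Gaussian bell centred on the winning unit."""
    return np.exp(-r ** 2 / (2 * sigma ** 2))

def mexican_hat_fn(r, sigma=1.0):
    """Mexican hat: excitation near the winner, inhibition further out."""
    return (1 - (r / sigma) ** 2) * np.exp(-r ** 2 / (2 * sigma ** 2))
```

All four equal their maximum at r = 0 (the winning unit); the Mexican hat is the only one that goes negative, so distant units are pushed away from the input rather than pulled towards it.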
Fig. 44.25. Example of the weight update of the winning unit in a Kohonen network. (Reprinted with permission from Ref. [70]).
The output-activity map. A trained Kohonen network yields for a given input object, x, one winning unit, whose weight vector is closest (as defined by the criterion used in the learning procedure) to x. However, x may be close to the weight vectors, w, of other units as well. The output y_j of the units of the map can also be defined as ... [Pg.690]
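Since the exact definition of the output is elided in the excerpt above, the sketch below uses one common distance-based choice: the activity of each unit is the distance of its weight vector to x, rescaled so that the winning unit has activity 1 and the most distant unit activity 0.

```python
import numpy as np

def activity_map(w, x):
    """Output activity of every unit of a trained map for input x.
    `w` has shape (rows, cols, m); returns a (rows, cols) array where
    the winning unit scores 1 (illustrative definition)."""
    d = np.linalg.norm(w - x, axis=-1)          # distance of x to each unit
    return 1.0 - (d - d.min()) / (d.max() - d.min() + 1e-12)
```

Plotting this array shows not only the winning unit but also how strongly the surrounding units respond to x, which is the point of the output-activity map.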

M. Wolkenstein, H. Hutter, C. Mittermayr, W. Schiesser and M. Grasserbauer, Classification of SIMS images using a Kohonen network. Anal. Chem., 69 (1997) 777-782. [Pg.698]




© 2024 chempedia.info