Big Chemical Encyclopedia


Kohonen

Next, the architecture of the Kohonen network had to be chosen. With seven descriptors for each reaction, a network of neurons with seven weights had to be... [Pg.194]

Figure 3-20. Distribution of the dataset of 120 reactions in the Kohonen network. a) The neurons were patterned on the basis of intellectually assigned reaction types; b) in addition, empty neurons were patterned on the basis of their k nearest neighbors.
Reactions belonging to the same reaction type are projected into coherent areas on the Kohonen map. This shows that the assignment of reaction types by a chemist is also perceived by the Kohonen network on the basis of the electronic descriptors, and attests to the power of this approach. [Pg.196]

There are finer details to be extracted from such Kohonen maps that directly reflect chemical information and have chemical significance. A more extensive discussion of the chemical implications of the mapping of the entire dataset can be found in the original publication [28]. Clearly, such a map can now be used for the assignment of a reaction to a certain reaction type. Calculating the physicochemical descriptors of a reaction allows it to be input into this trained Kohonen network. If this reaction is mapped, say, in the area of Friedel-Crafts reactions, it can safely be classified as a feasible Friedel-Crafts reaction. [Pg.196]

An inspection of the cross-validation results revealed that all but one of the compounds in the dataset had been modeled quite well. The last (31st) compound behaved anomalously. When we looked at its chemical structure, we saw that it was the only compound in the dataset containing a fluorine atom. What would happen if we removed this compound from the dataset? The quality of learning improved substantially: the cross-validation coefficient increased from 0.82 to 0.92, while the error decreased from 0.65 to 0.44. Another learning method, the Kohonen Self-Organizing Map, also failed to classify this 31st compound correctly. Hence, we had to conclude that the fluorine-containing compound was an obvious outlier of the dataset. [Pg.206]

As simplified examples of criteria to be used for the clustering of datasets, we may consider high-quality Kohonen maps, PCA plots, or hierarchical clustering. [Pg.208]

This format was developed in our group and is used fruitfully in SONNIA, software for producing Kohonen Self-Organizing Maps (KSOM) and Counter-Propagation (CPG) neural networks for chemical applications [6]. This file format is ASCII-based, contains the entire information about the patterns, and usually comes with the extension ".dat". [Pg.209]

Now, one may ask, what if we are going to use feed-forward neural networks with the back-propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
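The use of SVD as a data transformation step before network training can be sketched as follows. This is a minimal NumPy illustration, not taken from the original text; the descriptor matrix and the number of retained components are hypothetical.

```python
import numpy as np

# Hypothetical descriptor matrix: 6 objects x 4 correlated descriptors.
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 2))])  # correlated columns

# Center the data, then decompose: Xc = U S Vt.  Projecting onto the
# first r right singular vectors reduces the input dimension before
# any network (KSOM, CPG, ...) is trained on the data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
r = 2
X_reduced = Xc @ Vt[:r].T  # reduced-dimension input for the network

print(X_reduced.shape)  # (6, 2)
```

Because the four columns here are linear combinations of two underlying factors, the discarded singular values are essentially zero and no information is lost by the reduction.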

Initially the dataset contained 818 compounds, among which 31 were active (high TA, low USE), 157 inactive (low TA, high USE), and the rest intermediate. When the complete dataset was employed, none of the active compounds and 47 of the inactives were correctly classified by using Kohonen self-organizing maps (KSOM). [Pg.221]

The Kohonen Self-Organizing Maps can be used in a similar manner. Suppose x_k, k = 1, ..., N, is the set of input (characteristic) vectors and w_ij, i = 1, ..., I, j = 1, ..., J, are the weight vectors of the trained network, one for each (i, j) cell of the map; N is the number of objects in the training set, and I and J are the dimensions of the map. Now, we can compare each x_k with the w_ij of the particular cell to which the object was allocated. This procedure enables us to detect the maximal (e_max) and minimal (e_min) errors of fitting. Hence, if the error calculated in the way just mentioned lies outside the range between e_min and e_max, the object probably does not belong to the training population. [Pg.223]
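The fitting-error check described above can be sketched in a few lines of NumPy. This is an illustrative toy example, not code from the source; the map size, weights, and data are invented, and the error of an object is taken as its distance to the best-matching neuron.

```python
import numpy as np

def fit_error(x, weights):
    """Distance from x to its best-matching neuron in an I x J x m weight cube."""
    d = np.linalg.norm(weights - x, axis=-1)  # I x J error surface
    return d.min()

# Toy "trained" map (3 x 3 neurons, 2-dimensional weights) and training set.
rng = np.random.default_rng(1)
weights = rng.uniform(0.0, 1.0, size=(3, 3, 2))
train = rng.uniform(0.0, 1.0, size=(20, 2))

# Range of fitting errors observed on the training population.
errors = np.array([fit_error(x, weights) for x in train])
e_min, e_max = errors.min(), errors.max()

def in_training_population(x):
    """Flag an object whose fitting error falls outside [e_min, e_max]."""
    return e_min <= fit_error(x, weights) <= e_max

print(in_training_population(np.array([5.0, 5.0])))  # far outside: False
```

A probe point far from the training region yields a fitting error well above e_max and is rejected, mirroring the outlier detection described in the text.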

Figure 8-11. Training of a Kohonen neural network with a chirality code. The number of weights in a neuron is the same as the number of elements in the chirality code vector. When a chirality code is presented to the network, the neuron with the weights most similar to the chirality code is excited (this is the winning or central neuron) (see Section 9.5.3).
To understand neural networks, especially Kohonen, counter-propagation and back-propagation networks, and their applications... [Pg.439]

Kohonen networks, also known as self-organizing maps (SOMs), belong to the large group of methods called artificial neural networks. Artificial neural networks (ANNs) are techniques which process information in a way that is motivated by the functionality of biological nervous systems. For a more detailed description see Section 9.5. [Pg.441]

Kohonen networks, conceptual clustering, Principal Component Analysis (PCA), decision trees, Partial Least Squares (PLS), Multiple Linear Regression (MLR), counter-propagation networks, back-propagation networks, genetic algorithms (GA)... [Pg.442]

The Kohonen network is a neural network which uses an unsupervised learning strategy. See Section 9.5.3 for a more detailed description. [Pg.455]

Besides the artificial neural networks mentioned above, there are various other types of neural networks. This chapter, however, will confine itself to the three most important types used in chemoinformatics: Kohonen networks, counter-propagation networks, and back-propagation networks. [Pg.455]

The Kohonen network or self-organizing map (SOM) was developed by Teuvo Kohonen [11]. It can be used to classify a set of input vectors according to their similarity. The result of such a network is usually a two-dimensional map. Thus, the Kohonen network is a method for projecting objects from a multidimensional space into a two-dimensional space. This projection keeps the topology of the multidimensional space, i.e., points which are close to one another in the multidimensional space are neighbors in the two-dimensional space as well. An advantage of this method is that the results of such a mapping can easily be visualized. [Pg.456]

In this illustration, a Kohonen network has a cubic structure in which the neurons are columns arranged in a two-dimensional system, e.g., in a square of l × l neurons. The number of weights of each neuron corresponds to the dimension of the input data. If the input for the network is a set of m-dimensional vectors, the architecture of the network is l × l × m-dimensional. Figure 9-18 plots the architecture of a Kohonen network. [Pg.456]

The training of a Kohonen network is performed in three steps. [Pg.456]

The Kohonen network adapts its values only with respect to the input values and thus reflects the input data. This approach is unsupervised learning as the adaptation is done with respect merely to the data describing the individual objects. [Pg.458]
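The training procedure outlined above (winner selection, neighborhood weighting, weight adaptation) can be sketched as a minimal NumPy implementation. This is an illustrative sketch under assumed defaults (map size, learning rate, Gaussian neighborhood), not the implementation used in the text.

```python
import numpy as np

def train_som(data, shape=(5, 5), cycles=200, lr=0.5, sigma=1.5, seed=0):
    """Minimal Kohonen (SOM) training loop."""
    rng = np.random.default_rng(seed)
    I, J = shape
    m = data.shape[1]
    w = rng.uniform(data.min(), data.max(), size=(I, J, m))
    ii, jj = np.meshgrid(np.arange(I), np.arange(J), indexing="ij")
    for t in range(cycles):
        frac = t / cycles
        a, s = lr * (1 - frac), sigma * (1 - frac) + 0.1
        x = data[rng.integers(len(data))]
        # 1. find the winning (central) neuron: smallest weight distance
        d = np.linalg.norm(w - x, axis=-1)
        wi, wj = np.unravel_index(d.argmin(), d.shape)
        # 2. Gaussian neighborhood decaying with grid distance to the winner
        h = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2 * s ** 2))
        # 3. move the winner and its neighbors toward the input vector
        w += a * h[..., None] * (x - w)
    return w

# e.g. 50 reactions described by 7 descriptors, as in the example above
data = np.random.default_rng(2).normal(size=(50, 7))
w = train_som(data)
print(w.shape)  # (5, 5, 7)
```

Note that only the input vectors drive the adaptation; no class labels appear anywhere in the loop, which is exactly what makes the learning unsupervised.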

Tutorial Application of a Kohonen Network for the Classification of Olive Oils using ELECTRAS [9]... [Pg.458]

In this example, a Kohonen network is used to classify Italian olive oils on the basis of the concentrations of fatty acids they contain. [Pg.458]

Select the network type "Kohonen network". Transfer the selection by pressing the "submission" button. [Pg.458]

Click the "Select network parameter" button and choose the Kohonen network parameters: topology, width and height of the network, neuron dimension, the index of the class identifier, and the number of training cycles. [Pg.458]

Analyze the Kohonen map. The content of the neurons is given when clicking on the map. [Pg.458]

Figure 9-20. Left: Kohonen map showing the projection of the olive oil samples. Middle: Map of Italy showing the regions of origin for the olive oils. Right: Key giving the regions and their codes.
A counter-propagation network is a method for supervised learning which can be used for prediction. It has a two-layer architecture where each neuron in the upper layer, the Kohonen layer, has a corresponding neuron in the lower layer, the output layer (see Figure 9-21). A trained counter-propagation network can be used as a look-up table: a neuron in one layer is used as a pointer to the other layer. [Pg.459]

The architecture of a counter-propagation network resembles that of a Kohonen network, but in addition to the cubic Kohonen layer (input layer) it has an additional layer, the output layer. Thus, an input object consists of two parts, the m-dimensional input vector (just as for a Kohonen network) plus a second k-dimensional vector with the properties of the object. [Pg.459]

During training, the input layer is adapted as in a regular Kohonen network, i.e., the winning neuron is determined only on the basis of the input values. But in contrast to the training of a Kohonen network, the output layer is also adapted, which gives the opportunity to use the network for prediction. [Pg.460]
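The two-layer adaptation just described can be sketched as follows. This is a simplified NumPy illustration (winner-only updates, no neighborhood function, invented toy data), not the implementation from the text: the winner is chosen from the input layer alone, but both layers are moved toward the training example, and prediction reads the output layer like a look-up table.

```python
import numpy as np

def train_cpg(X, Y, shape=(4, 4), cycles=300, lr=0.5, seed=0):
    """Counter-propagation sketch: winner from the input (Kohonen) layer,
    adaptation of both the input and the output layer."""
    rng = np.random.default_rng(seed)
    I, J = shape
    w_in = rng.normal(size=(I, J, X.shape[1]))   # m-dimensional input layer
    w_out = rng.normal(size=(I, J, Y.shape[1]))  # k-dimensional output layer
    for t in range(cycles):
        a = lr * (1 - t / cycles)
        k = rng.integers(len(X))
        d = np.linalg.norm(w_in - X[k], axis=-1)     # input values only
        wi, wj = np.unravel_index(d.argmin(), d.shape)
        w_in[wi, wj] += a * (X[k] - w_in[wi, wj])
        w_out[wi, wj] += a * (Y[k] - w_out[wi, wj])  # supervised part
    return w_in, w_out

def predict(x, w_in, w_out):
    """Look-up table use: the winning input neuron points to its output neuron."""
    d = np.linalg.norm(w_in - x, axis=-1)
    wi, wj = np.unravel_index(d.argmin(), d.shape)
    return w_out[wi, wj]

# Toy property to learn: y = sum of the two input values.
X = np.random.default_rng(3).uniform(-1, 1, size=(60, 2))
Y = X.sum(axis=1, keepdims=True)
w_in, w_out = train_cpg(X, Y)
print(predict(X[0], w_in, w_out).shape)  # (1,)
```

The only change relative to the Kohonen training loop is the extra update of `w_out`, which is what turns the unsupervised map into a supervised predictor.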

It provides unsupervised (Kohonen network) and supervised (counter-propagation network) learning techniques with planar and toroidal topology of the network. [Pg.461]

The left-hand side gives the Kohonen network, which can be investigated by clicking on the neuron. The contents of the neuron, here the chemical structures, are shown in an additional window plotted on the right-hand side of the figure. [Pg.461]

The usage of a neural network varies depending on the aim and especially on the network type. This tutorial covers two applications: on the one hand, the usage of a Kohonen network for classification, and on the other hand, the prediction of object properties with a counter-propagation network. [Pg.463]

Kohonen network Counter-propagation Back-propagation... [Pg.465]

The GA was then applied to select those descriptors which give the best classification of the structures when a Kohonen network is used. The objective function was based on the quality of the classification done by a neural network for the reduced descriptors. [Pg.472]

In clustering, data vectors are grouped together into clusters on the basis of intrinsic similarities between these vectors. In contrast to classification, no classes are defined beforehand. A commonly used method is the application of Kohonen networks (cf. Section 9.5.3). [Pg.473]

One application of clustering could, for example, be the comparison of compound libraries: a training set is chosen which contains members of both libraries. After the structures are coded (cf. Chapter 8), a Kohonen network (cf. Section 9.5.3) is trained and arranges the structures within the Kohonen map in relation to their structural similarity. Thus, the overlap between the two different libraries of compounds can be determined. [Pg.473]

[11] T. Kohonen, Self-Organizing Maps, Springer, Berlin, 1997. [Pg.484]


