Big Chemical Encyclopedia


Kohonen learning

Through the Kohonen learning algorithm, the individual weight vectors in the Kohonen map are arranged and oriented in such a way that the structure of the input space, i.e., its topology, is preserved as well as possible in the resulting... [Pg.691]

The result of unsupervised Kohonen learning, however, can also be used for classification. For this, an additional layer is introduced. The output of the trained Kohonen net is further trained against the known patterns or class information by means of an associative learning law. After adjustment of the additional weights, the net can subsequently be applied for classification. [Pg.319]
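The two-stage idea above can be sketched in a few lines of numpy. This is a minimal illustration, not the source's exact procedure: the map weights are assumed already trained, the class information is one-hot encoded, and the "associative learning law" is taken to be a simple delta-style update pulling each winning neuron's output weights toward the known class vector. All function names are hypothetical.

```python
import numpy as np

def train_output_layer(X, labels, weights, n_classes, eta=0.3, epochs=20):
    """Attach an output layer to a trained Kohonen map: for each input,
    the winning neuron's output weights are pulled toward the known
    one-hot class vector (a simple associative learning law)."""
    out = np.zeros((len(weights), n_classes))
    for _ in range(epochs):
        for x, lab in zip(X, labels):
            c = np.argmin(np.linalg.norm(weights - x, axis=1))
            target = np.eye(n_classes)[lab]
            out[c] += eta * (target - out[c])
    return out

def classify(x, weights, out):
    """Classify a new object via the output weights of its winning neuron."""
    c = np.argmin(np.linalg.norm(weights - x, axis=1))
    return int(np.argmax(out[c]))

# Two map neurons acting as cluster prototypes, with known classes 0 and 1
W = np.array([[0.0, 0.0], [1.0, 1.0]])
X = np.array([[0.1, 0.0], [0.9, 1.0]])
out = train_output_layer(X, [0, 1], W, n_classes=2)
```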

The aim of Kohonen learning is to map similar signals to similar neuron positions. The learning procedure is unsupervised competitive learning: in each cycle, the neuron c whose output is most similar to the input signal is found ... [Pg.828]
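The competitive step of finding the neuron c can be written compactly in numpy. This sketch assumes Euclidean distance between the input signal and each neuron's weight vector as the similarity criterion; the function name is illustrative.

```python
import numpy as np

def find_winner(x, weights):
    """Return the index of the neuron whose weight vector is closest
    (Euclidean distance) to the input signal x."""
    dists = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(dists))

# Four neurons with 3-dimensional weight vectors
W = np.array([[0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.5, 0.5, 0.5],
              [0.9, 0.1, 0.2]])
c = find_winner(np.array([0.95, 0.9, 1.0]), W)  # -> 1
```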

In Kohonen learning, each input of an object X triggers the following three consecutive actions ... [Pg.1818]
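The snippet is truncated before the three actions are listed, so the sketch below assumes the standard Kohonen steps: (1) locate the winning neuron, (2) correct the winner's weights toward the input, and (3) correct the neighbours' weights with strength decreasing with distance on the map. A 1-D map and a linear neighbourhood function are assumed for brevity.

```python
import numpy as np

def kohonen_step(x, weights, eta=0.5, radius=1):
    """One Kohonen learning cycle on a 1-D map:
    (1) find the winning neuron,
    (2) move its weights toward x,
    (3) move neighbouring weights toward x with reduced strength."""
    c = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # action 1
    for j in range(len(weights)):
        d = abs(j - c)                                       # grid distance
        if d <= radius:
            h = eta * (1.0 - d / (radius + 1))               # neighbourhood factor
            weights[j] += h * (x - weights[j])               # actions 2 and 3
    return c, weights

W = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
c, W = kohonen_step(np.array([1.1, 0.9]), W)
```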

On the other hand, if the ANN is trained without targets Tj (unsupervised Kohonen learning), the RMS is calculated by comparing the input vectors X with the weights of the excited neurons, i.e., with the weight vectors describing the excited neurons... [Pg.1820]
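This error measure can be sketched as follows. The normalization is an assumption (here the total number of matrix elements), since the snippet does not give the exact formula; the function name is illustrative.

```python
import numpy as np

def unsupervised_rms(X, weights):
    """RMS error of a Kohonen net trained without targets: each input
    vector is compared with the weight vector of its excited (winning)
    neuron. Normalization over all elements of X is an assumed choice."""
    sq = 0.0
    for x in X:
        c = np.argmin(np.linalg.norm(weights - x, axis=1))
        sq += np.sum((x - weights[c]) ** 2)
    return float(np.sqrt(sq / X.size))
```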

Mapping can also be applied when the frequency distribution serves to select a sub-set of data with a pre-specified distribution of objects. For example, one may wish to take the same number of objects from each group, to find outliers and their closest regular objects, or to delete the objects close to a certain region in the measurement space. In many cases, the grouping obtained by a Kohonen learning ANN is used to test unknown objects: the positions of the excited neurons provide information about the nature of the unknown with respect to the formed clusters. Such mappings are quite helpful in environmental studies as well, ... [Pg.1824]
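The frequency distribution mentioned above is simply a hit count per neuron. A minimal sketch, assuming a trained map and Euclidean winner selection; sparsely populated neurons can then flag candidate outlier regions, and balanced sub-sets can be drawn per neuron.

```python
import numpy as np

def hit_map(X, weights):
    """Frequency distribution over a trained map: count how many objects
    excite each neuron. Can be used to select sub-sets with a specified
    distribution or to spot sparsely populated (outlier) regions."""
    counts = np.zeros(len(weights), dtype=int)
    for x in X:
        counts[np.argmin(np.linalg.norm(weights - x, axis=1))] += 1
    return counts

W = np.array([[0.0, 0.0], [1.0, 1.0]])
X = np.array([[0.1, 0.0], [0.0, 0.1], [0.9, 1.0]])
```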

An observation of the results of cross-validation revealed that all but one of the compounds in the dataset had been modeled quite well. The last (31st) compound behaved anomalously. When we looked at its chemical structure, we saw that it was the only compound in the dataset which contained a fluorine atom. What would happen if we removed the compound from the dataset? The quality of learning improved substantially: the cross-validation coefficient increased from 0.82 to 0.92, while the error decreased from 0.65 to 0.44. Another learning method, Kohonen's Self-Organizing Map, also failed to classify this 31st compound correctly. Hence, we had to conclude that the compound containing a fluorine atom was an obvious outlier of the dataset. [Pg.206]

Now, one may ask: what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD when it is applied to the data matrix before training any neural network, i.e., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
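Applying SVD to the data matrix before network training can be sketched as below. Column-centering before the decomposition is an assumed (though conventional) preprocessing choice; the function name is illustrative.

```python
import numpy as np

def svd_transform(X, k):
    """Project the data onto its first k singular directions before
    training a Kohonen or counter-propagation network."""
    Xc = X - X.mean(axis=0)                          # column-center first
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                             # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.random((5, 4))
T = svd_transform(X, 2)  # 5 objects described by 2 score variables
```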

The Kohonen network is a neural network which uses an unsupervised learning strategy. See Section 9.5.3 for a more detailed description. [Pg.455]

The Kohonen network adapts its values only with respect to the input values and thus reflects the input data. This approach is unsupervised learning, as the adaptation is done merely with respect to the data describing the individual objects. [Pg.458]

A counter-propagation network is a method for supervised learning which can be used for prediction. It has a two-layer architecture where each neuron in the upper layer, the Kohonen layer, has a corresponding neuron in the lower layer, the output layer (see Figure 9-21). A trained counter-propagation network can be used as a look-up table: a neuron in one layer is used as a pointer to the other layer. [Pg.459]
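The look-up behaviour can be sketched in a few lines: the winner in the Kohonen layer points to its counterpart in the output layer, whose weights are returned as the prediction. Trained weights for both layers are assumed given; the function name is illustrative.

```python
import numpy as np

def cpn_predict(x, kohonen_w, output_w):
    """Counter-propagation look-up: the winning neuron in the Kohonen
    layer points to the corresponding neuron in the output layer,
    whose weight vector is returned as the prediction."""
    c = int(np.argmin(np.linalg.norm(kohonen_w - x, axis=1)))
    return output_w[c]

K = np.array([[0.0, 0.0], [1.0, 1.0]])   # trained Kohonen layer
O = np.array([[10.0], [20.0]])           # corresponding output layer
```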

It provides unsupervised (Kohonen network) and supervised (counter-propagation network) learning techniques with planar and toroidal topologies of the network. [Pg.461]

As described in the Introduction to this volume (Chapter 28), neural networks can be used to carry out certain tasks of supervised or unsupervised learning. In particular, Kohonen mapping is related to clustering. It will be explained in more detail in Chapter 44. [Pg.82]

The training process of a Kohonen network consists of a competitive learning procedure and can be summarized as follows ... [Pg.688]
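The snippet is truncated before the summary, so the loop below assumes the standard competitive training scheme: for every input, the winner and its neighbours are pulled toward the input, while the learning rate and neighbourhood radius shrink over the epochs. A 1-D map with linear decay schedules is assumed; all parameter names are illustrative.

```python
import numpy as np

def train_som(X, n_units, epochs=50, eta0=0.5, r0=None, seed=0):
    """Competitive training of a 1-D Kohonen map with shrinking
    learning rate and neighbourhood radius."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, X.shape[1]))            # random initial weights
    r0 = r0 if r0 is not None else n_units // 2
    for t in range(epochs):
        eta = eta0 * (1 - t / epochs)                # decaying learning rate
        radius = max(1, int(r0 * (1 - t / epochs)))  # shrinking neighbourhood
        for x in X:
            c = np.argmin(np.linalg.norm(W - x, axis=1))
            for j in range(n_units):
                d = abs(j - c)
                if d <= radius:
                    W[j] += eta * (1 - d / (radius + 1)) * (x - W[j])
    return W
```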

The output-activity map. A trained Kohonen network yields, for a given input object x, one winning unit whose weight vector is closest (as defined by the criterion used in the learning procedure) to x. However, x may be close to the weight vectors w of other units as well. The output yj of the units of the map can also be defined as ... [Pg.690]
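The definition of yj is truncated in the snippet, so the sketch below assumes one common choice: a response that decays with the distance between the input and each unit's weight vector, so that units near the input are strongly activated rather than only the single winner.

```python
import numpy as np

def activity_map(x, weights):
    """Output activity of every unit for one input object: each unit j
    responds with y_j = exp(-||x - w_j||), an assumed decaying form, so
    units whose weights are close to x are strongly activated."""
    d = np.linalg.norm(weights - x, axis=1)
    return np.exp(-d)

W = np.array([[0.0, 0.0], [1.0, 1.0]])
y = activity_map(np.array([0.1, 0.1]), W)
```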

R. Goodacre, J. Pygall and D.B. Kell, Plant seed classification using pyrolysis mass spectrometry with unsupervised learning: the application of auto-associative and Kohonen artificial neural networks. Chemom. Intell. Lab. Syst., 33 (1996) 69-83. [Pg.698]

The growing cell structure algorithm is a variant of a Kohonen network, so the GCS displays several similarities with the SOM. The most distinctive feature of the GCS is that the topology is self-adaptive, adjusting as the algorithm learns about classes in the data. So, unlike the SOM, in which the layout of nodes is regular and predefined, the GCS is not constrained in advance to a particular size of network or a certain lattice geometry. [Pg.98]

A self-organizing Kohonen map of the total database of cleaved retrosynthetic fragments, generated as the result of an unsupervised learning procedure (data not shown), indicates that the cleaved fragments occupy a wide area on the map, characterized... [Pg.298]

It can be shown that the unsupervised learning methodology based on the Kohonen self-organizing map algorithm can be effectively used to differentiate between various receptor-specific groups of GPCR ligands. The method is similar to that described in Section 12.2.6. [Pg.307]


See other pages where Kohonen learning is mentioned: [Pg.687]    [Pg.123]    [Pg.484]    [Pg.346]    [Pg.342]    [Pg.99]    [Pg.1817]    [Pg.1818]    [Pg.1823]    [Pg.464]    [Pg.193]    [Pg.207]    [Pg.441]    [Pg.499]    [Pg.347]    [Pg.350]    [Pg.555]    [Pg.366]    [Pg.383]    [Pg.21]    [Pg.185]    [Pg.298]    [Pg.307]    [Pg.27]    [Pg.257]    [Pg.573]    [Pg.323]
See also in source #XX -- [Pg.3, Pg.1817]



