Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Kohonen layer

A counter-propagation network is a method for supervised learning which can be used for prediction. It has a two-layer architecture where each neuron in the upper layer, the Kohonen layer, has a corresponding neuron in the lower layer, the output layer (see Figure 9-21). A trained counter-propagation network can be used as a look-up table: a neuron in one layer is used as a pointer to the other layer. [Pg.459]

The architecture of a counter-propagation network resembles that of a Kohonen network, but in addition to the cubic Kohonen layer (input layer) it has an additional layer, the output layer. Thus, an input object consists of two parts: the m-dimensional input vector (just as for a Kohonen network) plus a second k-dimensional vector with the properties of the object. [Pg.459]
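The look-up-table use of a trained counter-propagation network can be sketched as follows. All dimensions, weights, and function names here are illustrative assumptions, not taken from the text: the winning Kohonen neuron for an m-dimensional input simply points to the k-dimensional property vector stored at the same position in the output layer.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 25          # neurons in the Kohonen layer (e.g. a 5x5 map) - assumed
m = 8                   # dimension of the input vector - assumed
k = 3                   # dimension of the property (output) vector - assumed

# Stand-ins for a *trained* network; real weights would come from training.
kohonen_weights = rng.random((n_neurons, m))   # upper (Kohonen) layer
output_weights = rng.random((n_neurons, k))    # lower (output) layer

def predict(x):
    """Select the winning Kohonen neuron for input x and return the
    property vector of the corresponding output-layer neuron."""
    winner = np.argmin(np.linalg.norm(kohonen_weights - x, axis=1))
    return output_weights[winner]

x = rng.random(m)
y_pred = predict(x)     # k-dimensional predicted property vector
```

The pointer behaviour described in the text is the `argmin` step: one neuron in the Kohonen layer indexes directly into the output layer.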

In two dimensions, the nodes occupy the vertices of a regular lattice, which is usually rectangular (Figure 3.5). This layer of nodes is sometimes known as a Kohonen layer in recognition of the work of Teuvo Kohonen (a Finnish academician and researcher) in developing the SOM. [Pg.57]
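Locating the winning node on such a rectangular lattice can be sketched as below. The grid size and input dimension are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
rows, cols, m = 4, 5, 3                 # assumed 4x5 lattice, 3-dim inputs
weights = rng.random((rows, cols, m))   # one m-dim weight vector per node

def winner(x):
    """Return the (row, col) lattice position of the node whose
    weight vector is closest to the input x."""
    d = np.linalg.norm(weights - x, axis=2)       # distance at every node
    return np.unravel_index(np.argmin(d), d.shape)

r, c = winner(rng.random(m))
```

During SOM training the nodes neighbouring `(r, c)` on the lattice would also be updated, which is what produces the topological ordering of the map.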

An enhanced concept of Kohonen networks is the CPG neural network, first introduced by Hecht-Nielsen [64]. The CPG network can be established using basically a Kohonen layer and an additional output layer. The input layer contains the input objects (e.g., molecular descriptors). The output layer contains the variables to be predicted, such as a one- or multidimensional property of the corresponding molecules. Additionally, a topological map layer [65,66] may be added that contains classes for the individual test cases (Figure 4.15). [Pg.107]

When the training is finished, a topological map layer is generated and colored in the order of classes as defined by the user. By clicking on a colored square in the topological map, the contents can be investigated in a separate window. This window contains a preview of the input (Kohonen) layer, the output layer (if a property vector was used), and the three-dimensional model of the corresponding molecules for which the descriptor has been calculated. [Pg.157]

As a result of learning, the patterns are arranged in clusters, provided the data vectors can be grouped in a Kohonen layer. The clusters can then be explored to assign objects to them. [Pg.319]

Learning vector quantization (LVQ) is a supervised learning technique invented by Teuvo Kohonen (1988, 1990). The LVQ network is the precursor of the self-organizing map NN. Both of them are based on the Kohonen layer, which is capable of sorting items into categories of similar objects with the aid of training samples, and are widely used for classification. [Pg.30]
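A single training step of the basic LVQ1 rule can be sketched as follows: the nearest prototype (codebook) vector is moved toward the sample if their classes agree, and away from it otherwise. The prototypes, labels, and learning rate here are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
codebooks = rng.random((4, 2))        # 4 prototype vectors in 2-D (assumed)
labels = np.array([0, 0, 1, 1])       # class of each prototype (assumed)
lr = 0.1                              # learning rate (assumed)

def lvq1_step(x, y):
    """One LVQ1 update for sample x with class label y."""
    i = np.argmin(np.linalg.norm(codebooks - x, axis=1))  # nearest prototype
    sign = 1.0 if labels[i] == y else -1.0                # attract or repel
    codebooks[i] += sign * lr * (x - codebooks[i])
    return i

i = lvq1_step(np.array([0.2, 0.8]), 1)
```

After many such steps the prototypes settle near the class regions, and classification amounts to the same nearest-prototype lookup used in the Kohonen layer.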

Hybrid networks combine the features of two or more types of ANN, the idea being to highlight the strengths and minimize the weaknesses of the different networks. Examples are the Hamming network, which has a perceptron-like and a Hopfield-like layer, and the counterpropagation network, which has a Kohonen layer and a Grossberg outstar layer. [Pg.87]

Figure 7 A simple counterpropagation network; see Figure 5 for an explanation of the labels. The Kohonen layer is fully connected to the input layer, and the output layer is fully connected to the Kohonen layer.
The counterpropagation network was originally proposed by Hecht-Nielsen (1987). In this section a modified feedforward version as described by Zurada (1992) is discussed. This network, which is shown in Fig. 19.25, requires a number of hidden neurons equal to the number of input patterns, or more exactly, to the number of input clusters. The first layer is known as the Kohonen layer, with unipolar neurons. In this layer only one neuron, the winner, can be active. The second is the Grossberg outstar layer. The Kohonen layer can be trained in the unsupervised mode, but that need not be the case. When binary input patterns are considered, then the input weights must be exactly equal to the input patterns. In this case,... [Pg.2050]

Figure 1 Architecture of a CPG neural network: a column of the input block constitutes a neuron having as many weights as there are variables for the structure code. The input block consists of a Kohonen layer. The output block constitutes a lookup table. A neuron of the output block has as many weights as there are data for an IR spectrum.
Figure 9 A counterpropagation ANN has two layers of neurons: one Kohonen and one output layer. In the Kohonen layer, to which objects X are input, the most excited neuron is selected. Corrections of weights are made around the position of the excited neuron (bold arrow) in the Kohonen and in the output layer, using equations (8) and (9), respectively.
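One counterpropagation training step of the kind the caption describes can be sketched as below. The exact forms of equations (8) and (9) are not reproduced in this excerpt, so simple winner-only corrections are assumed here (neighbourhood updates around the winner are omitted); all sizes and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 9, 4, 2            # neurons, input dim, output dim (assumed)
W_k = rng.random((n, m))     # Kohonen-layer weights
W_o = rng.random((n, k))     # output-layer weights
alpha = beta = 0.2           # learning rates (assumed)

def train_step(x, t):
    """Select the most excited Kohonen neuron for input x, then
    correct its weights in both layers toward (x, t)."""
    c = np.argmin(np.linalg.norm(W_k - x, axis=1))  # most excited neuron
    W_k[c] += alpha * (x - W_k[c])                  # Kohonen-layer correction
    W_o[c] += beta * (t - W_o[c])                   # output-layer correction
    return c

c = train_step(rng.random(m), rng.random(k))
```

The two update lines play the roles of the caption's equations (8) and (9): the same winning position is corrected in the Kohonen layer and in the output layer.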

See other pages where Kohonen layer is mentioned: [Pg.530]    [Pg.107]    [Pg.178]    [Pg.677]    [Pg.678]    [Pg.318]    [Pg.30]    [Pg.95]    [Pg.97]    [Pg.98]    [Pg.124]    [Pg.2051]    [Pg.1818]    [Pg.1818]    [Pg.1819]    [Pg.1819]    [Pg.2794]   
See also in source #XX -- [ Pg.57 , Pg.90 ]

See also in source #XX -- [ Pg.87 , Pg.95 , Pg.97 ]


© 2024 chempedia.info