
Training Kohonen neural networks

Classifying and Proposing Phase Equilibrium Methods with Trained Kohonen Neural Network... [Pg.827]

Now, one may ask: what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
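To make that preprocessing concrete, here is a minimal NumPy sketch of applying SVD to a centred data matrix before feeding the reduced scores to a Kohonen network. The matrix contents and the truncation rank k are illustrative assumptions, not values from the text.

```python
import numpy as np

# Hypothetical data matrix: 200 samples x 64 raw descriptor values.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))

# Centre the columns, then factor Xc = U S Vt (thin SVD).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep the first k right singular vectors; k = 10 is an assumed
# choice (e.g. enough components to retain most of the variance).
k = 10
X_reduced = Xc @ Vt[:k].T  # reduced scores used as Kohonen-network input

print(X_reduced.shape)  # (200, 10)
```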

Figure 8-11. Training of a Kohonen neural network with a chirality code. The number of weights in a neuron is the same as the number of elements in the chirality code vector. When a chirality code is presented to the network, the neuron with the weights most similar to the chirality code is excited (this is the winning or central neuron) (see Section 9.5.3).
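As a sketch of the winner selection described in the caption, the following snippet finds the neuron whose weight vector is closest (smallest Euclidean distance) to an input vector. The 10x10 map size and the 73-element code length are arbitrary assumptions for illustration.

```python
import numpy as np

def winning_neuron(weights, x):
    """Return (row, col) of the neuron whose weight vector is most
    similar (smallest Euclidean distance) to the input vector x.
    weights has shape (rows, cols, n_features)."""
    d = np.linalg.norm(weights - x, axis=2)        # distance map
    return np.unravel_index(np.argmin(d), d.shape)

# Hypothetical 10x10 map and a 73-element "chirality code" vector.
rng = np.random.default_rng(1)
w = rng.random((10, 10, 73))
code = rng.random(73)
print(winning_neuron(w, code))  # e.g. (3, 7)
```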
Counterpropagation (CPG) Neural Networks are a type of ANN consisting of multiple layers (i.e., input, output, map) in which the hidden layer is a Kohonen neural network. This model eliminates the need for back-propagation, thereby reducing training time. [Pg.112]

Training a Kohonen neural network with a molecular descriptor and a spectrum vector models the rather complex relationship between a molecule and an infrared spectrum. This relationship is stored in the Kohonen network by assigning the weights through a competitive learning technique from a suitable training set of... [Pg.179]
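A minimal sketch of one such competitive-learning step, assuming a Gaussian neighbourhood on the map grid; the learning rate eta and neighbourhood width sigma are illustrative values, and in practice both are decreased during training. Repeated over a suitable training set, this is what assigns the weights mentioned in the passage.

```python
import numpy as np

def som_train_step(weights, x, eta=0.1, sigma=2.0):
    """One competitive-learning update: locate the winning neuron,
    then pull it and its map neighbours towards the input vector x.
    weights has shape (rows, cols, n_features); updated in place."""
    rows, cols, _ = weights.shape
    d = np.linalg.norm(weights - x, axis=2)
    wr, wc = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighbourhood function centred on the winner.
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    h = np.exp(-((r - wr) ** 2 + (c - wc) ** 2) / (2.0 * sigma ** 2))
    weights += eta * h[..., None] * (x - weights)
    return wr, wc
```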

We have seen that RDF descriptors are one-dimensional representations of the 3D structure of a molecule. A classification of molecular structures containing characteristic structural features shows how effectively the descriptor preserves the 3D structure information. For this experiment, Cartesian RDF descriptors were calculated for a mixed data set of 100 benzene derivatives and 100 cyclohexane derivatives. Each compound was assigned to one of these classes, and a Kohonen neural network was trained with these data. The task for the Kohonen network was to classify the compounds according to their Cartesian RDF descriptors. [Pg.191]
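For orientation, a hedged sketch of a Cartesian RDF descriptor as a smoothed histogram of interatomic distances, g(r) = sum over atom pairs of a_i a_j exp(-B (r - r_ij)^2). The smoothing parameter B, the distance grid, and the per-atom weights below are assumed defaults, not the settings used in the experiment.

```python
import numpy as np

def rdf_descriptor(coords, props=None, B=100.0, r_grid=None):
    """Cartesian RDF code g(r) = sum_{i<j} a_i a_j exp(-B (r - r_ij)^2).
    coords: (n_atoms, 3) Cartesian coordinates; props: per-atom weights
    a_i (all 1.0 if omitted). B and the r grid are assumed defaults."""
    n = len(coords)
    if props is None:
        props = np.ones(n)
    if r_grid is None:
        r_grid = np.linspace(0.5, 10.0, 128)  # assumed sampling range
    g = np.zeros_like(r_grid)
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = np.linalg.norm(coords[i] - coords[j])
            g += props[i] * props[j] * np.exp(-B * (r_grid - r_ij) ** 2)
    return g
```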

The Kohonen neural networks were chosen to prepare a model for fast selection of the most suitable phase equilibrium method(s) to be used in efficient vapor-liquid chemical process design and simulation. They were trained to classify the objects of the study (the known physical properties and parameters of samples) into none, one, or more possible classes (possible methods of phase equilibrium) and to estimate the reliability of the proposed classes (adequacy of different methods of phase equilibrium). Of the several architectures tested, the Kohonen network yielding the best separation of clusters was chosen. Besides the main Kohonen map, maps of physical properties and parameters, as well as phase equilibrium probability maps, were obtained from horizontal intersections of the neural network. The trained neural network thus represents a proposal of phase equilibrium methods. [Pg.827]
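The "horizontal intersections" mentioned above correspond to weight planes of the trained network: the map of every neuron's weight at one input level. A minimal sketch, assuming the weights are stored as a (rows, cols, n_levels) array; the number of levels is an assumption for illustration.

```python
import numpy as np

def weight_plane(weights, level):
    """Horizontal intersection of a trained Kohonen network: the 2-D
    map of weights at one level, e.g. one physical property or one
    phase-equilibrium class probability."""
    return weights[:, :, level]

# Hypothetical 70x70 network with 12 weight levels per neuron.
w = np.random.default_rng(2).random((70, 70, 12))
plane = weight_plane(w, level=4)  # map of the fifth input property
print(plane.shape)  # (70, 70)
```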

Not all neural networks are the same: their connections, elemental functions, training methods, and applications may differ in significant ways. The types of elements in a network and the connections between them are referred to as the network architecture. Commonly used elements in artificial neural networks will be presented in Chapter 2. The multilayer perceptron, one of the most commonly used architectures, is described in Chapter 3. Other architectures, such as radial basis function networks and self-organizing maps (SOMs), or Kohonen architectures, will be described in Chapter 4. [Pg.17]

Once the network is trained, the topological map represents a classification sheet. Some or all of the units in the topological map may be labeled with class names. A new vector presented to the Kohonen network ends up in its central neuron in the topological map layer; if the distance to that neuron's weights is small enough, the case is assigned to its class. The central neuron points to the corresponding neuron in the output layer. A CPG neural network is thus able to evaluate the relationships between input and output information and to make predictions for missing output information. [Pg.108]
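A hedged sketch of that lookup: find the winner in the Kohonen (input) layer, then read the prediction from the same grid position in the output layer. All array shapes below are illustrative assumptions.

```python
import numpy as np

def cpg_predict(kohonen_w, output_w, x):
    """Counter-propagation prediction: locate the central (winning)
    neuron in the Kohonen layer for input x, then return the output
    weights stored at the same map position."""
    d = np.linalg.norm(kohonen_w - x, axis=2)
    wr, wc = np.unravel_index(np.argmin(d), d.shape)
    return output_w[wr, wc]

# Hypothetical 15x15 CPG net: 8 input and 3 output weights per neuron.
rng = np.random.default_rng(3)
k_w = rng.random((15, 15, 8))
o_w = rng.random((15, 15, 3))
print(cpg_predict(k_w, o_w, rng.random(8)))  # predicted 3-element output
```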

Figure 2. The main Kohonen map of a 70x70 neural network trained for 900 epochs.
Using Kohonen unsupervised learning, it was possible to classify phase equilibrium methods on the basis of different combinations of physical properties and parameters. The trained neural network can estimate the reliability of the appropriate phase equilibrium methods and can be used much like an expert system. Because all of its weights are trained, it also gives results in situations for which it was never explicitly trained: there exist more than 3000 unlabeled neurons with trained weights. This is an advantage over classical expert systems, which, in the best case, can only warn the user about unsolvable situations. [Pg.832]

Finally, one class of unsupervised methods is represented by self-organising maps (SOMs), or Kohonen maps, named after the Finnish professor Teuvo Kohonen. A SOM is a type of artificial neural network that needs to be trained but does not require labelling of the input vectors. Examples of classification analysis by SOMs in biomedical IR and Raman spectroscopy are given in the references. [Pg.213]

Self-Organizing Maps (SOMs), or Kohonen maps, are a type of Artificial Neural Network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional) discretized representation of the input space of the training samples (Zhong et al. 2005). [Pg.896]


See other pages where Training Kohonen neural networks is mentioned: [Pg.692], [Pg.123], [Pg.346], [Pg.139], [Pg.553], [Pg.361], [Pg.365], [Pg.829], [Pg.343], [Pg.163], [Pg.497], [Pg.530], [Pg.307], [Pg.56], [Pg.113], [Pg.367], [Pg.18], [Pg.51], [Pg.136], [Pg.165], [Pg.184], [Pg.190], [Pg.93], [Pg.99], [Pg.309], [Pg.364], [Pg.249], [Pg.107], [Pg.178], [Pg.30], [Pg.113], [Pg.1932], [Pg.322], [Pg.2039], [Pg.331], [Pg.341], [Pg.157]