
Training Kohonen maps

An interesting example of using Kohonen maps for the analysis of protein sequences is given in a journal article by Hanke (Hanke & Reich, 1996). In this application, a trained Kohonen map network was used to identify protein families, aligned sequences or segments of similar secondary structure, in a highly visual manner. [Pg.49]

Figure 11 U-matrix of trained Kohonen map for the PCs of all para-substituted 1,4-dihydropyridines... [Pg.315]

There are finer details to be extracted from such Kohonen maps that directly reflect chemical information and have chemical significance. A more extensive discussion of the chemical implications of the mapping of the entire dataset can be found in the original publication [28]. Clearly, such a map can now be used for the assignment of a reaction to a certain reaction type. Calculating the physicochemical descriptors of a reaction allows it to be input into this trained Kohonen network. If this reaction is mapped, say, in the area of Friedel-Crafts reactions, it can safely be classified as a feasible Friedel-Crafts reaction. [Pg.196]
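A minimal sketch of this classification step (NumPy only). The weight grid `weights` and the region annotation `cell_labels` are hypothetical stand-ins for a trained network and its labelled reaction-type areas; they are not taken from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, n_desc = 10, 10, 7          # toy map size and descriptor count (assumptions)
weights = rng.normal(size=(rows, cols, n_desc))
cell_labels = {(i, j): "Friedel-Crafts" if i < 5 else "other"
               for i in range(rows) for j in range(cols)}   # hypothetical annotation

def classify_reaction(descriptors, weights, cell_labels):
    """Map a physicochemical descriptor vector to its winning cell
    and return that cell's reaction-type label."""
    d = np.linalg.norm(weights - descriptors, axis=2)   # distance to every cell
    winner = np.unravel_index(np.argmin(d), d.shape)    # closest (row, col)
    return winner, cell_labels[winner]

x = rng.normal(size=n_desc)             # descriptors of the query reaction (toy values)
cell, label = classify_reaction(x, weights, cell_labels)
print(f"mapped to cell {cell}: classified as {label}")
```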

One application of clustering could, for example, be the comparison of compound libraries: a training set is chosen which contains members of both libraries. After the structures are coded (cf. Chapter 8), a Kohonen network (cf. Section 9.5.3) is trained and arranges the structures within the Kohonen map in relation to their structural similarity. Thus, the overlap between the two different libraries of compounds can be determined. [Pg.473]
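A hedged sketch of the comparison idea: once the trained map has allocated every coded structure to a cell, the overlap of the two libraries can be measured from the sets of occupied cells. `cells_A` and `cells_B` below are assumed outputs of such a map, with toy values.

```python
def library_overlap(cells_A, cells_B):
    """Jaccard-style overlap of the map cells occupied by two libraries."""
    occ_A, occ_B = set(cells_A), set(cells_B)
    return len(occ_A & occ_B) / len(occ_A | occ_B)

cells_A = [(0, 1), (2, 3), (4, 4), (2, 3)]    # toy cell assignments, library A
cells_B = [(2, 3), (4, 4), (7, 7)]            # toy cell assignments, library B
print(f"map overlap: {library_overlap(cells_A, cells_B):.2f}")
```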

The output-activity map. A trained Kohonen network yields, for a given input object x_i, one winning unit whose weight vector is closest (as defined by the criterion used in the learning procedure) to x_i. However, x_i may be close to the weight vectors w_j of other units as well. The output y_j of the units of the map can also be defined as ... [Pg.690]
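The snippet truncates before giving the definition of y_j; one common distance-based choice (an assumption here, not the cited text's formula) is a Gaussian of the input-to-weight distance, so that units whose weights are close to x also respond.

```python
import numpy as np

def output_activity(x, weights, sigma=1.0):
    """One assumed definition: y_j = exp(-||x - w_j||^2 / (2*sigma^2))."""
    d2 = np.sum((weights - x) ** 2, axis=2)   # squared distance to every unit
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(1)
weights = rng.normal(size=(8, 8, 5))     # toy trained 8x8 map, 5-dim inputs
x = rng.normal(size=5)
act = output_activity(x, weights)        # whole-map activity, not just the winner
print("winning unit:", np.unravel_index(np.argmax(act), act.shape))
```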

Sammon maps and Kohonen maps have been used with a consistent training set (93 molecules) and test set (35 molecules) to compare the classification... [Pg.363]

FIGURE 6.26 The Kohonen map of a network trained with 64 absorbance values from the initial range of atomization signals of different compounds shows a clear separation into four areas of atomization processes. The only conflict occurs with thermal dissociation of metal carbides and metal dimers. The map indicates the central neurons for metals from an independent test set. [Pg.215]

The following procedure should be carried out for Kohonen network training and map generation. [Pg.29]

Run the learning process. Once training is complete, the resulting Kohonen map can be readily formed; the model can then be saved and reused for testing and for visualizing other objects on the same map, as sketched below. [Pg.30]
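A runnable sketch of this procedure, assuming nothing beyond NumPy: train a small map on toy data, save the trained weights, then reload them and place a new object on the same map. Map size, parameter schedules and data are illustrative assumptions, not values from the cited text.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(200, 4))                 # toy training objects
rows, cols, dim, n_iter = 6, 6, data.shape[1], 2000
W = rng.normal(size=(rows, cols, dim))           # random initial weights
grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))

for t in range(n_iter):
    x = data[rng.integers(len(data))]
    lr = 0.5 * (1.0 - t / n_iter)                # decaying learning rate
    sig = max(0.5, 3.0 * (1.0 - t / n_iter))     # shrinking neighbourhood radius
    win = np.unravel_index(np.argmin(np.sum((W - x) ** 2, axis=2)), (rows, cols))
    h = np.exp(-np.sum((grid - np.array(win)) ** 2, axis=2) / (2 * sig ** 2))
    W += lr * h[..., None] * (x - W)             # pull winner and neighbours toward x

np.save("kohonen_weights.npy", W)                # save the trained model

W2 = np.load("kohonen_weights.npy")              # ...and reuse it later
new_obj = rng.normal(size=dim)
cell = np.unravel_index(np.argmin(np.sum((W2 - new_obj) ** 2, axis=2)), (rows, cols))
print("new object falls on map cell", cell)
```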

Alternatively, the Kohonen map can simply be used to identify the combustion state relative to others previously known (used at the training stage). Assigning a winner neuron to a new input image yields a particular location in the map that can coincide with a particular combustion regime, or can be used to find the most similar regimes, those located at short distances on the map. The comparison between estimated and actual NO emissions also serves as an indication of the accuracy of the SOFM as a flame-identification tool. [Pg.346]
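A hedged illustration of this use: assign a winner neuron to a new input and rank previously labelled regimes by their distance on the map. `regime_cells` (cell → regime label) is a hypothetical annotation produced at the training stage, not data from the cited work.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(10, 10, 16))          # stands in for a trained SOFM
regime_cells = {(1, 1): "lean", (8, 2): "stoichiometric", (5, 9): "rich"}

x = rng.normal(size=16)                    # new (flattened) flame-image features
win = np.unravel_index(np.argmin(np.sum((W - x) ** 2, axis=2)), W.shape[:2])

# rank known regimes by Euclidean distance between map cells
ranked = sorted(regime_cells.items(),
                key=lambda kv: np.hypot(kv[0][0] - win[0], kv[0][1] - win[1]))
print("winner:", win, "-> most similar regime:", ranked[0][1])
```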

After training the Kohonen network, the entire dataset was sent through the network, marking neurons containing a hit in magenta and those populated with inactive compounds in red. Whenever both hits and inactive compounds were assigned to the same neuron, the most frequently observed compound class in that neuron determined its colour. The Kohonen maps shown in Plate 1 are the top view of the Kohonen network; each neuron is represented by a little square. [Pg.138]
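A small sketch of that majority-class colouring rule. The cell assignments are toy values, not the screening data of the study.

```python
from collections import Counter, defaultdict

assignments = [((0, 0), "hit"), ((0, 0), "inactive"), ((0, 0), "hit"),
               ((1, 2), "inactive"), ((3, 3), "hit")]   # toy (cell, class) pairs

per_neuron = defaultdict(list)
for cell, cls in assignments:
    per_neuron[cell].append(cls)

colour = {"hit": "magenta", "inactive": "red"}
# each neuron takes the colour of its most frequent compound class
neuron_colour = {cell: colour[Counter(classes).most_common(1)[0][0]]
                 for cell, classes in per_neuron.items()}
print(neuron_colour)   # e.g. {(0, 0): 'magenta', (1, 2): 'red', (3, 3): 'magenta'}
```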

In the second step, a Kohonen network was trained with the training data. A rectangular topology of the network with a size of 48 × 38 = 1824 neurons was chosen, i.e. the ratio of data points to the number of neurons is about 2:1. The resulting Kohonen map for the training set is depicted in Plate 3 (see plate section). [Pg.139]

Plate 3 Kohonen maps obtained by representing molecules by autocorrelation of the hydrogen-bonding potential: (a) training set; (b) filter obtained by recolouring (see text); (c) test set. [Pg.139]

Kohonen neural networks were chosen to build a model for fast selection of the most suitable phase equilibrium method(s) for efficient vapor-liquid chemical process design and simulation. They were trained to classify the objects of the study (the known physical properties and parameters of samples) into none, one or more possible classes (possible methods of phase equilibrium) and to estimate the reliability of the proposed classes (adequacy of different methods of phase equilibrium). Of the several architectures tested, the Kohonen network yielding the best separation of clusters was chosen. Besides the main Kohonen map, maps of physical properties and parameters, as well as phase equilibrium probability maps, were obtained as horizontal intersections of the neural network. The trained neural network thus represents a proposition of phase equilibrium methods. [Pg.827]
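The "horizontal intersections" correspond to what the SOM literature calls component planes: one 2-D slice of the weight cube per input variable. A minimal NumPy sketch, with a toy weight cube standing in for the trained 70×70 network of the study:

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.normal(size=(70, 70, 5))          # toy cube: 5 properties/parameters assumed

# plane k shows how property k varies across the map; viewed side by side,
# such slices give the property and probability maps described above
planes = [W[:, :, k] for k in range(W.shape[2])]
print(len(planes), "component planes of shape", planes[0].shape)
```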

Figure 2 The main Kohonen map of a 70×70 neural network trained through 900 epochs.
Finally, one class of unsupervised methods is represented by self-organising maps (SOM), or Kohonen maps, named after the Finnish professor Teuvo Kohonen. A SOM is a type of artificial neural network that needs to be trained but does not require labelling of the input vectors. Examples of classification analysis by SOMs in biomedical IR and Raman spectroscopy are given in references. ... [Pg.213]

Self-Organizing Maps (SOMs), or Kohonen maps, are a type of Artificial Neural Network (ANN) trained with unsupervised learning to produce a low-dimensional discretized representation (typically 2-dimensional) of an input space of arbitrary dimension from the training samples (Zhong et al. 2005). [Pg.896]

Training of the map is based on three principles, repeated for NI iterations (Kohonen 1997; Araujo & Barreto 2002) ...
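The snippet truncates before naming the principles; in the SOM literature they are usually given as competition (winner selection), cooperation (the neighbourhood function) and adaptation (the weight update). A NumPy sketch of one such iteration under that reading, with illustrative parameter schedules:

```python
import numpy as np

def som_iteration(W, grid, x, lr, sigma):
    # 1. competition: the unit whose weights are closest to x wins
    win = np.unravel_index(np.argmin(np.sum((W - x) ** 2, axis=2)), W.shape[:2])
    # 2. cooperation: neighbours of the winner share in the update
    h = np.exp(-np.sum((grid - np.array(win)) ** 2, axis=2) / (2 * sigma ** 2))
    # 3. adaptation: weights move toward x, scaled by lr and neighbourhood
    return W + lr * h[..., None] * (x - W)

rng = np.random.default_rng(0)
rows, cols, dim, NI = 20, 20, 3, 5000      # toy sizes; NI as in the text
W = rng.normal(size=(rows, cols, dim))
grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))
data = rng.normal(size=(500, dim))
for t in range(NI):
    frac = 1.0 - t / NI                    # both schedules decay over the run
    W = som_iteration(W, grid, data[rng.integers(len(data))],
                      lr=0.5 * frac, sigma=max(0.5, 4.0 * frac))
```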

Now, one may ask, what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
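A minimal sketch of applying SVD to the data matrix before network training, as described above: keep the leading singular directions and feed the reduced scores to the map instead of the raw variables. Matrix sizes and the number of retained components are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 20))            # toy data: objects x variables
Xc = X - X.mean(axis=0)                   # column-centre (PCA convention)

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                     # number of components to keep
scores = U[:, :k] * s[:k]                 # reduced data for SOM/CPNN training

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"{k} components retain {explained:.1%} of the variance")
```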

The Kohonen Self-Organizing Maps can be used in a similar manner. Suppose x_k, k = 1, ..., N, is the set of input (characteristic) vectors, and w_ij, i = 1, ..., I, j = 1, ..., J, are the weight vectors of the trained network, one for each (i, j) cell of the map; N is the number of objects in the training set, and I and J are the dimensionalities of the map. Now, we can compare each x_k with the w_ij of the particular cell to which the object was allocated. This procedure enables us to detect the maximal (e_max) and minimal (e_min) errors of fitting. Hence, if the error calculated in the way just mentioned lies outside the range between e_min and e_max, the object probably does not belong to the training population. [Pg.223]
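A sketch of the membership test just described: compute the fit error of each training object against the weights of its own (winning) cell, record e_min and e_max, and flag new objects whose error falls outside that range. The map and data below are toy values.

```python
import numpy as np

rng = np.random.default_rng(9)
W = rng.normal(size=(6, 6, 4))                    # toy trained map
train = rng.normal(size=(50, 4))                  # toy training objects

def fit_error(x, W):
    d = np.sqrt(np.sum((W - x) ** 2, axis=2))     # distance to every cell
    return d.min()                                # error against the object's own cell

errors = np.array([fit_error(x, W) for x in train])
e_min, e_max = errors.min(), errors.max()         # range observed on the training set

x_new = rng.normal(size=4) * 5.0                  # deliberately atypical new object
e = fit_error(x_new, W)
print("inside training range" if e_min <= e <= e_max else
      "probably outside the training population")
```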

