
Winning node

Select the winning node, which is the node whose weights vector most strongly resembles the sample pattern. [Pg.60]

Update the weights vector at the winning node to make it slightly more like the sample pattern. [Pg.60]

Update the weights vectors of nodes in the neighborhood of the winning node. [Pg.60]
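A minimal sketch of the first two steps in Python (not from the source; the lattice shape, learning rate, and NumPy representation are illustrative assumptions):

```python
import numpy as np

def find_winning_node(weights, sample):
    """Return the lattice index of the node whose weights vector most
    strongly resembles the sample pattern (smallest squared distance)."""
    d2 = np.sum((weights - sample) ** 2, axis=-1)
    return np.unravel_index(np.argmin(d2), d2.shape)

rng = np.random.default_rng(0)
weights = rng.random((10, 10, 3))   # 10 x 10 lattice, 3-element weight vectors
sample = rng.random(3)              # one sample pattern

winner = find_winning_node(weights, sample)
eta = 0.1                           # learning rate (assumed value)
# Make the winner's weights vector slightly more like the sample pattern
weights[winner] += eta * (sample - weights[winner])
```

The third step, updating the neighbors of the winning node, is taken up in the sketches that follow.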

Since the node weights are initially seeded with random values, at the start of training no node is likely to be much like the input pattern. Although the match between pattern and weights vectors will be poor at this stage, determination of the winning node is simply a competition among nodes and the absolute quality of the match is unimportant. [Pg.64]

The next step is to adjust the vector at the winning node so that its weights become more like those of the sample pattern. [Pg.64]

As the adjustments at the winning node move each of its weights slightly toward the corresponding element of the sample vector, the node learns a little about this sample pattern and, thus, is more likely to be the winning node again when a similar pattern is presented. [Pg.64]

The neighborhood of the winning node is a circular region centered on it (Figure 3.10). At the start of training, the neighborhood, whose size is chosen by the user, is at its largest; it shrinks as training progresses. [Pg.65]

The neighborhood around a winning node (shown shaded). [Pg.65]

The neighborhood is usually deemed to include the winning node. [Pg.65]

The size of the neighborhood around the winning node decreases as training progresses. [Pg.66]
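One common way to implement this shrinkage, sketched below, is an exponential decay of the radius with the cycle number; the exact schedule is an assumption, since the excerpt says only that the neighborhood decreases:

```python
import numpy as np

def neighborhood_radius(t, r0=5.0, tau=1000.0):
    """Radius of the circular neighborhood at training cycle t: starts
    at r0 and shrinks exponentially as training progresses."""
    return r0 * np.exp(-t / tau)
```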

In equation (3.4), f(d) is a function that describes how the size of the adjustment to the weights depends on the distance that a particular node in the neighborhood is from the winning node. This function might be the reciprocal of the distance between the winning node and a neighborhood node, measured across the lattice, or any other appropriate function that ensures that nearby nodes are treated differently from those that are far away. We shall consider several possible forms for this function in section 3.7.3. [Pg.67]
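As a sketch of the reciprocal form mentioned above (the function name and the handling of the winning node itself, at distance zero, are assumptions):

```python
def f_reciprocal(d):
    """f(d): scale of the weight adjustment for a node at lattice
    distance d from the winning node. Full strength at the winner
    itself, falling off as the reciprocal of the distance."""
    return 1.0 if d == 0 else 1.0 / d
```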

The result of training a one-dimensional SOM using a set of random angles. The final neighborhood includes only the winning node and one neighbor. [Pg.69]

This question of reproducibility is an important one; we can understand why the lack of reproducibility is not the problem it might seem to be by considering how the SOM is used as a diagnostic tool. A sample pattern that the map has not seen before is fed in and the winning node (the node to which that sample "points") is determined. By comparing the properties of the unknown sample with patterns in the database that point to the same winning node or to one of the nodes nearby, we can learn the type of samples in the database that the unknown sample most strongly resembles. [Pg.69]

Let us do this using the map shown in Figure 3.14. We feed in some value that was not contained in the original sample dataset, say 79.1°, and find the winning node, in other words, the node whose weight is closest to the value 79.1°. [Pg.71]
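For a one-dimensional map of angles, this lookup is just a nearest-value search. A sketch in Python; the trained weights below are hypothetical placeholders, not the values shown in Figure 3.14:

```python
import numpy as np

# Hypothetical trained weights (in degrees) of a 1-D SOM; these are
# placeholders, not the actual values from Figure 3.14
trained_weights = np.array([12.0, 33.5, 58.2, 80.4, 101.7, 145.9])

sample = 79.1
winner = int(np.argmin(np.abs(trained_weights - sample)))
print(winner, trained_weights[winner])  # -> 3 80.4 (closest weight to 79.1)
```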

The size of the adjustment made to the node weights is determined by a neighborhood function. Using the simplest plausible function, the amount of adjustment could be chosen to fall off linearly with distance from the node (Figure 3.17). Beyond some cut-off distance from the winning node, no changes are made to the weights if this function is used. [Pg.73]

A linear neighborhood function; x denotes the number of nodes to the right or left of the winning node. [Pg.74]
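A sketch of such a linear fall-off (the parameter names and the cut-off value are illustrative assumptions):

```python
def f_linear(d, cutoff=4):
    """Linear neighborhood function: the adjustment falls off linearly
    with lattice distance d from the winner and is zero at and beyond
    the cut-off distance."""
    return max(0.0, 1.0 - d / cutoff)
```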

In the Mexican Hat function (Figure 3.20), the weights of the winning node and its close neighbors are adjusted to increase their resemblance to the sample pattern (an excitatory effect), but the weights of nodes that are farther from the winner are adjusted to make them less like the sample pattern (an inhibitory effect). [Pg.75]
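The excerpt does not give a formula; one common function with this shape is the Ricker wavelet, sketched here as an assumption:

```python
import numpy as np

def f_mexican_hat(d, sigma=2.0):
    """Ricker-wavelet form of the Mexican Hat function: positive
    (excitatory) for nodes near the winner, negative (inhibitory)
    for nodes farther away."""
    u = (d / sigma) ** 2
    return (1.0 - u) * np.exp(-u / 2.0)
```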

It is a common feature of AI methods that there is flexibility in the way we can run the algorithm. In the SOM, we can choose the shape and dimensionality of the lattice, the number of nodes, the initial learning rate and how quickly the rate diminishes with cycle number, the size of the initial neighborhood and how it, too, varies with the number of cycles, the type of function used to determine how the updating of weights varies with distance from the winning node, and the stopping criterion. [Pg.80]
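These choices might be gathered into a single configuration object; in this sketch every parameter name and default value is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class SOMConfig:
    lattice_shape: tuple = (10, 10)     # shape and dimensionality of the lattice
    initial_learning_rate: float = 0.5
    learning_rate_decay: float = 0.999  # per-cycle multiplicative decay
    initial_radius: float = 5.0         # size of the initial neighborhood
    radius_decay: float = 0.999
    neighborhood_fn: str = "linear"     # e.g. "linear" or "mexican_hat"
    max_cycles: int = 10_000            # stopping criterion
```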

Once a dimensionality for the map and the type of local measure to be used have been chosen, training can start. A sample pattern is drawn at random from the database and the sample pattern and the weights vector at each unit are compared. As in a conventional SOM, the winning node or BMU is the unit whose weights vector is most similar to the sample pattern, as measured by the squared Euclidean distance between the two. [Pg.102]
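A sketch of this BMU selection over a flat list of units (the representation as a 2-D array of weight vectors is an assumption):

```python
import numpy as np

def best_matching_unit(unit_weights, sample):
    """Index of the unit whose weights vector is most similar to the
    sample pattern, measured by squared Euclidean distance."""
    d2 = np.sum((np.asarray(unit_weights) - np.asarray(sample)) ** 2, axis=1)
    return int(np.argmin(d2))
```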

Once the weights at the winning node and its neighbors have been updated, another pattern is selected at random from the database and the BMU once again determined. The process continues for a large, predetermined number of cycles. [Pg.103]
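Putting the pieces together, a minimal training loop might look like the following sketch (a one-dimensional lattice with a linear neighborhood function; the decay schedules and all numerical values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.random((20, 3))           # 1-D lattice of 20 units, 3-element vectors
database = rng.random((100, 3))         # sample patterns
eta, radius = 0.5, 5.0                  # initial learning rate and radius

for cycle in range(10_000):             # large, predetermined number of cycles
    sample = database[rng.integers(len(database))]  # drawn at random
    bmu = int(np.argmin(np.sum((weights - sample) ** 2, axis=1)))
    for i in range(len(weights)):
        d = abs(i - bmu)                # lattice distance from the winner
        if d <= radius:                 # update the BMU and its neighbors
            weights[i] += eta * max(0.0, 1.0 - d / radius) * (sample - weights[i])
    eta *= 0.9995                       # learning rate diminishes with cycle number
    radius = max(1.0, radius * 0.9995)  # neighborhood shrinks as training progresses
```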

