
Boltzmann Machines

Chapter 10 covers another important field with great overlap with CA: neural networks. Beginning with a short historical survey of what is really an independent field, chapter 10 discusses the Hopfield model, stochastic nets, Boltzmann machines, and multi-layered perceptrons. [Pg.19]

The Boltzmann Machine generalizes the Hopfield model in two ways: (1) like the simple stochastic variant discussed above, it too substitutes a stochastic update rule for Hopfield's deterministic dynamics, and (2) it separates the neurons in the net into sets of visible and hidden units. Figure 10.8 shows a Boltzmann Machine in which the visible neurons have been further subdivided into input and output sets. [Pg.532]
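The excerpt does not reproduce the stochastic rule itself. A standard choice, assuming two-state neurons S_i ∈ {−1, +1} and a temperature parameter T (an assumption about which variant the text means), updates each unit probabilistically according to its local field:

\[
P(S_i = +1) \;=\; \frac{1}{1 + e^{-2 h_i / T}},
\qquad
h_i = \sum_{j} w_{ij} S_j ,
\]

so that as T → 0 the rule reduces to Hopfield's deterministic threshold dynamics, while at T > 0 the net samples states with Boltzmann probabilities.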

Fig. 10.8 A division of a set of neurons into input, output and hidden units in a Boltzmann Machine. (For clarity, not all of the links between neurons are shown; just as for Hopfield nets, in general, w_ij ≠ 0 for all i ≠ j.)
We mentioned above that a typical problem for a Boltzmann Machine is to obtain a set of weights such that the states of the visible neurons take on some desired probability distribution. For example, the task may be to teach the net to learn that the first component of an N-component input vector has value +1 40% of the time. To accomplish this, a Boltzmann Machine uses the familiar gradient-descent technique, but not on the energy of the net; instead, it performs gradient descent on (i.e., minimizes) the relative entropy of the system. [Pg.534]

On the other hand, we also have a set of desired probabilities P that we want the Boltzmann Machine to learn. From elementary information theory, we know that the relative entropy... [Pg.535]
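The excerpt breaks off here. The relative entropy it refers to is presumably the Kullback-Leibler divergence between the desired (clamped) distribution P⁺ and the net's free-running distribution P⁻ over the states V_α of the visible units; in the conventional notation of Ackley, Hinton and Sejnowski (an assumption about the truncated text):

\[
G \;=\; \sum_{\alpha} P^{+}(V_\alpha)\,
\ln \frac{P^{+}(V_\alpha)}{P^{-}(V_\alpha)} .
\]

G ≥ 0, with equality exactly when the two distributions coincide, so gradient descent on G drives the free-running statistics of the visible units toward the desired ones.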

Equation 10.49 embodies Hinton et al.'s Boltzmann Machine learning scheme. Notice that it consists of two different parts. The first part, ⟨S_i S_j⟩_clamped, is essentially the same as the Hebb rule used in Hopfield's net (equation 10.19), and reinforces the connections that lead from input to output. The second part, ⟨S_i S_j⟩_free, can be likened to a Hebbian unlearning, whereby poor associations are effectively unlearned. [Pg.535]
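Equation 10.49 itself does not survive in this excerpt. In the standard formulation of Ackley, Hinton and Sejnowski, the weight update obtained from gradient descent on G takes the form (η a learning rate, T the temperature; presumably what the text's equation 10.49 states):

\[
\Delta w_{ij} \;=\; \frac{\eta}{T}
\left( \langle S_i S_j \rangle_{\text{clamped}}
     - \langle S_i S_j \rangle_{\text{free}} \right),
\]

where ⟨S_i S_j⟩_clamped is the pairwise correlation averaged over equilibrium states with the visible units clamped to the training patterns, and ⟨S_i S_j⟩_free is the same correlation in the freely running net.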

Pseudo-Code Implementation The Boltzmann Machine Learning Algorithm proceeds in two phases: (1) a positive, or learning, phase and (2) a negative, or unlearning, phase. It is summarized below in pseudo-code. It is assumed that the visible neurons are further subdivided into input and output sets as shown schematically in figure 10.8. [Pg.535]
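The book's pseudo-code itself is not reproduced in this excerpt. The sketch below is a minimal Python rendering of the generic two-phase algorithm described above, not the text's exact pseudo-code; the function names, the ±1 unit convention, the sigmoid sampling rule, and the fixed Gibbs-sampling schedule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def gibbs_step(s, w, free_idx, T=1.0):
    """One Gibbs sweep: stochastically update each unclamped unit.

    Units take values in {-1, +1}; with energy E = -1/2 sum_ij w_ij s_i s_j,
    P(s_i = +1) = 1 / (1 + exp(-2 h_i / T)), where h_i = sum_j w_ij s_j.
    """
    for i in free_idx:
        h = w[i] @ s
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * h / T))
        s[i] = 1.0 if rng.random() < p_plus else -1.0
    return s

def boltzmann_learn(data, n_hidden, epochs=100, sweeps=20, eta=0.05, T=1.0):
    """Two-phase Boltzmann Machine learning (positive/clamped vs. negative/free).

    `data` has shape (n_patterns, n_visible) with entries in {-1, +1}.
    Returns the learned symmetric weight matrix over visible + hidden units.
    """
    n_visible = data.shape[1]
    n = n_visible + n_hidden
    w = np.zeros((n, n))
    hidden_idx = range(n_visible, n)
    all_idx = range(n)

    for _ in range(epochs):
        clamped = np.zeros((n, n))   # accumulates <S_i S_j> with visibles clamped
        free = np.zeros((n, n))      # accumulates <S_i S_j> with all units free

        # Positive (learning) phase: clamp the visible units to each pattern,
        # let the hidden units settle, then record pairwise correlations.
        for v in data:
            s = np.concatenate([v, rng.choice([-1, 1], n_hidden)]).astype(float)
            for _ in range(sweeps):
                gibbs_step(s, w, hidden_idx, T)
            clamped += np.outer(s, s)
        clamped /= len(data)

        # Negative (unlearning) phase: let the entire net run freely.
        s = rng.choice([-1, 1], n).astype(float)
        for _ in range(sweeps * len(data)):
            gibbs_step(s, w, all_idx, T)
            free += np.outer(s, s)
        free /= sweeps * len(data)

        # Gradient step on the relative entropy G (the equation 10.49 form).
        w += eta * (clamped - free)
        np.fill_diagonal(w, 0.0)     # no self-connections; w stays symmetric

    return w

In practice one would anneal T toward a low final value and use far more equilibration sweeps before collecting statistics; the fixed schedule here only keeps the sketch short.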

