Big Chemical Encyclopedia


Hopfield

Beratan D N and Hopfield J J 1984 Calculation of electron tunneling matrix elements in rigid systems: mixed-valence dithiaspirocyclobutane molecules J. Am. Chem. Soc. 106 1584-94 [Pg.2995]

Recurrent networks are based on the work of Hopfield and contain feedback paths. Figure 10.23 shows a single-layer, fully-connected recurrent network with a delay (z⁻¹) in the feedback path. [Pg.350]

Feedback models can be constructed and trained. In a constructed model, the weight matrix is created by adding the outer product of every input pattern vector with itself or with an associated input. After construction, a partial or inaccurate input pattern can be presented to the network and, after a time, the network converges to one of the original input patterns. Hopfield and BAM are two well-known constructed feedback models. [Pg.4]
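As a minimal sketch of that construction (the function names and the toy patterns below are illustrative, not taken from the source), a Hopfield-style weight matrix can be built by summing outer products of bipolar patterns and then used to clean up a corrupted input:

```python
import numpy as np

def build_weights(patterns):
    """Construct a Hopfield-style weight matrix by summing the outer
    products of the stored (bipolar, +/-1) pattern vectors."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)        # no self-connections
    return w / n

def recall(w, state, max_sweeps=20):
    """Asynchronously update neurons until the state stops changing."""
    state = state.copy()
    for _ in range(max_sweeps):
        previous = state.copy()
        for i in np.random.permutation(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
        if np.array_equal(state, previous):
            break                   # converged to a fixed point
    return state

# Two toy 8-neuron patterns; corrupt the first and let the net clean it up.
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1,  1, -1, -1, -1, -1]])
w = build_weights(patterns)
noisy = patterns[0].copy()
noisy[[0, 3]] *= -1                 # flip two bits
print(recall(w, noisy))             # expected: the first stored pattern
```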

Chapter 10 covers another important field with a great overlap with CA: neural networks. Beginning with a short historical survey of what is really an independent field, chapter 10 discusses the Hopfield model, stochastic nets, Boltzmann machines, and multi-layered perceptrons. [Pg.19]

While, as mentioned at the close of the last section, it took more than 15 years following Minsky and Papert's criticism of simple perceptrons for a bona-fide multilayered variant to finally emerge (see Multi-layered Perceptrons below), the man most responsible for bringing respectability back to neural net research was the physicist John J. Hopfield, with the publication of his landmark 1982 paper entitled "Neural networks and physical systems with emergent collective computational abilities" [hopf82]. To set the stage for our discussion of Hopfield nets, we first pause to introduce the notion of associative memory. [Pg.518]

Hopfield's neural net model addressed the basic associative memory problem [hopf82]: given some set of patterns V_i, construct a neural net such that, when it is presented with an arbitrary pattern V, not necessarily an element of the given set, it responds with the pattern from the given set that most closely "resembles" the presented pattern. [Pg.518]

Figure 10.4 shows a schematic representation of how Hopfield's net effectively partitions the phase space into disjoint basins of attraction, the attractor states of which represent some desired set of stored patterns. [Pg.518]

Fig. 10.4 Basins of attraction in the partitioned phase space of a Hopfield neural net.
Hopfield's model consists of a fully-connected, symmetrically-weighted (w_ij = w_ji) McCulloch-Pitts neural net where the value of the neuron is updated according to ... [Pg.520]
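The update rule itself is elided in the excerpt. In the standard formulation (a hedged reconstruction, not a quotation of the book's equation 10.7), each neuron is set to the sign of its weighted input:

\[
\sigma_i \;\rightarrow\; \operatorname{sgn}\Big(\sum_{j} w_{ij}\,\sigma_j\Big),
\qquad
\operatorname{sgn}(x) = \begin{cases} +1, & x \ge 0 \\ -1, & x < 0 \end{cases}
\]

with w_ii = 0, and the updates applied either one neuron at a time (asynchronously) or to all neurons at once.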

As mentioned above, Hopfield's original approach to this problem was to introduce an energy function reminiscent of a spin-glass Hamiltonian ... [Pg.521]
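The Hamiltonian is truncated in the excerpt. The standard Hopfield energy function (presumably what the book labels equation 10.9; the notation here is the usual one, not necessarily the book's) is

\[
E = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, S_i S_j ,
\]

which each application of the threshold update rule can only decrease or leave unchanged, so the dynamics relaxes into local minima of E, just as a spin glass relaxes into low-energy spin configurations.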

Finally, if there are a large number of stored patterns, there may be enough mutual interference to give rise to local minima, or metastable states - sometimes also called spin-glass states to emphasize the similarity between the formalisms of Hopfield nets and spin glasses - that are not correlated with any subset of the set of stored patterns [amit85b]. [Pg.524]

In order to make a crude estimate of a Hopfield net's pattern storage capacity, consider first what it means for a given pattern to be stable. From equation 10.7, we see that a pattern is stable if... [Pg.525]
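For reference, the stability condition being invoked is, in standard notation (not a quotation of equation 10.7), that every neuron of a stored pattern ξ^μ reproduces itself under one update:

\[
\operatorname{sgn}\Big(\sum_{j} w_{ij}\,\xi^{\mu}_{j}\Big) = \xi^{\mu}_{i}
\quad \text{for all } i .
\]

Writing the Hebbian weights as w_ij = (1/N) Σ_μ ξ^μ_i ξ^μ_j and splitting the sum into a signal term from pattern μ and a crosstalk term from the other stored patterns is what yields the crude capacity estimate that follows.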

Figure 10.6 shows an example of how a Hopfield net that has been trained to store the ten digits 0, 1, ..., 9 (encoded on a 5 x 9 grid consisting of the 45 neurons making up the net) can be used to retrieve the digit 6 from noisy input. [Pg.527]

Fig. 10.6 A sample evolution of a noisy digit-6 input by a Hopfield net trained to remember the 10 basic digits.
In the previous section we discussed how a Hopfield net can sometimes converge to a local minimum that does not correspond to any of the desired stored patterns. The problem is that, while the dynamics embodied by equation 10.7 steadily decreases the net's energy (equation 10.9), because of the general bumpiness of the energy landscape (see figure 10.5), whether or not such a steady decrease eventually lands the system at one of the desired minima depends entirely on where the system begins its descent, i.e. on its initial state. There is certainly no general assurance that the system will evolve towards the desired minimum. [Pg.528]

The remedy is to introduce a temperature T for the Hopfield net and to replace the deterministic threshold dynamics with a stochastic rule [hinton83] ... [Pg.529]

The form of the stochastic transfer function p(x) is shown in figure 10.7. Notice that the steepness of the function near x = 0 depends entirely on T. Notice also that this form approaches that of a simple threshold function as T → 0, so that the deterministic Hopfield net may be recovered by taking the zero-temperature limit of the stochastic system. While there are a variety of different forms for p(x) satisfying this desired limiting property, any of which could also have been chosen, this sigmoid function is convenient because it allows us to analyze the system with tools borrowed from statistical mechanics. [Pg.529]
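The transfer function itself is not reproduced in the excerpt; a common sigmoid with exactly the stated properties (steepness set by T, hard threshold recovered as T → 0) is

\[
p(x) = \frac{1}{1 + e^{-2x/T}} ,
\]

read as the probability that a neuron with net input x adopts the value +1.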

Applying exactly the same reasoning to our stochastic net, but using equation 10.9 for the Hopfield energy in place of the Ising Hamiltonian, we obtain the analogous expression... [Pg.531]

The Boltzmann Machine generalizes the Hopfield model in two ways: (1) like the simple stochastic variant discussed above, it too substitutes a stochastic update rule for Hopfield's deterministic dynamics, and (2) it separates the neurons in the net into sets of visible and hidden units. Figure 10.8 shows a Boltzmann Machine in which the visible neurons have been further subdivided into input and output sets. [Pg.532]

Fig. 10.8 A division of a set of neurons into input, output and hidden units in a Boltzmann Machine. (For clarity, not all of the links between neurons are shown; just as for Hopfield nets, in general, w_ij ≠ 0 for all i ≠ j.)
Equation 10.49 embodies Hinton et al.'s Boltzmann Machine learning scheme. Notice that it consists of two different parts. The first part, ⟨S_i S_j⟩_clamped, is essentially the same as the Hebb rule used in Hopfield's net (equation 10.19), and reinforces the connections that lead from input to output. The second part, ⟨S_i S_j⟩_free, can be likened to a Hebbian unlearning, whereby poor associations are effectively unlearned. [Pg.535]
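Equation 10.49 is not reproduced in the excerpt. The Boltzmann Machine learning rule being described is conventionally written as

\[
\Delta w_{ij} = \eta \left( \langle S_i S_j \rangle_{\text{clamped}} - \langle S_i S_j \rangle_{\text{free}} \right),
\]

where η is a learning rate, the first correlation is measured with the visible units clamped to the training data, and the second with the network running freely; this is the standard notation and may differ in detail from the book's.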

The basic backpropagation algorithm described above is, in practice, often very slow to converge. Moreover, just as Hopfield nets can sometimes get stuck in undesired spurious attractor states (i.e. local minima; see section 10.6.5), so too can multilayer perceptrons get trapped in some undesired local minimum state. This is an unfortunate artifact that plagues all energy (or cost-function) minimization schemes. [Pg.544]

E_f, as long as it is differentiable and is minimised by O_f = S_f. One interesting form, which has a natural interpretation in terms of learning the probabilities of a set of hypotheses represented by the output neurons, has recently been suggested by Hopfield [hopf87] and Baum and Wilczek [baum88b] ... [Pg.546]
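The error function itself is truncated in the excerpt. The relative-entropy form usually credited to Hopfield and to Baum and Wilczek is, in the excerpt's notation (target S_f, actual output O_f),

\[
E = \sum_{f} \left[ S_f \ln\frac{S_f}{O_f} + (1 - S_f)\ln\frac{1 - S_f}{1 - O_f} \right],
\]

which is non-negative, differentiable, and vanishes exactly when O_f = S_f, and reads naturally as the information gained when the hypothesis probabilities O_f are corrected to the true values S_f.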

In our discussion of Hopfield nets in section 10.6, we found that the maximal number of patterns that can be stored before their stability is impaired is some linear function of the size of the net, p_max = αN, where 0 < α < 1 and N is the number of neurons in the net (see sections 10.6.6 and 10.7). A similar question can of course be asked of perceptrons: how many input-output fact pairs can a perceptron of given size learn? [Pg.550]
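As a rough worked example of that scaling (the numerical value of α is the commonly quoted Hopfield-net capacity, not a figure taken from this excerpt): with α ≈ 0.14, a fully-connected net of N = 100 neurons can reliably store only on the order of p_max ≈ 0.14 × 100 ≈ 14 random patterns before recall begins to fail.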


See other pages where Hopfield is mentioned: [Pg.275]    [Pg.509]    [Pg.509]    [Pg.512]    [Pg.518]    [Pg.519]    [Pg.519]    [Pg.519]    [Pg.520]    [Pg.521]    [Pg.522]    [Pg.523]    [Pg.524]    [Pg.524]    [Pg.525]    [Pg.525]    [Pg.526]    [Pg.526]    [Pg.526]    [Pg.527]    [Pg.528]    [Pg.528]    [Pg.528]    [Pg.532]    [Pg.532]    [Pg.545]   
See also in source #XX -- [ Pg.115 ]

See also in source #XX -- [ Pg.105 , Pg.106 ]

See also in source #XX -- [ Pg.188 ]







Agmon-Hopfield reaction coordinate

Associative Memories Hopfield Net

Hopfield expression

Hopfield networks

Hopfield-Ninio scheme

Lyman-Birge-Hopfield bands

Neural networks Hopfield

Neural networks Hopfield model

The Hopfield Model
