
Hopfield network

Applications of Hopfield networks are limited. One interesting infrared spectral interpretation study used a Hopfield network in a feedback loop to a backpropagation ANN, which allowed the backpropagation network to train more quickly. [Pg.97]


T. Kohonen, Self-Organization and Associative Memory. Springer-Verlag, Heidelberg, 1989.
W.J. Melssen, J.R.M. Smits, L.M.C. Buydens and G. Kateman, Using artificial neural networks for solving chemical problems. II. Kohonen self-organizing feature maps and Hopfield networks. Chemom. Intell. Lab. Syst., 23 (1994) 267-291. [Pg.698]

Some historically important artificial neural networks are Hopfield Networks, Perceptron Networks and Adaline Networks, while the most well-known are Backpropagation Artificial Neural Networks (BP-ANN), Kohonen Networks (K-ANN, or Self-Organizing Maps, SOM), Radial Basis Function Networks (RBFN), Probabilistic Neural Networks (PNN), Generalized Regression Neural Networks (GRNN), Learning Vector Quantization Networks (LVQ), and Adaptive Bidirectional Associative Memory (ABAM). [Pg.59]

A brief but concise description of the Hopfield Network may be found in Crick, The Astonishing Hypothesis, ibid., pp. 182-185. A more technical and thorough exposition is found in Churchland & Sejnowski, The Computational Brain, 1993, MIT Press, p. 82ff. [Pg.163]

H. Kono, J. Doi. A new method for side-chain conformation prediction using a Hopfield network and reproduced rotamers. J. Comp. Chem. 1996, 17, 1667-1683. [Pg.242]

Among the many network topologies and learning algorithms, the Hopfield network and the multilayer perceptron are preferred by several researchers for scheduling problems. Therefore, in the following sections, these two networks will be briefly discussed, and the related works on scheduling problems will be reviewed. [Pg.1778]

Zhou et al. (1991) modified this approach by using a linear cost function and concluded that this modification not only produced better results but also reduced network complexity. Other works related to using the Hopfield network for the scheduling problem include Zhang et al. (1991) and Arizono et al. (1992). [Pg.1778]

An Analysis of the Local Optima Storage Capacity of Hopfield Network Based Fitness Function Models... [Pg.248]

The paper is organised as follows. Sections 2, 3 and 4 introduce EDAs, Walsh functions and Hopfield networks respectively. Sections 5 and 6 describe a Hopfield EDA (HEDA) and present two learning rules: one based on a standard Hebbian update and one designed to improve network capacity. Section 7 describes a set of experiments and an analysis of network size, capacity and the time taken during learning. Section 8 analyses the weights of a HEDA and Sect. 9 offers some conclusions and discusses future work. [Pg.251]

Hopfield networks [18] are able to store patterns as point attractors in n-dimensional binary space and recall them in response to partial or degraded versions of stored patterns. For this reason, they are known as content-addressable memories, where each memory is a point attractor for nearby, similar patterns. Traditionally, known patterns are loaded directly into the network (see the learning rule 10 below), but in this paper we investigate the use of a Hopfield network to discover point attractors by sampling from a fitness function. A Hopfield network is a neural network consisting of n simple connected processing units. The values the units take are represented by a vector, u ... [Pg.255]
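
As a concrete illustration of this recall behaviour, the following minimal sketch (not code from the paper; pattern sizes and names are arbitrary) stores two random ±1 patterns with the Hebbian rule and recovers one of them from a corrupted cue by iterating the unit update until the state settles on a point attractor:

import numpy as np

def hebbian_weights(patterns):
    # Build the weight matrix from a list of +/-1 patterns using the Hebbian rule.
    n = patterns[0].size
    W = np.zeros((n, n))
    for xi in patterns:
        W += np.outer(xi, xi)
    W /= n
    np.fill_diagonal(W, 0.0)               # no self-connections
    return W

def recall(W, u, max_iters=100):
    # Asynchronously update units until the state stops changing (a point attractor).
    u = u.copy()
    for _ in range(max_iters):
        changed = False
        for i in np.random.permutation(u.size):
            new = 1 if W[i] @ u >= 0 else -1
            if new != u[i]:
                u[i], changed = new, True
        if not changed:
            break
    return u

# Store two random patterns, then recall one from a degraded cue.
rng = np.random.default_rng(0)
patterns = [rng.choice([-1, 1], size=32) for _ in range(2)]
W = hebbian_weights(patterns)
cue = patterns[0].copy()
cue[:5] *= -1                              # flip a few bits to degrade the pattern
print(np.array_equal(recall(W, cue), patterns[0]))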

Storkey [22] introduced a new learning rule for Hopfield networks that increased the capacity of a network compared to using the Hebbian rule. The new weight update rule is ... [Pg.258]
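
The excerpt omits the equation itself; as a hedged illustration, the sketch below implements the Storkey rule in its commonly cited incremental form, in which the local field h_ij sums the weighted inputs to unit i from all units other than i and j. This form is an assumption and is not necessarily the exact variant used in the paper.

import numpy as np

def storkey_update(W, xi):
    # Incorporate one +/-1 pattern xi into the weight matrix W using the
    # commonly cited Storkey rule (assumed form, see note above).
    n = xi.size
    s = W @ xi                                     # s_i = sum_k w_ik * xi_k
    # Local field h[i, j] = sum over k != i, j of w_ik * xi_k
    h = s[:, None] - np.diag(W)[:, None] * xi[:, None] - W * xi[None, :]
    dW = (np.outer(xi, xi) - xi[:, None] * h.T - h * xi[None, :]) / n
    W = W + dW
    np.fill_diagonal(W, 0.0)                       # keep zero self-connections, as in the Hebbian case
    return W

# Patterns are presented one at a time, each update building on the previous weights.
rng = np.random.default_rng(0)
W = np.zeros((16, 16))
for _ in range(3):
    W = storkey_update(W, rng.choice([-1, 1], size=16))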

This set of experiments compares the capacity of a normally trained Hopfield network with the search capacity of a HEDA. We will compare two learning rules (Hebbian and Storkey). The experiments are repeated many times, all using randomly generated target patterns where each element has an equal probability of being +1 or -1. [Pg.260]

Experiment 1 compares Hebbian trained Hopfield networks with their equivalent HEDA models. The aim is to discover whether or not the HEDA model can achieve the capacity of the Hopfield network. Hopfield networks were trained on patterns using standard Hebbian learning, with one pattern at a time being added until the network's capacity was exceeded. At this point, the learned patterns were set to be the targets for the HEDA search using Eqs. 21 and 22 and the network's weights were reset. [Pg.260]
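
A rough, self-contained sketch of this capacity-filling loop is given below (function names and the stability test are illustrative assumptions; the paper's protocol then hands the learned patterns to the HEDA via Eqs. 21 and 22, which are not reproduced here):

import numpy as np

def is_stable(W, xi):
    # A stored pattern is stable if one synchronous update leaves it unchanged.
    return np.array_equal(np.where(W @ xi >= 0, 1, -1), xi)

def fill_to_capacity(n, rng):
    # Add random +/-1 patterns under Hebbian learning until one of the
    # stored patterns is no longer a fixed point of the network.
    W = np.zeros((n, n))
    stored = []
    while True:
        xi = rng.choice([-1, 1], size=n)
        W_trial = W + np.outer(xi, xi) / n         # Hebbian increment
        np.fill_diagonal(W_trial, 0.0)
        if not all(is_stable(W_trial, p) for p in stored + [xi]):
            return W, stored                       # capacity exceeded; keep the last good weights
        W, stored = W_trial, stored + [xi]

W, stored = fill_to_capacity(64, np.random.default_rng(1))
print(len(stored), "patterns stored before capacity was exceeded")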

Results. Regardless of the capacity or size of the Hopfield network, the HEDA search was always able to discover every pattern learned during the capacity filling stage of the test. From this, we conclude that the capacity of a HEDA for... [Pg.260]

Fig. 6. The mean and inter-quartile range of the number of samples needed to find all local optima in a Hopfield network filled to capacity using the Storkey learning rule, plotted against the number of patterns to find.
Fig. 7. Histograms showing the frequency of the highest linkage order across 10,000 trials, organised by Hopfield network capacity. Networks are trained with the standard Hebbian rule. Networks with capacity greater than 5 require a number of units greater than that for which it is possible to run multiple Walsh decompositions. [Pg.265]

Hopfield networks have previously been used as optimisation tools, but the weights have always been designed by hand. The contributions of this paper are twofold. It presents a method for automatically discovering the weight values for a Hopfield network from samples of a fitness function, and an analysis of the capacity of such networks for storing local optima as attractor points. An analysis of linkage order and network capacity has shown that such second order networks can learn all of the attractor states of some higher order functions, even when they cannot reproduce the function output reliably. [Pg.270]

The three learning rules above are used in a Hopfield network, with C1 typically equal to 1. [Pg.82]

This rule can be used in a Hopfield network, in the middle layer of the BSB network, and in the outer layer of a counterpropagation network. In the latter case, it is equivalent to the so-called Grossberg outstar learning rule. C1 is usually set to 0.1 or less and C2 is usually set to zero. [Pg.83]


