
Memory neural network

Optical Storage and Retrieval: Memory, Neural Networks, and Fractals, edited by Francis T. S. Yu and Suganda Jutamulia... [Pg.687]

S. Bicciato, M. Pandin, G. Didone, and C. D. Bello, Analysis of an Associative Memory Neural Network for Pattern Identification in Gene Expression Data, in Proceedings of the 1st Workshop on Data Mining in Bioinformatics, M. J. Zaki, H. T. T. Toivonen, and J. T. L. Wang, Eds., San Francisco, CA, USA, pp. 22–30. [Pg.591]

Optical Storage and Retrieval: Memory, Neural Networks,... [Pg.624]

Carpenter, G. 1989. Neural network models for pattern recognition and associative memory. Neural Networks, 2: 243–258. [Pg.199]

Anderson, J.A. (1972) A Simple Neural Network Generating an Interactive Memory, Mathematical Biosciences, 14, pp. 197–220. [Pg.428]

The second main category of neural networks is the feedforward type. In this type of network, the signals go in only one direction; there are no loops in the system, as shown in Fig. 3. The earliest neural network models were linear feedforward. In 1972, two simultaneous articles independently proposed the same model for an associative memory, the linear associator: J. A. Anderson [17], a neurophysiologist, and Teuvo Kohonen [18], an electrical engineer, were unaware of each other's work. Today, the most commonly used neural networks are nonlinear feedforward models. [Pg.4]
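
The linear associator mentioned above can be stated compactly: pattern pairs are stored as a sum of outer products, and recall is a single matrix–vector multiplication with no feedback loop, which is what makes it a feedforward model. The following is a minimal NumPy sketch of that idea; it is an illustration only, not code from Anderson's or Kohonen's papers, and all names and example patterns in it are invented.

```python
import numpy as np

# Linear associator: store pattern pairs (x, y) in a single weight
# matrix W = sum_k y_k x_k^T (Hebbian outer products), then recall the
# associate of a key x with one feedforward pass, y = W x.

def train(pairs):
    """Build the weight matrix from (key, value) pattern pairs."""
    return sum(np.outer(y, x) for x, y in pairs)

def recall(W, x):
    """One loop-free pass: the network's stored associate of x."""
    return W @ x

# With orthonormal keys, recall is exact; correlated keys give crosstalk.
x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
y1, y2 = np.array([1.0, -1.0]), np.array([-1.0, 1.0])
W = train([(x1, y1), (x2, y2)])
print(recall(W, x1))  # -> [ 1. -1.]
```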

In previous chapters, we have examined a variety of generalized CA models, including reversible CA, coupled-map lattices, reaction-diffusion models, random Boolean networks, structurally dynamic CA, and lattice gases. This chapter covers an important field that overlaps with CA: neural networks. Beginning with a short historical survey, chapter 10 discusses associative memory and the Hopfield model, stochastic nets, Boltzmann machines, and multi-layered perceptrons. [Pg.507]

While, as mentioned at the close of the last section, it took more than 15 years following Minsky and Papert's criticism of simple perceptrons for a bona-fide multilayered variant to finally emerge (see Multi-layered Perceptrons below), the man most responsible for bringing respectability back to neural net research was the physicist John J. Hopfield, with the publication of his landmark 1982 paper entitled Neural networks and physical systems with emergent collective computational abilities [hopf82]. To set the stage for our discussion of Hopfield nets, we first pause to introduce the notion of associative memory. [Pg.518]
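
As a concrete illustration of associative memory in the Hopfield style, the sketch below stores a bipolar pattern with Hebbian weights and retrieves it from a corrupted probe. This is a minimal toy version under standard textbook conventions, not the formulation of [hopf82]; the pattern and parameter names are invented.

```python
import numpy as np

# Hopfield-style associative memory over bipolar (+/-1) patterns.
# Weights are Hebbian outer products with a zeroed diagonal; recall runs
# asynchronous updates until the state stops changing (a fixed point).

def store(patterns):
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W / len(patterns)

def recall(W, probe, max_sweeps=20):
    s = probe.copy()
    for _ in range(max_sweeps):
        prev = s.copy()
        for i in np.random.permutation(s.size):  # asynchronous updates
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):  # converged to a stored attractor
            break
    return s

pattern = np.array([1, -1, 1, -1, 1, -1], dtype=float)
W = store([pattern])
noisy = pattern.copy()
noisy[0] = -1                 # corrupt one bit of the stored memory
print(recall(W, noisy))       # recovers the stored pattern
```

The content-addressable behavior, recovering a whole stored pattern from a partial or corrupted cue, is exactly what "associative memory" means in the passage above.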

[lit78] Little, W.A. and G.L. Shaw, Analytic study of the memory storage capacity of a neural network, Math. Biosci. 39 (1978) 281–290. [Pg.774]

T. Kohonen, Self-Organization and Associative Memory. Springer-Verlag, Heidelberg, 1989.
W.J. Melssen, J.R.M. Smits, L.M.C. Buydens and G. Kateman, Using artificial neural networks for solving chemical problems. II. Kohonen self-organizing feature maps and Hopfield networks. Chemom. Intell. Lab. Syst., 23 (1994) 267–291. [Pg.698]

Fuster, J. M. Memory in the Cerebral Cortex: An Empirical Approach to Neural Networks in the Human and Nonhuman Primate. Cambridge, MA: The MIT Press, 1994. [Pg.873]

We recall that AI tools need a memory; where is it in the neural network? There is an additional feature of the network to which we have not yet been introduced. The signal output by a neuron in one layer is multiplied by a connection weight (Figure 10) before being passed to the next neuron, and it is these connection weights that form the memory of the network. [Pg.370]
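
A small sketch makes the point concrete: everything the network has learned lives in its weight values, which scale each neuron's output on the way to the next layer. This is a generic illustration, with layer sizes, the sigmoid activation, and all names chosen for the example rather than taken from the source.

```python
import numpy as np

# The network's "memory" is its connection weights: each neuron's output
# is multiplied by a weight on its way to every neuron in the next layer.

def layer(inputs, weights, bias):
    # weighted sum of the previous layer's signals, then a sigmoid
    z = weights @ inputs + bias
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # these weight values ARE the stored knowledge
b1 = np.zeros(3)
x = np.array([0.5, -1.0])
print(layer(x, W1, b1))        # different W1 -> different "remembered" mapping
```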

A neural network model composed of formal neurons without the capacity for memory storage cannot be applied to the study of information processing in real neural networks. [Pg.13]

Now we can look at the biochemical networks developed in this work and compare them to the recurrent networks discussed above. Network A (Section 4.2.1) and Network C (Section 4.2.3) are fully connected, and information flows back and forth from each neuron to all the others. This situation is very much like the one described for recurrent neural networks, and in these cases memory, which is necessary to demonstrate computational power, is clearly incorporated in the networks. Network B (Section 4.2.2) is a feedforward network and thus appears to have no memory in this form. However, when we examine the processes taking place in the biochemical neuron more carefully, we can see that the enzymic reactions take into account the concentrations of the relevant substrates present in the system. These substrates can be fed as inputs at any time t. However, part of them also remains from reactions that took place earlier, and thus the enzymic system is at every moment influenced by the processes that took place at earlier stages. Hence, memory is always incorporated. [Pg.132]
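
The argument can be illustrated with a toy state update: if part of the "substrate" present at one time step survives into the next, the system's output at time t depends on inputs from earlier steps, which is precisely the memory property claimed above. The decay constant and all names below are illustrative and are not taken from Networks A–C.

```python
# Toy illustration of leftover-substrate memory: the state carries over
# between time steps, so later behavior reflects earlier inputs.

def step(state, inp, decay=0.5):
    # a fraction of the previous state remains (like unconsumed
    # substrate), and the new input is added on top of it
    return decay * state + inp

state = 0.0
for t, inp in enumerate([1.0, 0.0, 0.0, 2.0]):
    state = step(state, inp)
    print(f"t={t}: state={state:.3f}")  # the t=0 input still echoes at t=2
```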

A state of consciousness depends on the intact function of the complex neural networks that underlie alertness, learning, and memory. General anesthetics appear to interrupt synaptic transmission within these systems. Multiple ion channels and receptors that mediate and modulate synaptic transmission are putative targets for general anesthetics. Not all general anesthetics are alike in the way they alter consciousness. For example, ketamine induces a state of... [Pg.158]

His Web page goes on to say, "The effect these virtual machines are based upon is exceedingly simple and straightforwardly controllable: A normal neural network that has been exposed to any knowledge domain and then repeatedly subjected to mild internal disturbances tends to produce a mixture of both intact memories and unusual juxtapositions of... [Pg.69]

Schierle and Otto [63] used a two-layer perceptron with error back-propagation for quantitative analysis in ICP-AES. Also, Schierle et al. [64] used a simple neural network [the bidirectional associative memory (BAM)] for qualitative and semiquantitative analysis in ICP-AES. [Pg.272]
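
For readers unfamiliar with the BAM cited here, the sketch below shows the basic mechanism: pattern pairs are stored as summed outer products, and recall bounces a probe back and forth between the two layers until it stabilizes. This is a generic textbook-style BAM, not the specific network of Schierle et al. [64], and the example patterns are invented.

```python
import numpy as np

# Bidirectional associative memory (BAM) over bipolar (+/-1) patterns:
# W stores pairs (x, y) as summed outer products; recall alternates
# passes x -> y -> x through W and its transpose until a fixed point.

sign = lambda v: np.where(v >= 0, 1, -1)

def store(pairs):
    return sum(np.outer(y, x) for x, y in pairs)

def recall(W, x, max_steps=10):
    for _ in range(max_steps):
        y = sign(W @ x)          # forward pass to the y layer
        x_new = sign(W.T @ y)    # backward pass to the x layer
        if np.array_equal(x_new, x):
            break                # both layers stable: a stored pair
        x = x_new
    return x, y

x1 = np.array([1, -1, 1, -1]); y1 = np.array([1, 1, -1])
x2 = np.array([-1, -1, 1, 1]); y2 = np.array([-1, 1, 1])
W = store([(x1, y1), (x2, y2)])
noisy = x1.copy()
noisy[1] = 1                     # corrupt one bit of x1
print(recall(W, noisy))          # settles on the stored pair (x1, y1)
```

Because recall can start from either layer, a BAM retrieves x from y as readily as y from x, which is what "bidirectional" refers to.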

Lisman, J.E., Talamini, L.M., and Raffone, A. Recall of memory sequences by interaction of the dentate and CA3: A revised model of the phase precession. Neural Networks, in press, corrected proof. [Pg.247]

There are other applications of photorefractive materials that have been investigated, including associative optical memories that identify a clear image from a corrupted input [1], novelty filters to detect only changing features in an image [2], and neural networking in analogy with the human brain [3]. [Pg.3645]


See other pages where Memory neural network is mentioned: [Pg.5]    [Pg.779]    [Pg.15]    [Pg.262]    [Pg.131]    [Pg.95]    [Pg.120]    [Pg.178]    [Pg.67]    [Pg.79]    [Pg.101]    [Pg.97]    [Pg.149]    [Pg.541]    [Pg.207]    [Pg.325]    [Pg.420]    [Pg.423]    [Pg.120]    [Pg.217]    [Pg.356]    [Pg.97]    [Pg.3268]    [Pg.3268]    [Pg.3268]   
See also in source #XX -- [Pg.15, Pg.46]







Memory, neural network models

Neural Network Model for Memory

Neural network

Neural network associative memory

Neural networking

Neural networks Bidirectional associative memory
