
Recurrent networks

Recurrent networks are based on the work of Hopfield and contain feedback paths. Figure 10.23 shows a single-layer fully-connected recurrent network with a unit delay (z⁻¹) in the feedback path. [Pg.350]
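As a minimal sketch of this kind of architecture (not the book's own code), the following Python fragment iterates a single-layer fully-connected network whose outputs pass through a one-step delay, the z⁻¹ block, and are fed back as inputs to every neuron. The network size, weights, and hard-threshold activation are illustrative assumptions, chosen to follow the usual Hopfield conventions.

```python
import numpy as np

def sign_activation(x):
    """Hard-threshold activation, as in a classic Hopfield unit."""
    return np.where(x >= 0.0, 1.0, -1.0)

# Illustrative 4-neuron, single-layer, fully-connected network:
# symmetric weights with zero diagonal, per the usual Hopfield
# conventions; all numbers are made up for the sketch.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

y = np.array([1.0, -1.0, 1.0, -1.0])  # state at time t
for t in range(5):
    y = sign_activation(W @ y)        # delayed outputs re-enter as inputs
    print(t + 1, y)
```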

The networks are also specified by sets of coefficients called weights and by a subset of p output processors that are used to communicate the outputs of the network to the environment. These studies consider a uniform model in which the structure of the networks and the values of the weights do not change in time. The outputs of each processor are the only parameters that change with time. Siegelmann et al. [152-155] show that the computational power of these uniform recurrent networks depends on the classes of numbers utilized as weights ... [Pg.132]
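A hedged sketch of the uniform model just described (the function and variable names are mine, not Siegelmann's): the weight matrices W, M and bias b are fixed for all time, only the activation vector x changes from step to step, and a subset of p processors is read out as the network's output.

```python
import numpy as np

def sat_linear(z):
    """Saturated-linear response: 0 below 0, identity on [0, 1], 1 above 1."""
    return np.clip(z, 0.0, 1.0)

def run_uniform_net(W, M, b, inputs, p):
    """Iterate x(t+1) = sigma(W x(t) + M u(t) + b). W, M, b are fixed
    in time; only the activation vector x changes. The first p
    processors are read out as the network's outputs."""
    x = np.zeros(W.shape[0])
    outputs = []
    for u in inputs:
        x = sat_linear(W @ x + M @ u + b)
        outputs.append(x[:p].copy())
    return outputs

# Example: a 3-processor net reading a stream of binary inputs
# (sizes and weight scales are illustrative).
rng = np.random.default_rng(1)
W = rng.normal(scale=0.3, size=(3, 3))
M = rng.normal(scale=0.3, size=(3, 1))
b = np.zeros(3)
stream = [np.array([1.0]), np.array([0.0]), np.array([1.0])]
print(run_uniform_net(W, M, b, stream, p=2))
```

For context, in the Siegelmann-Sontag results this dependence on the number class is sharp: integer weights yield the power of finite automata, rational weights yield Turing machines, and arbitrary real weights exceed Turing power.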

Now we can look at the biochemical networks developed in this work and compare them to the recurrent networks discussed above. Network A (Section 4.2.1) and Network C (Section 4.2.3) are fully connected, and information flows back and forth from each neuron to all the others. This situation is very much like the one described for recurrent neural networks, and in these cases memory, which is necessary to demonstrate computational power, is clearly incorporated in the networks. Network B (Section 4.2.2) is a feedforward network and thus appears to have no memory in this form. However, when we examine the processes taking place in the biochemical neuron more carefully, we can see that the enzymic reactions take into account the concentration of the relevant substrates present in the system. These substrates can be fed as inputs at any time t; however, part of them also remains from reactions that took place earlier, and thus the enzymic system in every form is influenced by the processes that took place at earlier stages. Hence, memory is always incorporated. [Pg.132]
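To make the memory argument concrete, here is a minimal sketch with entirely illustrative kinetics (the rate constants and input sequences are invented numbers, not the networks of Sections 4.2.1-4.2.3): a single enzymic step whose substrate pool at each time step is the fresh input plus whatever was left unconsumed earlier, so the current output depends on the whole input history.

```python
# Illustrative Michaelis-Menten parameters; all numbers are made up.
Vmax, Km, dt = 1.0, 0.5, 0.1

def product_formed(inputs):
    """One enzymic step: the substrate pool accumulates, so the
    leftover substrate carries memory from step to step."""
    s = 0.0                       # substrate remaining in the system
    out = []
    for u in inputs:
        s += u                    # substrate fed at this time step
        v = Vmax * s / (Km + s)   # Michaelis-Menten rate on the total pool
        consumed = min(v * dt, s)
        s -= consumed             # leftover substrate = memory
        out.append(consumed)
    return out

# Same final input, different histories -> different final outputs:
print(product_formed([0.0, 0.0, 1.0])[-1])
print(product_formed([1.0, 1.0, 1.0])[-1])
```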

This activation function is much more complicated than the saturated-linear function used in recurrent neural networks [152-155] and is actually established by the biochemical system. According to Siegelmann [154], use of a complicated activation function does not increase the computational power of the network. [Pg.133]
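For reference, the saturated-linear activation used in [152-155] is the piecewise-linear function

\[
\sigma(x) =
\begin{cases}
0, & x < 0,\\
x, & 0 \le x \le 1,\\
1, & x > 1,
\end{cases}
\]

whereas the biochemical activation is set by the rate law of the enzymic step itself, for example a Michaelis-Menten saturation curve.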

The recurrent network models assume that the structure of the network, as well as the values of the weights, do not change in time. Moreover, only the activation values (i.e., the output of each processor that is used in the next iteration) change in time. In the biochemical network one cannot separate outputs and weights. The outputs of one biochemical neuron are time dependent and enter the following biochemical neurons as they are. However, the coefficients involved in these biochemical processes are the kinetic constants that appear in the rate equations, and these constants are real numbers. The inputs considered in biochemical networks are continuous analog quantities that change over time, whereas the inputs to the recurrent neural networks are sets of binary numbers. [Pg.133]

There are many similarities between recurrent neural networks and the biochemical networks presented in this work. However, the dissimilarities reviewed here are closely related to the inherent characteristics of biochemical systems, such as their kinetic equations. Thus, in future work one may consider recurrent neural networks that are similar to biochemical networks, having the same activation function and the same connections between neurons, and this approach will allow one to assess their computational capabilities. [Pg.133]

The treatment of Helicobacter pylori (HP) infection has become increasingly difficult due to the frequency of antibiotic resistance and recurrence after successful treatment. In Peru, the recurrence rate of the infection is as high as 73% even after successful eradication; in this instance, recurrence is attributed not to antibiotic resistance but to re-infection of patients. In the United States, resistant HP is also of concern. The Helicobacter pylori Antimicrobial Resistance Monitoring Program (HARP) is a multicenter US network that tracks HP patterns of resistance. In 2004, HARP reported that 34% of 347 HP isolates tested were resistant to one or more antibiotics commonly used to treat HP infections. In the US, most antibiotic resistance is associated with metronidazole and clarithromycin, both standard treatment options for HP. Thus, antibiotic resistance and high re-infection rates strongly argue for the development of new therapeutic modalities to prevent and treat HP infections worldwide. [Pg.477]

Whittington MA, Traub RD, Jefferys JG (1995) Synchronized oscillations in interneuron networks driven by metabotropic glutamate receptor activation. Nature 373:612-615
Whittington MA, Traub RD, Faulkner HJ, Stanford IM, Jefferys JG (1997) Recurrent excitatory postsynaptic potentials induced by synchronized fast cortical oscillations. Proc Natl Acad Sci U S A 94:12198-12203 [Pg.247]

A. Blanco, M. Delgado and M. C. Pegalajar, A genetic algorithm to obtain the optimal recurrent neural network, Int. J. Approx. Reasoning, 23(1), 2000, 61-83.

Fig. 10.8 (a) Example of a common neural net (perceptron) architecture; here a network with one hidden layer is shown (Hierlemann et al., 1996). (b) A more sophisticated recurrent neural network utilizing adjustable feedback through recurrent variables. (c) Time-delayed neural network in which time has been utilized as an experimental variable... [Pg.326]

Saha S, Raghava G (2006) Prediction of continuous B-cell epitopes in an antigen using recurrent neural network. Proteins 65:40-48... [Pg.137]

An adaptation of the simple feed-forward network that has been used successfully to model time dependencies is the so-called recurrent neural network. Here, an additional layer (referred to as the context layer) is added. In effect, this means that there is an additional connection from the hidden layer neuron to itself. Each time a data pattern is presented to the network, the neuron computes its output function just as it does in a simple MLP. However, its input now contains a term that reflects the state of the network before the data pattern was seen. Therefore, for subsequent data patterns, the hidden and output nodes will depend on everything the network has seen so far. For recurrent neural networks, therefore, the network behaviour is based on its history. [Pg.2401]
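The architecture described above is commonly known as an Elman network. A minimal sketch of its forward pass follows; the layer sizes, weight scales, and the choice of a sigmoid activation are illustrative assumptions, not taken from the source.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ElmanSketch:
    """Simple MLP plus a context layer that stores the previous
    hidden state, so each output depends on the input history."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.context = np.zeros(n_hidden)  # state before the current pattern

    def step(self, x):
        # The hidden input contains a term reflecting the previous state.
        h = sigmoid(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h                   # copied back for the next pattern
        return sigmoid(self.W_out @ h)

net = ElmanSketch(n_in=3, n_hidden=5, n_out=1)
for x in np.eye(3):   # identical pattern types, different histories
    print(net.step(x))
```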

It is now well known that artificial neural networks (ANNs) are nonlinear tools well suited to finding complex relationships among large data sets [43]. Basically, an ANN consists of processing elements (i.e., neurons) organized in oriented groups (i.e., layers). The arrangement of neurons and their interconnections can have an important impact on the modeling capabilities of the ANNs. Data can flow between the neurons in these layers in different ways: in feedforward networks no loops occur, whereas in recurrent networks feedback connections are found [79,80].

