
Recurrent Neural Networks

Now we can look at the biochemical networks developed in this work and compare them to the recurrent networks discussed above. Network A (Section 4.2.1) and Network C (Section 4.2.3) are fully connected to one another, and information flows back and forth from each neuron to all the others. This situation is very much like the one described for recurrent neural networks, and in these cases memory, which is necessary to demonstrate computational power, is clearly incorporated in the networks. Network B (Section 4.2.2) is a feedforward network and thus appears to have no memory in this form. However, when we examine the processes taking place in the biochemical neuron more carefully, we can see that the enzymic reactions take into account the concentrations of the relevant substrates present in the system. These substrates can be fed as inputs at any time t, but part of them also remains from the reactions that took place before time t, and thus the enzymic system in every form is influenced by the processes that took place at earlier stages. Hence, memory is always incorporated. [Pg.132]

This activation function is much more complicated than the saturated-linear function used in recurrent neural networks [152-155] and is actually established by the biochemical system. According to Siegelmann [154], use of a complicated activation function does not increase the computational power of the network. [Pg.133]

The recurrent network models assume that the structure of the network, as well as the values of the weights, does not change in time; only the activation values (i.e., the output of each processor, which is used in the next iteration) change in time. In the biochemical network one cannot separate outputs from weights. The outputs of one biochemical neuron are time dependent and enter the following biochemical neurons as they are, whereas the coefficients involved in these biochemical processes are the kinetic constants that appear in the rate equations, and these constants are real numbers. Furthermore, the inputs considered in biochemical networks are continuous analog quantities that change over time, while the inputs to recurrent neural networks are sets of binary numbers. [Pg.133]

There are many similarities between recurrent neural networks and the biochemical networks presented in this work. However, the dissimilarities reviewed here are closely related to the inherent characteristics of biochemical systems, such as their kinetic equations. Thus, in future work one may consider recurrent neural networks that are similar to biochemical networks (having the same activation function and the same connections between neurons); this approach would allow one to assess their computational capabilities. [Pg.133]

A. Blanco, M. Delgado and M. C. Pegalajar, A genetic algorithm to obtain the optimal recurrent neural network, Int. J. Approx. Reasoning, 23(1), 2000, 61–83. [Pg.277]

Fig. 10.8 (a) Example of a common neural net (perceptron) architecture; here, one hidden layer is shown (Hierlemann et al., 1996). (b) A more sophisticated recurrent neural network utilizing adjustable feedback through recurrent variables. (c) Time-delayed neural network in which time has been utilized as an experimental variable... [Pg.326]

Saha, S., Raghava, G. (2006) Prediction of continuous B-cell epitopes in an antigen using recurrent neural network. Proteins 65:40–48. [Pg.137]

An adaptation of the simple feed-forward network that has been used successfully to model time dependencies is the so-called recurrent neural network. Here, an additional layer (referred to as the context layer) is added. In effect, this means that there is an additional connection from the hidden layer neuron to itself. Each time a data pattern is presented to the network, the neuron computes its output function just as it does in a simple MLP. However, its input now contains a term that reflects the state of the network before the data pattern was seen. Therefore, for subsequent data patterns, the hidden and output nodes will depend on everything the network has seen so far. For recurrent neural networks, therefore, the network behaviour is based on its history. [Pg.2401]
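As a minimal sketch of this context-layer idea, the following Python/NumPy fragment implements one Elman-style recurrent step; the layer sizes, weight names, and tanh activation are illustrative assumptions rather than details taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs, 5 hidden (context) units, 1 output.
n_in, n_hid, n_out = 3, 5, 1

# Weights: input -> hidden, context (hidden at t-1) -> hidden, hidden -> output.
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # the recurrent "context" connection
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))

def elman_step(x, h_prev):
    """One time step: the hidden input mixes the current pattern x with
    the context h_prev, i.e. the state of the network before x was seen."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)  # hidden state depends on history
    y = W_hy @ h                           # output computed as in a simple MLP
    return y, h

# Feed a short sequence; h carries everything the network has seen so far.
h = np.zeros(n_hid)
for t, x in enumerate(rng.normal(size=(4, n_in))):
    y, h = elman_step(x, h)
    print(f"t={t}: output={y}")
```

Because h is carried from one call to the next, the output at each step depends on all patterns presented so far, which is exactly the history-dependent behaviour described above.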

A. Vasilache, B. Dahhou, G. Roux, and G. Goma. Classification of fermentation process models using recurrent neural networks. Int. J. Systems Science, 32(9):1139–1154, 2001. [Pg.300]

Y. You and M. Nikolaou. Dynamic process modeling with recurrent neural networks. AIChE J., 39(10):1654–1667, 1993. [Pg.303]

Fig. 5. Basic anatomy of (a) a three-layer feedforward perceptron and (b) a state recurrent neural network.
Williams, R.J. and Zipser, D. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Comput., 1:270–280. [Pg.201]

Rao, D.H., Bitner, D., and Gupta, M.M., Feedback-error learning scheme using recurrent neural networks for nonlinear dynamic systems, Proc. IEEE, 21–38, 1994. [Pg.251]

Cadini, F., Zio, E., Pedroni, N. 2007. Dynamic system modelling by locally recurrent neural networks. Proceedings of the International Conference on Safety and Reliability ESREL 07, Stavanger, Norway, Vol. 1, 395–403. [Pg.1934]

Pollastri, G., Przybylski, D., Rost, B., Baldi, P. (2002). Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles. Proteins, Vol. 47, pp. 228–235. [Pg.137]

Compression of Ultraviolet-Visible Spectrum with Recurrent Neural Network. [Pg.326]


In contrast to feedforward neural networks, in recurrent networks the neuron outputs can be connected back to their inputs. Thus, signals in the network can circulate continuously. Until recently, only a limited number of recurrent neural networks had been described. [Pg.2054]

Since dissolution is a time-dependent process, dynamic (recurrent) neural networks are the more appropriate tool for modeling it. Consequently, the main conclusion of this approach [19] is the superiority of Elman neural networks over static neural networks in drug-release prediction. [Pg.357]

Y. Tian, J. Zhang, and J. Morris, Modeling and optimal control of a batch polymerization reactor using a hybrid stacked recurrent neural network model, Ind. Eng. Chem. Res., 40, 4525-4535, 2001. [Pg.361]

Algorithm to Obtain the Optimal Recurrent Neural Network. [Pg.327]

Ressom, H. W., Y. Zhang, et al. (2006). Inferring network interactions using recurrent neural networks and swarm intelligence. Conf Proc IEEE Eng Med Biol Soc (EMBC 2006), New York City, New York, USA. [Pg.241]

Black-box models, or empirical models, do not describe the physical phenomena of the process; they are based on input/output data and describe only the relationship between the measured input and output data of the process. These models are useful when limited time is available for model development and/or when there is insufficient physical understanding of the process. Mathematical representations include time-series models such as ARMA, ARX, and Box and Jenkins models, as well as recurrent neural network models and recurrent fuzzy models. [Pg.20]
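To make the black-box idea concrete, here is a small sketch of fitting a first-order ARX model by least squares; the model order, the coefficient values, and the synthetic data are illustrative assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic input/output records from an unknown "process":
# only the measured u (input) and y (output) are used for modeling.
u = rng.normal(size=200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.normal()

# First-order ARX model: y(k) = a*y(k-1) + b*u(k-1).
# Stack the regressors and solve the least-squares problem;
# no physical knowledge of the process enters the fit.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"estimated a={a_hat:.3f}, b={b_hat:.3f}")  # close to 0.8 and 0.5
```

Only the measured input/output sequences enter the identification, which is the defining property of an empirical model.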

Artificial neural networks consist of different types of layers: an input layer, one or more hidden layers, and an output layer. All of these layers can consist of one or more neurons. In a feed-forward network, a neuron in a particular layer passes its output only to neurons in the next layer, which is why it is called feed-forward. In other networks the neurons may be connected otherwise; an example is the recurrent neural network, in which there are also links that connect neurons to neurons in a previous layer. A fully connected network is one in which all the neurons of one layer are connected to all neurons in the next layer. [Pg.361]
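A minimal sketch of such a fully connected feed-forward pass follows; the layer sizes and the sigmoid activation are assumptions of this example, not details from the source.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)

# Fully connected feed-forward net: every neuron in one layer feeds
# every neuron in the next. Illustrative sizes: 4 -> 6 -> 2.
W1 = rng.normal(size=(6, 4))   # input layer -> hidden layer
W2 = rng.normal(size=(2, 6))   # hidden layer -> output layer

def forward(x):
    h = sigmoid(W1 @ x)        # hidden-layer activations
    return sigmoid(W2 @ h)     # output-layer activations

print(forward(rng.normal(size=4)))
```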

The learning or training of a feed-forward neural network is different from that of a recurrent neural network. For the feed-forward neural network, the back-propagation algorithm, as discussed in Section 27.3, can be written as ... [Pg.369]

Hammer, B. (2000). Learning with Recurrent Neural Networks, Springer, Berlin, 149 p. [Pg.110]

Feedback Dynamics. Assuming the counteractive rule, the optical feedback system automatically updates the illumination pattern. The feedback is implemented with a discrete form of recurrent neural network dynamics known as the Hopfield-Tank model [8]: y_i(t + Δt) = 1 / (1 + exp(−Σ_j w_ij y_j(t))), where w_ij is the... [Pg.46]
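Under the assumption that the update uses a logistic activation and a symmetric weight matrix (both standard for Hopfield-type networks, though not stated in this excerpt), a minimal sketch of the discrete feedback iteration is:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2            # symmetric weights, typical of Hopfield-type nets (assumption)
np.fill_diagonal(W, 0.0)     # no self-connection

y = rng.uniform(size=n)      # initial activations in (0, 1)
for _ in range(100):
    # Discrete update: y_i(t + dt) = 1 / (1 + exp(-sum_j w_ij * y_j(t)))
    y_new = 1.0 / (1.0 + np.exp(-W @ y))
    if np.max(np.abs(y_new - y)) < 1e-6:  # stop once the feedback settles
        y = y_new
        break
    y = y_new

print("final activations:", np.round(y, 3))
```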

Hagan, M. T., O. De Jesus and R. Schultz. Training recurrent networks for filtering and control, Chapter 12. In Recurrent Neural Networks: Design and Applications, Editors L. Medsker and L. C. Jain. CRC Press, Boca Raton, FL, pp. 311–340, 1999. [Pg.572]

