Big Chemical Encyclopedia



Neural networks Feedback

In analytical chemistry, artificial neural networks (ANNs) are mostly used for calibration (see Sect. 6.5) and classification problems. Feedback networks, on the other hand, are useful for optimization problems, especially nets of the Hopfield type (Hopfield [1982]; Lee and Sheu [1990]). [Pg.146]
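The excerpt's point that Hopfield-type feedback nets suit optimization can be illustrated with a minimal sketch of a discrete Hopfield network: Hebbian outer-product weights define an energy function that asynchronous updates descend, here used for associative recall of a stored pattern. The pattern and network size below are illustrative assumptions, not taken from the source.

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # stored bipolar pattern
W = np.outer(pattern, pattern).astype(float)      # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                          # no self-feedback

state = pattern.copy()
state[0] *= -1                                    # corrupt one unit
state[3] *= -1                                    # corrupt another

for _ in range(10):                               # asynchronous updates descend the energy
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1
# state has now relaxed back to the stored pattern
```

Each update can only lower (or keep) the network energy E = -½ sᵀWs, which is what makes such feedback nets usable as optimizers: the problem's cost is encoded into W and the dynamics settle into a minimum.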

Fig. 10.8 (a) Example of a common neural net (perceptron) architecture; one hidden layer is shown (Hierlemann et al., 1996). (b) A more sophisticated recurrent neural network utilizing adjustable feedback through recurrent variables. (c) Time-delayed neural network in which time has been utilized as an experimental variable. [Pg.326]

However, feedback linearizing control requires knowledge of an accurate model of the process. Hence, in the presence of parametric model uncertainties, adaptive or robust control strategies have been proposed [4, 10, 18, 30]; in [47], model uncertainties are tackled by adopting an artificial neural network (ANN) in conjunction with different linearizing control strategies. [Pg.96]
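The need for an accurate process model can be seen in a toy feedback linearizing controller. The scalar plant x' = f(x) + g(x)·u below, with f(x) = -x³ and g(x) = 1 + x², is a hypothetical example (not from the cited works): the law u = (v - f(x)) / g(x) cancels the nonlinearity exactly only when f and g are known, which is why ANN approximations are brought in under model uncertainty.

```python
def f(x):
    return -x**3          # assumed plant drift (illustrative)

def g(x):
    return 1.0 + x * x    # assumed input gain (illustrative, never zero)

dt, k = 0.001, 2.0
x, x_ref = 1.5, 0.0
for _ in range(5000):                 # simulate 5 s with Euler steps
    v = -k * (x - x_ref)              # linear control for the linearized plant
    u = (v - f(x)) / g(x)             # exact cancellation: closed loop is x' = v
    x += (f(x) + g(x) * u) * dt       # plant integration
# with perfect f and g, x decays like exp(-k*t) toward x_ref
```

If the controller's f or g differs from the true plant, the cancellation is imperfect and the closed loop is no longer linear; that residual mismatch is what the adaptive and ANN-based schemes in the excerpt compensate for.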

Milton, J., an der Heiden, U., Longtin, A., and Mackey, M., Complex dynamics and noise in simple neural networks with delayed mixed feedback, Biomedica Biochimica Acta, Vol. 49, No. 8-9, 1990, pp. 697-707. [Pg.420]

There are literally dozens of kinds of neural network architectures in use. A simple taxonomy divides them into two types based on learning algorithm (supervised, unsupervised) and into subtypes based on whether they are feed-forward or feedback networks. In this chapter, two other commonly used architectures, radial basis functions and Kohonen self-organizing architectures, will be discussed. Additionally, variants of multilayer perceptrons that have enhanced statistical properties will be presented. [Pg.41]
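Of the architectures named above, the Kohonen self-organizing map is compact enough to sketch. The following unsupervised update rule (all sizes, rates, and the 1-D chain of five units are illustrative assumptions) moves the best-matching unit and its chain neighbors toward each input, so the map gradually organizes itself over the data.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=(200, 2))   # 2-D inputs in the unit square
nodes = rng.uniform(0.0, 1.0, size=(5, 2))    # 5 map units arranged as a 1-D chain

for t, x in enumerate(data):
    lr = 0.5 * (1.0 - t / len(data))          # decaying learning rate
    bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))   # best-matching unit
    for j in range(len(nodes)):
        h = np.exp(-((j - bmu) ** 2) / 2.0)   # neighborhood kernel on the chain
        nodes[j] += lr * h * (x - nodes[j])   # pull unit (and neighbors) toward x
```

Because no target outputs are used, this is the unsupervised branch of the taxonomy; a multilayer perceptron trained on labeled pairs would sit in the supervised branch.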

State feedback control is commonly used in control systems due to its simple structure and powerful capabilities. Data-driven methods such as neural networks are useful only in situations where the state variables are fully measured. For this system, in which the state variables are not measurable and the measurement function is nonlinear, we depend on a system model for state estimation. On the other hand, as shown in Figure 2, in open-loop situations the system exhibits limit-cycle behavior and measurements do not give any information about the system dynamics. Therefore, we use a model-based approach. [Pg.384]

It is now well known that artificial neural networks (ANNs) are nonlinear tools well suited to finding complex relationships among large data sets [43]. Basically, an ANN consists of processing elements (i.e., neurons) organized in different oriented groups (i.e., layers). The arrangement of neurons and their interconnections can have an important impact on the modeling capabilities of ANNs. Data can flow between the neurons in these layers in different ways. In feedforward networks no loops occur, whereas in recurrent networks feedback connections are found [79, 80]. [Pg.663]
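The feedforward/recurrent distinction in the excerpt can be made concrete with two tiny forward passes (all weights and sizes here are arbitrary illustrative choices): the feedforward map is stateless, while the recurrent step feeds its hidden state back into itself, so repeating the same input yields a different output.

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(4, 3))    # input -> hidden
W_rec = rng.normal(size=(4, 4))   # hidden -> hidden (the feedback connections)
W_out = rng.normal(size=(1, 4))   # hidden -> output

def feedforward(x):
    # data flows strictly input -> hidden -> output; no loops
    return W_out @ np.tanh(W_in @ x)

def recurrent_step(x, h):
    # the hidden state h loops back into the hidden layer (Elman-style)
    h_new = np.tanh(W_in @ x + W_rec @ h)
    return W_out @ h_new, h_new

x = np.ones(3)
y_ff = feedforward(x)
h = np.zeros(4)
y1, h = recurrent_step(x, h)      # with zero initial state, matches feedforward
y2, h = recurrent_step(x, h)      # same input, different output: the loop carries memory
```

The feedback weights `W_rec` give the network an internal state, which is what lets recurrent nets model dynamics and delays that a pure feedforward map cannot represent.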

Our initial studies of dynamics in biochemical networks included spatially localized components [32]. As a consequence, there will be delays involved in the transport between the nuclear and cytoplasmic compartments. Depending on the spatial structure, different dynamical behaviors could be facilitated, but the theoretical methods are useful to help understand the qualitative features. In other (unpublished) work, computations were carried out on feedback loops with cyclic attractors in which a delay was introduced in one of the interactions. Although the delay led to an increase of the period, the patterns of oscillation remained the same. However, delays in differential equations that model neural networks and biological control systems can introduce novel dynamics that are not present without a delay (for example, see Refs. 57 and 58). [Pg.174]
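A standard toy illustration of delay-induced dynamics (chosen here for concreteness; it is not the model from the excerpt) is linear negative feedback with a lag, x'(t) = -x(t - τ). Without delay the origin is stable, but for τ > π/2 the delay destabilizes it into oscillation, i.e., the delay introduces dynamics absent from the undelayed system.

```python
def simulate(tau, dt=0.001, t_end=40.0):
    """Euler-integrate x'(t) = -x(t - tau) from constant history x = 1."""
    n_delay = int(tau / dt)
    hist = [1.0] * (n_delay + 1)          # initial history on [-tau, 0]
    for _ in range(int(t_end / dt)):
        x_delayed = hist[-(n_delay + 1)]  # x(t - tau), n_delay steps back
        hist.append(hist[-1] - x_delayed * dt)
    return hist

no_delay = simulate(tau=0.001)  # effectively undelayed: decays to 0
delayed = simulate(tau=2.0)     # tau > pi/2: growing oscillation instead of decay
```

The stability boundary at τ = π/2 comes from the characteristic equation λ = -e^(-λτ), whose roots cross the imaginary axis at λ = ±i when τ = π/2; past that point the feedback arrives so late that it reinforces rather than damps the motion.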

Miyamoto, H., Kawato, M., Setoyama, T., and Suzuki, R. 1988. Feedback-error-learning neural network for trajectory control of a robotic manipulator. Neural Networks, 1 251-265. [Pg.200]

Feedback error learning (FEL) is a hybrid technique [113] that uses the mapping to replace the estimation of parameters within the feedback loop of a closed-loop control scheme. FEL employs a feed-forward neural network structure that, during training, learns the inverse dynamics of the controlled object. The method is based on contemporary physiological studies of the human cortex [114], and is shown in Figure 15.6. [Pg.243]

The total control effort u applied to the plant is the sum of the feedback control output and the network control output. The ideal configuration of the neural network would correspond to the inverse mathematical model of the system's plant. The network is given information on the desired position and its derivatives, and it calculates the control effort necessary to make the output of the system follow the desired trajectory. If there are no disturbances, the system error will be zero. [Pg.243]
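The FEL scheme described above can be sketched on a toy first-order plant x' = -a·x + b·u. A linear "network" u_nn = w·(x_des', x_des) stands in for the feed-forward net and learns the plant's inverse dynamics; the feedback controller's own output u_fb serves as the training error, as in the scheme. The plant, gains, and learning rate are all illustrative assumptions, not values from the source.

```python
import math

a, b, dt = 1.0, 0.5, 0.01
w = [0.0, 0.0]          # "network" weights; the true inverse model is [1/b, a/b] = [2, 2]
kp, eta = 5.0, 0.5      # feedback gain and learning rate

x = 0.0
for step in range(20000):               # 200 s of simulated tracking
    t = step * dt
    x_des, xdot_des = math.sin(t), math.cos(t)   # desired position and derivative
    feats = (xdot_des, x_des)
    u_nn = w[0] * feats[0] + w[1] * feats[1]     # network (inverse-model) output
    u_fb = kp * (x_des - x)                      # conventional feedback term
    u = u_fb + u_nn                              # total control effort = sum of both
    # FEL update: the feedback signal itself is the network's error signal
    w = [wi + eta * u_fb * fi * dt for wi, fi in zip(w, feats)]
    x += (-a * x + b * u) * dt                   # plant step (Euler)
# as learning proceeds, u_fb shrinks and u_nn supplies most of the control
```

As the mismatch sentence that follows suggests, u_fb measures how far the learned inverse model is from the true one: once w approaches [2, 2], the feedback contribution vanishes and the feed-forward term alone tracks the trajectory.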

In essence, the output of the feedback controller is an indication of the mismatch between the dynamics of the plant and the inverse-dynamics model obtained by the neural network. If the true inverse-dynamics model has been learned, the neural network alone will provide the necessary control signal to achieve the desired trajectory [118, 120]. [Pg.245]

Kawato, M., Feedback-error-learning neural network for supervised motor learning, Adv. Neural Comput., 365-372, 1990. [Pg.250]

Rao, D.H., Bitner, D., and Gupta, M.M., Feedback-error learning scheme using recurrent neural networks for nonlinear dynamic systems, Proc. IEEE, 21-38, 1994. [Pg.251]

Control — complex neural networks with multiple sensory input; computer systems with limited sensory feedback. [Pg.342]

It is desirable to keep the variance of the resource occupancy small. The variance can be brought down by increasing the differential feedback order. The use of a differentially fed artificial neural network in Web traffic shaping is explained at length in Manjunath (2006). As the order of differential feedback increases, the error reduces. [Pg.256]

