Big Chemical Encyclopedia


Feed-forward networks

P is a vector of inputs and T a vector of target (desired) values. The command newff creates the feed-forward network and defines the activation functions and the training method. The default is Levenberg-Marquardt back-propagation training, since it is fast, although it does require a lot of memory. The train command trains the network; in this case the network is trained for 50 epochs. The results before and after training are plotted. [Pg.423]
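The commands mentioned here belong to the MATLAB Neural Network Toolbox. As a rough equivalent in Python, the sketch below trains a small feed-forward network with scikit-learn's MLPRegressor; note that scikit-learn does not offer Levenberg-Marquardt, so the quasi-Newton 'lbfgs' solver stands in for it, and the data are purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# P: inputs (n_samples x n_features), T: target (desired) values
P = np.linspace(-1.0, 1.0, 21).reshape(-1, 1)
T = np.sin(np.pi * P).ravel()              # illustrative targets only

# One hidden layer of 5 tanh neurons; 'lbfgs' is used here because
# Levenberg-Marquardt is not available in scikit-learn.
net = MLPRegressor(hidden_layer_sizes=(5,), activation='tanh',
                   solver='lbfgs', max_iter=50, random_state=0)
net.fit(P, T)                              # train for up to 50 iterations

T_pred = net.predict(P)                    # network output after training
```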

The structure of a neural network forms the basis for information storage and governs the learning process. The type of neural network used in this work is known as a feed-forward network: the information flows only in the forward direction, i.e., from input to output in the testing mode. A general structure of a feed-forward network is shown in Fig. 1. Connections are made be-... [Pg.2]

The neurons in both the hidden and output layers perform summing and nonlinear mapping functions. The functions carried out by each neuron are illustrated in Fig. 2. Each neuron occupies a particular position in a feed-forward network and accepts inputs only from the neurons in the preceding layer and sends its outputs to other neurons in the succeeding layer. The inputs from other nodes are first weighted and then summed. This summing of the weighted inputs is carried out by a processor within the neuron. The sum that is obtained is called the activation of the neuron. Each activated neu-... [Pg.3]
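As a minimal sketch of the summing and nonlinear mapping carried out by each neuron (the logistic sigmoid and the weights below are illustrative assumptions, not taken from the cited work):

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs (the neuron's activation),
    followed by a nonlinear mapping, here a logistic sigmoid."""
    activation = np.dot(weights, inputs) + bias   # summing of weighted inputs
    return 1.0 / (1.0 + np.exp(-activation))      # nonlinear mapping

# Three inputs arriving from neurons in the preceding layer
x = np.array([0.2, -0.5, 0.9])
w = np.array([0.4, 0.1, -0.7])                    # hypothetical weights
print(neuron_output(x, w, bias=0.1))
```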

Current feed-forward network architectures work better than the current feed-back architectures for a number of reasons. First, the capacity of feed-back networks is unimpressive. Second, in the running mode, feed-forward models are faster, since they need only a single pass through the system to find a solution. In contrast, feed-back networks must cycle repetitively until... [Pg.4]

Many different types of networks have been developed. They all consist of small, interconnected units called neurons. The local behaviour of these units determines the overall behaviour of the network. The most common is the multi-layer feed-forward network (MLF). Recently, other networks such as the Kohonen, radial basis function and ART networks have raised interest in the chemical application area. In this chapter we focus on the MLF networks. The principles of some of the other networks are explained, and we also discuss how these networks relate to other algorithms described elsewhere in this book. [Pg.649]

Radial basis function networks (RBF) are a variant of three-layer feed-forward networks (see Fig. 44.18). They contain a pass-through input layer, a hidden layer and an output layer, but a different approach to modelling the data is used. The transfer function in the hidden layer of RBF networks is called the kernel or basis function. For a detailed description the reader is referred to references [62,63]. Each node in the hidden layer thus contains such a kernel function. The main difference between the transfer function in MLF and the kernel function in RBF is that the latter (usually a Gaussian function) defines an ellipsoid in the input space. Whereas the MLF network basically divides the input space into regions via hyperplanes (see e.g. Figs. 44.12c and d), RBF networks divide the input space into hyperspheres by means of the kernel function with specified widths and centres. This can be compared with the density or potential methods in pattern recognition (see Section 33.2.5). [Pg.681]
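A hidden node of an RBF network can be sketched as follows, assuming the common Gaussian kernel with a single width parameter; the exact form used in references [62,63] may differ.

```python
import numpy as np

def gaussian_kernel(x, centre, width):
    """Gaussian basis function: the response decays with the distance of
    the input vector x from the node's centre, so each node covers a
    (hyper)spherical region of the input space."""
    r2 = np.sum((x - centre) ** 2)
    return np.exp(-r2 / (2.0 * width ** 2))

x = np.array([0.3, 0.8])
print(gaussian_kernel(x, centre=np.array([0.0, 1.0]), width=0.5))
```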

It is interesting to note that the data processing that occurs during the operation of a PCR model is just a special case of that of an ANN feed-forward network, where the input weights (W1) are the PC loadings (P, in Equation 12.19), the output weights (W2) are the y loadings (q, in Equation 12.30), and there is no nonlinear transfer function in the hidden layer [67]. [Pg.388]
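In matrix terms this equivalence is easy to verify; the sketch below (NumPy, with purely hypothetical loadings standing in for P and q) evaluates a PCR model as a two-layer network whose hidden layer has no nonlinear transfer function.

```python
import numpy as np

def pcr_predict(X, P, q):
    """PCR prediction written as a feed-forward pass: the PC loadings P
    play the role of the input weights, the y-loadings q the role of the
    output weights, and the hidden layer is purely linear."""
    scores = X @ P          # "hidden layer": projection onto the PCs
    return scores @ q       # "output layer": weighted sum of the scores

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))     # 10 samples, 4 variables
P = rng.normal(size=(4, 2))      # hypothetical PC loadings
q = rng.normal(size=(2, 1))      # hypothetical y-loadings
y_hat = pcr_predict(X, P, q)
```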

Artificial neural networks (ANN) are computing tools made up of simple, interconnected processing elements called neurons. The neurons are arranged in layers. The feed-forward network consists of an input layer, one or more hidden layers, and an output layer. ANNs are well suited to assimilating knowledge about complex processes, provided they are properly presented with input-output patterns describing the process. [Pg.36]

Derks et al. [70] employed ANNs to cancel out noise in ICP. The results of neural networks (an Adaline network and a multi-layer feed-forward network) were compared with the more conventional Kalman filter. [Pg.272]

J. R. M. Smits, W. J. Melssen, L. M. C. Buydens and G. Kateman, Using artificial neural networks for solving chemical problems. Part I. Multi-layer feed-forward networks, Chemom. Intell. Lab. Syst., 22(2), 1994, 165-189. [Pg.276]

Figure 8.17 shows a very specific case of a feed-forward network with four inputs, three hidden nodes, and one output. However, such networks can vary widely in their design. First of all, one can choose any number of inputs, hidden nodes, and outputs for the network. In addition, one can even choose to have more than one hidden layer in the network. Furthermore, it is common to perform scaling operations on both the inputs and the outputs, as this can enable more efficient training of the network. Finally, the transfer function used in the hidden layer (f) can vary widely as well. Many feed-forward networks use a non-linear function called the sigmoid function, defined as ... [Pg.265]
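The sigmoid referred to here is, in its standard form, f(x) = 1/(1 + e^(-x)). A forward pass through the 4-input, 3-hidden-node, 1-output network of Figure 8.17 might then be sketched as follows; the weights are hypothetical and the output node is taken to be linear, a common but not universal choice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # standard sigmoid transfer function

def forward(x, W1, b1, W2, b2):
    """One pass through a 4-input, 3-hidden-node, 1-output network."""
    h = sigmoid(W1 @ x + b1)              # hidden layer (nonlinear transfer)
    return W2 @ h + b2                    # single output node (linear here)

rng = np.random.default_rng(1)
x = rng.uniform(size=4)                   # four (already scaled) inputs
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)
print(forward(x, W1, b1, W2, b2))
```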

Figure 1.16. Schematic architecture of a three-layer feed-forward network.
The processing elements are typically arranged in layers; one of the most commonly used arrangements is known as a back-propagation, feed-forward network, as shown in Figure 7.8. In this network there is a layer of neurons for the input, one unit for each physicochemical descriptor. These neurons do no processing, but simply act as distributors of their inputs (the values of the variables for each compound) to the neurons in the next layer, the hidden layer. The input layer also includes a bias neuron that has a constant output of 1 and serves as a scaling device to ensure... [Pg.175]

The four experiments done previously with Rexp (= 0.5, 1, 3, 4) were used to train the neural network, and the experiment with Rexp = 2 was used to validate the system. Dynamic models of process-model mismatches for three state variables (i.e. x) of the system are considered here. They are the instant distillate composition (xD), the accumulated distillate composition (xa) and the amount of distillate (Ha). The inputs and outputs of the network are as in Figure 12.2. A multilayered feed-forward network, trained with the back-propagation method using a momentum term as well as an adaptive learning rate to speed up the rate of convergence, is used in this work. The error between the actual mismatch (obtained from simulation and experiments) and that predicted by the network is used as the error signal to train the network, as described earlier. [Pg.376]
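The training variant described (back-propagation with a momentum term and an adaptive learning rate) boils down to a weight update of roughly the following form; the constants and arrays in this sketch are illustrative, not those of the cited work.

```python
import numpy as np

def update_weights(w, grad, velocity, lr, err, prev_err,
                   momentum=0.9, lr_inc=1.05, lr_dec=0.7):
    """One back-propagation weight update with a momentum term and a
    simple adaptive learning rate: the rate grows while the training
    error keeps falling and shrinks when it rises."""
    lr = lr * lr_inc if err < prev_err else lr * lr_dec
    velocity = momentum * velocity - lr * grad    # momentum smooths the step
    return w + velocity, velocity, lr

# A single illustrative update for a small weight vector
w, v = np.array([0.5, -0.2]), np.zeros(2)
grad = np.array([0.1, -0.3])                      # hypothetical error gradient
w, v, lr = update_weights(w, grad, v, lr=0.05, err=0.8, prev_err=1.0)
```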

Figure 13.9 Architecture of a fully connected feed-forward network. Formal neurons are drawn as circles, and weights are represented by lines connecting the neuron layers. Fan-out neurons are drawn in white, sigmoidal neurons in black.
In the previous chapter a simple two-layer artificial neural network was illustrated. Such two-layer, feed-forward networks have an interesting history and are commonly called perceptrons. Similar networks with more than two layers are called multilayer perceptrons, often abbreviated as MLPs. In this chapter the development of perceptrons is sketched with a discussion of particular applications and limitations. Multilayer perceptron concepts are developed; applications, limitations and extensions to other kinds of networks are discussed. [Pg.29]

Networks based on radial basis functions have been developed to address some of the problems encountered with training multilayer perceptrons: radial basis function networks are guaranteed to converge, and training is much more rapid. Both are feed-forward networks with similar-looking diagrams and similar applications; however, the principles of action of radial basis function networks and the way they are trained are quite different from multilayer perceptrons. [Pg.41]

Neural network architectures: 2L/FF = two-layer, feed-forward network (i.e., perceptron); 3L or 4L/FF = three- or four-layer, feed-forward network (i.e., multi-layer perceptron). [Pg.104]

An adaptation of the simple feed-forward network that has been used successfully to model time dependencies is the so-called recurrent neural network. Here, an additional layer (referred to as the context layer) is added. In effect, this means that there is an additional connection from the hidden layer neuron to itself. Each time a data pattern is presented to the network, the neuron computes its output function just as it does in a simple MLP. However, its input now contains a term that reflects the state of the network before the data pattern was seen. Therefore, for subsequent data patterns, the hidden and output nodes will depend on everything the network has seen so far. For recurrent neural networks, therefore, the network behaviour is based on its history. [Pg.2401]
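A minimal sketch of such a recurrent (Elman-type) hidden layer is given below: the context term h_prev carries the state of the network from the previous pattern into the current update. All weights and the tanh activation are illustrative assumptions.

```python
import numpy as np

def recurrent_step(x, h_prev, W_in, W_ctx, b):
    """Hidden-layer update of an Elman-style recurrent network: the current
    input is combined with a context term reflecting the state of the
    network before this pattern was presented."""
    return np.tanh(W_in @ x + W_ctx @ h_prev + b)

rng = np.random.default_rng(2)
W_in, W_ctx, b = rng.normal(size=(3, 2)), rng.normal(size=(3, 3)), np.zeros(3)

h = np.zeros(3)                          # initial context: no history yet
for x in rng.normal(size=(5, 2)):        # a short sequence of data patterns
    h = recurrent_step(x, h, W_in, W_ctx, b)
print(h)                                 # depends on everything seen so far
```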

This structure formed a feed-forward network (Fig. 7.4) (4). Input nodes in the first layer corresponded to the independent variables characterizing each observation, taken directly from the parameters of the experimental design. The input information was transmitted to layer 2, where the data were processed. Layer 2 consisted of numerous hidden nodes that connected layer 1 to layer 3. Layer 3 consisted of the output nodes, which were the mobilities of the analytes. [Pg.175]

The application of feed-forward neural networks has proved effective in protein 2D structure prediction (see Chapter 6 of Volume I, (Rost et al.,... [Pg.430]

Nowotny, T., Huerta, R.: Explaining synchrony in feed-forward networks: Are McCulloch-Pitts neurons good enough? Biol. Cybern. 89, 237-241 (2003)... [Pg.32]

The way information flows in a feed-forward network classifies such networks as hierarchical systems. In such systems, members are categorized by levels, from lowest to highest, and they can communicate only from a lower level to a higher one, not in the opposite direction. It is worth noting that in an MLP network, input layer neurons do not act as real neurons in the sense that they do not apply an activation function; they act instead as buffers and simply distribute the signals coming from the outside world to the first hidden-layer neurons. [Pg.145]

K. Hornik, M. Stinchcombe and H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feed-forward networks, Neural Networks, 3, 551-560 (1990). [Pg.63]

The first ANN classifier used in the proposed model was a two-layer feed-forward network trained with back-propagation (BP). The network received 16 real values... [Pg.49]


See other pages where Feed-forward networks are mentioned: [Pg.4] [Pg.387] [Pg.179] [Pg.266] [Pg.176] [Pg.367] [Pg.73] [Pg.21] [Pg.25] [Pg.134] [Pg.181] [Pg.50] [Pg.66] [Pg.193] [Pg.1779] [Pg.241] [Pg.243] [Pg.221] [Pg.220] [Pg.258]



Feed-forward

Feed-forward network architectures

Feed-forward network, artificial neural

Feed-forward neural network


Multilayer feed forward (MLF) networks

Multilayer feed-forward network

Neural multi-layer-feed-forward network

Neural networks feed-forward computational

Simple Feed-Forward Network Example

Three-layer forward-feed neural network
