Simple Feed-Forward Network Example

Weights can also be seen as a vector W, with the values given for this example. [Pg.26]

Further assume that the transfer function for the input units is unity, i.e., whatever input is received is passed through without change to the next layer, F(x) = x. Assume a simple threshold function (Heaviside function) for the single output unit. [Pg.26]

Computation of the output follows from equation 2.3 (simplified in this case, since there is only one output neuron): the weighted sum of the inputs, W · I, is formed and passed through the output transfer function, Out = F(W · I). [Pg.26]

A different set of inputs, for example Iᵀ = [0.5, 0.1, 0.1], would have yielded an output of 0 (Iᵀ is the transpose of vector I). The same principle of feed-forward information flow, weighted sums and transformation applies with multiple units in each layer and, indeed, with multiple layers. Multilayer networks will be discussed in the next chapter. [Pg.26]
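The single-neuron computation above is easy to sketch in code. The weight values below are assumptions chosen for illustration (the excerpt's actual vector W is not reproduced here), but the structure, an identity transfer on the input units, a weighted sum, and a Heaviside threshold on the output, follows the description:

```python
def heaviside(net):
    """Threshold (Heaviside) transfer function for the output unit."""
    return 1 if net > 0 else 0

def forward(inputs, weights):
    """Identity transfer on the input units, weighted sum into the output unit."""
    net = sum(w * x for w, x in zip(weights, inputs))  # W . I
    return heaviside(net)

W = [-0.5, 1.0, 1.0]                  # assumed example weights
print(forward([1.0, 1.0, 1.0], W))    # net = 1.5   -> output 1
print(forward([0.5, 0.1, 0.1], W))    # net = -0.05 -> output 0
```

With these assumed weights the input [0.5, 0.1, 0.1] falls below the threshold and produces 0, matching the behaviour described for Iᵀ above.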


To explain the back-propagation algorithm, a simple feed-forward neural network of three layers (input, hidden and output) is used. The network inputs will be denoted by x_i, the outputs of the hidden neurons by h_j and those of the output neurons by y_k. The weights of the links between the input and the hidden layer are written as w_ij, where i refers to the number of the input neuron and j to the number of the hidden neuron. The weights of the links between the hidden and the output layer are denoted as w_jk; again, j stands for the number of the hidden neuron and k for the number of the output neuron. An example network with this notation is shown in Fig. 27.3. This network has three input neurons, three hidden neurons and two output neurons; in this case the input layer simply passes on the inputs, i.e. I_i = x_i. [Pg.364]
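A minimal sketch of the forward pass through this 3-3-2 network, followed by one back-propagation weight update, is given below. The logistic (sigmoid) transfer function, the random initial weights, the target values t and the learning rate are all assumptions for illustration; the excerpt fixes only the notation and the layer sizes:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
w_ij = rng.normal(size=(3, 3))      # weights input i -> hidden j
w_jk = rng.normal(size=(3, 2))      # weights hidden j -> output k

# Forward pass: the input layer passes x through unchanged.
x = np.array([0.5, 0.1, 0.1])
h = sigmoid(x @ w_ij)               # hidden-layer outputs h_j
y = sigmoid(h @ w_jk)               # network outputs y_k

# One back-propagation step for a squared-error loss (illustrative only).
t = np.array([1.0, 0.0])            # assumed target values
delta_k = (y - t) * y * (1 - y)     # output-layer error terms
delta_j = (delta_k @ w_jk.T) * h * (1 - h)  # errors propagated back to hidden layer
eta = 0.5                           # assumed learning rate
w_jk -= eta * np.outer(h, delta_k)  # update hidden -> output weights
w_ij -= eta * np.outer(x, delta_j)  # update input -> hidden weights
```

Repeating the forward pass and this update over many training examples is, in essence, the back-propagation algorithm discussed in the text.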

As a chemometric quantitative modeling technique, ANN stands far apart from all of the regression methods mentioned previously, for several reasons. First of all, the model structure cannot be easily shown using a simple mathematical expression, but rather requires a map of the network architecture. A simplified example of a feed-forward neural network architecture is shown in Figure 8.17. Such a network structure basically consists of three layers, each of which represents a set of data values and, possibly, data processing instructions. The input layer contains the inputs to the model (I1-I4). [Pg.264]
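To make the point concrete, the sketch below fits a small feed-forward network to synthetic calibration data with four inputs, in the spirit of Figure 8.17; the layer sizes, the scikit-learn estimator and the made-up data are all assumptions, not details from the source. Unlike a regression equation, the resulting "model" is a set of fitted weight matrices:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 4))        # 50 samples, four inputs (I1-I4)
y = X @ np.array([0.4, -0.2, 0.7, 0.1]) + 0.05 * rng.normal(size=50)

model = MLPRegressor(hidden_layer_sizes=(3,), activation='tanh',
                     solver='lbfgs', max_iter=5000, random_state=1)
model.fit(X, y)

# The model structure is a map of weight matrices, not a simple expression:
print([w.shape for w in model.coefs_])   # [(4, 3), (3, 1)]
```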

As a last resort it is possible to apply neural networks (NN). NN can in principle model surfaces of any complexity. However, the number of experiments required is large. This, together with the fact that NN is a rather specialised technique, explains why the number of applications in the literature is limited. Examples are to be found in (70-72). In the latter application, two variables (pH and modifier content) are investigated for four chlorophenols, and the authors found that when 15 to 20 experiments are carried out, better results are obtained with a multi-layer feed-forward NN than with quadratic or third-order models. Although we believe that, for the optimization of separations, NN will prove practical in only a few cases, it seems useful to explain the first principles of the methodology here. A simple network is shown in Fig. 6.25. [Pg.208]
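The comparison described above can be sketched as follows: a quadratic response-surface model and a small feed-forward NN are both fitted to about 16 experiments in two variables (pH and modifier content). The response function and all numerical details here are invented stand-ins for the retention data of the cited study; only the workflow is illustrated:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
pH = rng.uniform(3.0, 9.0, size=16)          # assumed experimental design
mod = rng.uniform(0.1, 0.9, size=16)         # modifier content (fraction)
X = np.column_stack([pH, mod])
y = np.sin(pH / 2.0) + 2.0 * mod**2 + 0.05 * rng.normal(size=16)  # mock response

quad = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
net = MLPRegressor(hidden_layer_sizes=(4,), activation='tanh',
                   solver='lbfgs', max_iter=5000, random_state=2).fit(X, y)

print('quadratic R^2:', quad.score(X, y))    # fit quality of the two models
print('network   R^2:', net.score(X, y))
```

Which model wins depends entirely on the true shape of the response surface; the NN's advantage appears only when the surface is more complex than a second- or third-order polynomial can capture.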

