
Neural network hidden layers

Fig. 10.8 (a) Example of a common neural net (perceptron) architecture; one hidden layer is shown (Hierlemann et al., 1996). (b) A more sophisticated recurrent neural network utilizing adjustable feedback through recurrent variables. (c) Time-delayed neural network in which time has been utilized as an experimental variable... [Pg.326]

In theory, a neural network with one hidden layer is sufficient to describe all input/output relations. More hidden layers can be introduced to reduce the number of neurons compared with the number needed in a single-hidden-layer network. The same argument holds for the type of activation function and the choice of optimisation algorithm. However, the emphasis of this work is not on selecting the best neural network structure, activation function and training protocol, but on the application of neural networks as a means of non-linear function fitting. [Pg.58]
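As a rough illustration of using a single hidden layer for non-linear function fitting (not taken from the cited work), the sketch below fits y = sin(x) with one layer of tanh units and a linear output, trained by plain gradient descent; the data, layer size and learning rate are arbitrary choices.

import numpy as np

# Minimal sketch: fit y = sin(x) with a single-hidden-layer network
# (1 input, H tanh hidden units, 1 linear output) trained by plain
# gradient descent on the mean squared error. All values are illustrative.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

H, lr = 20, 0.1
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

for epoch in range(10000):
    h = np.tanh(X @ W1 + b1)            # hidden-layer activations
    y_hat = h @ W2 + b2                 # linear output
    g_out = (y_hat - y) / len(X)        # gradient of 0.5 * MSE w.r.t. the output
    g_hid = (g_out @ W2.T) * (1 - h**2)
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_hid); b1 -= lr * g_hid.sum(axis=0)

pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("final RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))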

Kolmogorov's Theorem (reformulated by Hecht-Nielsen): Any real-valued continuous function f defined on an N-dimensional cube can be implemented by a three-layered neural network consisting of 2N+1 neurons in the hidden layer, with transfer functions ψ from the input to the hidden layer and φ from all of... [Pg.549]
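The excerpt above breaks off mid-sentence; for reference, the reformulated theorem is usually written in the following superposition form (this restatement is supplied here, not taken from the cited page, and assumes the garbled transfer-function symbols are ψ and φ).

% Restatement of the theorem (supplied for completeness; \psi and \varphi
% stand in for the transfer-function symbols garbled in the excerpt):
\[
  f(x_1, \dots, x_N) \;=\; \sum_{j=1}^{2N+1} \varphi_j\!\Bigl( \sum_{i=1}^{N} \psi_{ij}(x_i) \Bigr),
  \qquad f \in C\bigl([0,1]^N\bigr),
\]
% i.e. any continuous real-valued function on the N-dimensional unit cube is
% represented exactly by a three-layer network with 2N+1 hidden neurons.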

Hartman, E., Keeler, J. D., and Kowalski, J. M., Layered neural networks with Gaussian hidden units as universal approximators, Neural Comput. 2, 210 (1990). [Pg.204]

Each set of mathematical operations in a neural network is called a layer, and the mathematical operations in each layer are called neurons. A simple two-layer neural network might take an unknown spectrum and pass it through a network in which the first layer, called a hidden layer, computes a basis function from the distance of the unknown to each reference signature spectrum, and the second layer, called an output layer, combines the basis functions into a final score for the unknown sample. [Pg.156]
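A minimal sketch of this kind of two-layer scoring network is given below; it is illustrative only, and the reference spectra, Gaussian width and output weights are invented placeholders rather than values from the cited text.

import numpy as np

# Hidden layer: turns the distance between an unknown spectrum and each
# reference signature spectrum into a Gaussian basis-function activation.
# Output layer: combines those activations into class scores.
rng = np.random.default_rng(1)
n_channels, n_refs, n_classes = 100, 5, 3

ref_spectra = rng.random((n_refs, n_channels))    # reference signature spectra
W_out = rng.normal(0.0, 0.1, (n_refs, n_classes)) # output-layer weights
sigma = 1.0                                       # basis-function width

def score(unknown):
    # hidden layer: Gaussian basis function of the distance to each reference
    d2 = np.sum((ref_spectra - unknown) ** 2, axis=1)
    hidden = np.exp(-d2 / (2.0 * sigma ** 2))
    # output layer: linear combination of the basis functions
    return hidden @ W_out

unknown_spectrum = rng.random(n_channels)
print("class scores:", score(unknown_spectrum))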

The FFBP distinguishes itself by the presence of one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or hidden units. The function of hidden neurons is to intervene between the external input and the network output in some useful manner. By adding one or more hidden layers, the network is enabled to extract higher-order statistics. In a rather loose sense, the network acquires a global perspective despite its local connectivity, due to the extra set of synaptic connections and the extra dimension of NN interconnections (Hagan and Menhaj, 1994). Figure 1 depicts the structure of a FFBP neural network. [Pg.423]
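For concreteness, a minimal sketch of the forward pass through such a fully connected feedforward network with an arbitrary number of hidden layers is shown below; the layer sizes, activations and weights are illustrative assumptions, not taken from the cited source.

import numpy as np

# Forward pass of a fully connected feedforward network with any number of
# hidden layers (sigmoid activations) and a linear output layer.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    # weights[k], biases[k] map layer k to layer k + 1
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(a @ W + b)                    # hidden layers
    return a @ weights[-1] + biases[-1]           # linear output layer

rng = np.random.default_rng(2)
sizes = [4, 8, 6, 2]                              # input, two hidden layers, output
weights = [rng.normal(0.0, 0.3, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
print(forward(rng.random(4), weights, biases))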

As a note of interest, Qin and McAvoy (1992) have shown that NNPLS models can be collapsed to multilayer perceptron architectures. In this case it was therefore possible to represent the best NNPLS model in the form of a single-hidden-layer neural network with 29 hidden nodes using tan-sigmoidal activation functions and an output layer of 146 nodes with purely linear functions. [Pg.443]
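The forward pass of a network with that 29/146 layout would look roughly as follows; the number of inputs is not stated in the excerpt, so it is assumed here, and the weights are random placeholders.

import numpy as np

# 29 tan-sigmoidal (tanh) hidden nodes feeding 146 purely linear output nodes.
rng = np.random.default_rng(3)
n_in, n_hidden, n_out = 10, 29, 146               # n_in is an assumption

W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

x = rng.random(n_in)
hidden = np.tanh(x @ W1 + b1)                     # tan-sigmoidal hidden layer
output = hidden @ W2 + b2                         # purely linear output layer
print(output.shape)                               # (146,)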

The cascade correlation architecture was proposed by Fahlman and Lebiere (1990). The process of network building starts with a one-layer neural network, and hidden neurons are added as needed. The network architecture is shown in Fig. 19.27. In each training step, a new hidden neuron is added and its weights are adjusted to maximize the magnitude... [Pg.2051]
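A much simplified sketch of that constructive procedure is given below: the output weights are refit by least squares, then one candidate hidden unit is trained to maximize the magnitude of its covariance with the remaining residual and frozen. In the full cascade-correlation algorithm each new unit also receives connections from all previously installed hidden units; that cascade wiring is omitted here, and the data and settings are arbitrary.

import numpy as np

# Simplified constructive loop: refit output weights, add one hidden unit.
rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])             # illustrative target

inp = np.column_stack([np.ones(len(X)), X])       # bias + raw inputs
features = [np.ones(len(X)), X[:, 0], X[:, 1]]    # features seen by the output

for n_units in range(5):
    A = np.column_stack(features)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)     # refit output weights
    resid = y - A @ w
    print(f"hidden units: {n_units}, training RMSE: {np.sqrt(np.mean(resid**2)):.4f}")

    resid_c = resid - resid.mean()
    v = rng.normal(0.0, 0.5, 3)                   # candidate unit's input weights
    for _ in range(500):
        h = np.tanh(inp @ v)
        cov = np.mean(h * resid_c)
        grad = inp.T @ ((1 - h**2) * resid_c) / len(X)
        v += 0.2 * np.sign(cov) * grad            # ascend on the magnitude of cov
    features.append(np.tanh(inp @ v))             # install and freeze the unit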

In spite of being actually partitioned into L+1 layers, a neural network with such an architecture is conventionally called an L-layer network (due to the fact that signals undergo transformations only in the layers of hidden and output neurons, not in the input layer). In particular, a one-layer network is a layered neural network without hidden neurons, whereas a two-layer network is a neural network in which only connections from input to hidden neurons and from hidden to output neurons are possible. [Pg.83]

A neural network approach was also applied for water solubility using the same training set of 331 organic compounds [6, 7]. A three-layer neural network was used. It contained 17 input neurons and one output neuron, and the number of hidden units was varied in order to determine the optimum architecture. Best results were obtained with five hidden units. The standard deviation from this neural network approach is 0.27, slightly better than that of the regression analysis, 0.30. The predictive power of this model (0.34) is also slightly better than that of regression analysis (0.36). [Pg.582]
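The kind of hidden-layer scan described above might be organized as in the sketch below; the data are random placeholders standing in for the 331-compound set, and the training loop, epochs and learning rate are arbitrary.

import numpy as np

# Hidden-layer-size scan for a three-layer network with 17 inputs, 1 output.
rng = np.random.default_rng(5)
X = rng.normal(size=(331, 17))
y = np.tanh(X[:, :3].sum(axis=1)) + 0.1 * rng.normal(size=331)  # fake target
X_tr, X_te, y_tr, y_te = X[:265], X[265:], y[:265], y[265:]

def train(n_hidden, epochs=2000, lr=0.05):
    W1 = rng.normal(0.0, 0.3, (17, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.3, (n_hidden, 1)); b2 = np.zeros(1)
    t = y_tr.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X_tr @ W1 + b1)
        g = (h @ W2 + b2 - t) / len(X_tr)
        gh = (g @ W2.T) * (1 - h**2)
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
        W1 -= lr * (X_tr.T @ gh); b1 -= lr * gh.sum(axis=0)
    return W1, b1, W2, b2

for n_hidden in (2, 5, 10):
    W1, b1, W2, b2 = train(n_hidden)
    pred = (np.tanh(X_te @ W1 + b1) @ W2 + b2).ravel()
    print(f"{n_hidden} hidden units: residual s.d. {np.std(pred - y_te):.3f}")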

Figure 9-16. Artificial neural network architecture with a two-layer design, comprising input units, a so-called hidden layer, and an output layer. The squares enclosing the ones depict the bias, which is an extra weight (see Ref. [10] for further details).
The neural network shown in Figure 10.24 is in the process of being trained using a BPA. The current inputs x1 and x2 have values of 0.2 and 0.6 respectively, and the desired output d1 = 1. The existing weights and biases are: Hidden layer... [Pg.355]
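Since the excerpt does not reproduce the existing weights and biases, the sketch below runs one back-propagation step for a network of this shape using the quoted inputs and target but placeholder weights and an assumed learning rate.

import numpy as np

# One back-propagation step for a 2-input, single-hidden-layer, single-output
# network with x1 = 0.2, x2 = 0.6 and desired output d1 = 1. The weights,
# biases and learning rate are placeholders, not the worked-example values.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, 0.6])
d = 1.0
eta = 0.5                                   # assumed learning rate

W1 = np.array([[0.1, 0.4], [0.3, -0.2]])    # placeholder hidden-layer weights
b1 = np.array([0.05, -0.1])
W2 = np.array([0.2, -0.3])                  # placeholder output-layer weights
b2 = 0.1

# forward pass
h = sigmoid(x @ W1 + b1)
y = sigmoid(h @ W2 + b2)

# backward pass (sigmoid activations, squared-error cost)
delta_out = (d - y) * y * (1 - y)
delta_hid = delta_out * W2 * h * (1 - h)

# weight and bias updates
W2 += eta * delta_out * h
b2 += eta * delta_out
W1 += eta * np.outer(x, delta_hid)
b1 += eta * delta_hid

print("output before the update:", y)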

Fig. 10.43 Neural network structure for Example 10.9.
In neural network design, the above parameters have no precise answers, because they depend on the particular application. However, the question is worth addressing. In general, the more patterns and the fewer hidden neurons used, the better the network. It should be realized that there is a subtle relationship between the number of patterns and the number of hidden-layer neurons. Having too few patterns or too many hidden neurons can cause the network to memorize. When memorization occurs, the network performs well during training but tests poorly with a new data set. [Pg.9]
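One common way to catch the memorization described above is to monitor the error on a held-out set while training and stop once it no longer improves, as in the rough sketch below; the data, network size and stopping rule are all illustrative assumptions.

import numpy as np

# Train on 40 patterns with a deliberately generous hidden layer, while
# watching the error on 20 held-out validation patterns.
rng = np.random.default_rng(6)
X = rng.uniform(-1.0, 1.0, (60, 2))
y = (X[:, 0] * X[:, 1] + 0.05 * rng.normal(size=60)).reshape(-1, 1)
X_tr, y_tr, X_va, y_va = X[:40], y[:40], X[40:], y[40:]

H, lr = 30, 0.1
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

best_va, best_epoch, patience = np.inf, 0, 0
for epoch in range(20000):
    h = np.tanh(X_tr @ W1 + b1)
    g = (h @ W2 + b2 - y_tr) / len(X_tr)
    gh = (g @ W2.T) * (1 - h**2)
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X_tr.T @ gh); b1 -= lr * gh.sum(axis=0)

    va_err = np.mean((np.tanh(X_va @ W1 + b1) @ W2 + b2 - y_va) ** 2)
    if va_err < best_va - 1e-6:
        best_va, best_epoch, patience = va_err, epoch, 0
    else:
        patience += 1
        if patience > 500:                   # validation error stopped improving
            break

print(f"best validation MSE {best_va:.4f} at epoch {best_epoch}")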

The specific volumes of all nine siloxanes were predicted as a function of temperature and the number of monofunctional units, M, and difunctional units, D. A simple 3-4-1 neural network architecture with just one hidden layer was used. The three input nodes were for the number of M groups, the number of D groups, and the temperature. The hidden layer had four neurons. The predicted variable was the specific volume of the silox-... [Pg.11]

Figure 6 Schematic of a typical neural network training process. I - input layer; H - hidden layer; O - output layer; B - bias neuron.
Viscosities of the siloxanes were predicted over a temperature range of 298-348 K. The semi-log plot of viscosity as a function of temperature was linear for the ring compounds. However, for the chain compounds, the viscosity increased rapidly with an increase in the chain length of the molecule. A simple 2-4-1 neural network architecture was used for the viscosity predictions. The molecular configuration was not considered here because of the direct positive effect of addition of both M and D groups on viscosity. The two input variables, therefore, were the siloxane type and the temperature level. Only one hidden layer with four nodes was used. The predicted variable was the viscosity of the siloxane. [Pg.12]

A very simple 2-4-1 neural network architecture with two input nodes, one hidden layer with four nodes, and one output node was used in each case. The two input variables were the number of methylene groups and the temperature. Although neural networks have the ability to learn all the differences, differentials, and other calculated inputs directly from the raw data, the training time for the network can be reduced considerably if these values are provided as inputs. The predicted variable was the density of the ester. The neural network model was trained for discrete numbers of methylene groups over the entire temperature range of 300-500 K. The... [Pg.15]

Cybenko, G., Continuous valued neural networks with two hidden layers are sufficient, Technical Report, Department of Computer Science, Tufts University (1988). [Pg.98]

A feedforward neural network brings together several of these little processors in a layered structure (Figure 9). The network in Figure 9 is fully connected, which means that every neuron in one layer is connected to every neuron in the next layer. The first layer actually does no processing; it merely distributes the inputs to a hidden layer of neurons. These neurons process the input and then pass the result of their computation on to the output layer. If there is a second hidden layer, the process is repeated until the output layer is reached. [Pg.370]

FIGURE 4.29 Schematic of a neural network with a single hidden layer. [Pg.186]

