
Neural hidden unit

Fig. 44.8. (a) The structure of the neural network, for solving the XOR classification problem of Fig. 44.7. (b) The two boundary lines as defined by the hidden units in the input space (x1, x2). (c) Representation of the objects in the space defined by the output values of the two hidden units (hu1, hu2) and the boundary line defined in this space by the output unit. The two objects of class A are at the same location. [Pg.661]
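As an illustration of how two hidden units can carve up the XOR input space, the sketch below implements a 2-2-1 network with hard-threshold units. The weights and biases are hand-chosen for the example (they are not taken from Fig. 44.8): each hidden unit defines one boundary line in the (x1, x2) plane, and the output unit defines a single line in the (hu1, hu2) plane.

```python
import numpy as np

def step(z):
    """Hard-threshold activation: 1 if z > 0, else 0."""
    return (z > 0).astype(float)

# Hand-chosen weights for a 2-2-1 network that solves XOR (illustrative values).
W_hidden = np.array([[1.0, 1.0],    # hu1 fires when x1 + x2 - 0.5 > 0
                     [1.0, 1.0]])   # hu2 fires when x1 + x2 - 1.5 > 0
b_hidden = np.array([-0.5, -1.5])
w_out = np.array([1.0, -1.0])       # output unit fires when hu1 - hu2 - 0.5 > 0
b_out = -0.5

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
H = step(X @ W_hidden.T + b_hidden)   # hidden-unit outputs (hu1, hu2)
y = step(H @ w_out + b_out)           # output-unit classification

print(H)   # the two class-A points (0,1) and (1,0) map to the same location (1, 0)
print(y)   # XOR labels: [0, 1, 1, 0]
```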

When the MLF (multilayer feed-forward) network is used for classification, its non-linear properties are also important. In Fig. 44.12c the contour map of the output of a neural network with two hidden units is shown. It shows clearly that non-linear boundaries are obtained. Totally different boundaries are obtained by varying the weights, as shown in Fig. 44.12d. For modelling as well as for classification tasks, the appropriate number of transfer functions (i.e. the number of hidden units) thus depends essentially on the complexity of the relationship to be modelled and must be determined empirically for each problem. Other functions, such as the hyperbolic tangent (tanh) function (Fig. 44.13a), are also sometimes used. In Ref. [19] the authors concluded that in most cases a sigmoidal function describes non-linearities sufficiently well. Only in the presence of periodicities in the data... [Pg.669]
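The kind of contour map described above can be reproduced with a short sketch like the following, which evaluates a two-hidden-unit network over a grid of input points for two arbitrary weight sets. The weights, grid, and sigmoid output unit are assumptions made for illustration, not the networks of Fig. 44.12; the hidden-layer transfer function is passed as an argument so that tanh can be swapped in for the sigmoid.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlf_output(X, W_h, b_h, w_o, b_o, transfer=sigmoid):
    """Output of a one-hidden-layer network; transfer may be sigmoid or np.tanh."""
    return sigmoid(transfer(X @ W_h.T + b_h) @ w_o + b_o)

# Evaluate the network output on a grid of the two input variables,
# as in a contour map of the decision surface.
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = np.column_stack([xx.ravel(), yy.ravel()])

for seed in (1, 2):                                 # two arbitrary weight sets
    rng = np.random.default_rng(seed)
    W_h, b_h = rng.normal(size=(2, 2)), rng.normal(size=2)
    w_o, b_o = rng.normal(size=2), rng.normal()
    z = mlf_output(grid, W_h, b_h, w_o, b_o)        # contour values in [0, 1]
    print(f"weight set {seed}: class-1 area fraction = {np.mean(z > 0.5):.2f}")
```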

For fitting a neural network, it is often recommended to optimize the values of λ via cross-validation (CV). An important issue for the number of parameters is the choice of the number of hidden units, i.e., the number of variables that are used in the hidden layer (see Section 4.8.3). Typically, 5-100 hidden units are used, with the number increasing with the number of training data variables. We will demonstrate in a simple example how the results change for different numbers of hidden units and different values of λ. [Pg.236]
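A minimal sketch of such an optimization is shown below, using scikit-learn as an assumed library choice: its `alpha` parameter plays the role of the weight-decay parameter λ, and a grid search runs 5-fold cross-validation over combinations of hidden-layer size and weight decay on a synthetic two-group data set.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Synthetic two-group, two-dimensional data as a stand-in for the example data.
X, y = make_moons(n_samples=300, noise=0.3, random_state=0)

param_grid = {
    "hidden_layer_sizes": [(5,), (20,), (100,)],   # number of hidden units
    "alpha": [1e-4, 1e-2, 1.0],                    # weight decay (lambda)
}
search = GridSearchCV(MLPClassifier(max_iter=2000, random_state=0),
                      param_grid, cv=5)            # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```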

FIGURE 5.18 Classification with neural network for two groups of two-dimensional data. The training data are shown with the symbol corresponding to the group membership. Any new data point would be classified according to the presented decision boundaries. The number of hidden units and the weight decay were varied. [Pg.237]

Let's consider the neural network for recognition of attacks. This network is a multilayer perceptron with 6 input units, 40 hidden units and 23 output... [Pg.375]
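A sketch of this 6-40-23 architecture is given below; the layer sizes come from the text, while the tanh hidden units, softmax output, and random placeholder weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight shapes for the 6-40-23 multilayer perceptron described above
# (values are random placeholders; a trained network would learn them).
W1, b1 = rng.normal(size=(40, 6)), np.zeros(40)     # input -> hidden
W2, b2 = rng.normal(size=(23, 40)), np.zeros(23)    # hidden -> output

def forward(x):
    h = np.tanh(W1 @ x + b1)                        # 40 hidden-unit activations
    z = W2 @ h + b2
    return np.exp(z) / np.exp(z).sum()              # probabilities over 23 output classes

print(forward(rng.normal(size=6)).shape)            # (23,)
```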

One of the early problems with multilayer perceptrons was that it was not clear how to train them. The perceptron training rule doesn't apply directly to networks with hidden layers. Fortunately, Rumelhart and others (Rumelhart et al., 1986) devised an intuitive method that quickly became adopted and revolutionized the field of artificial neural networks. The method is called back-propagation because it computes the error term as described above and propagates the error backward through the network so that weights to and from hidden units can be modified in a fashion similar to the delta rule for perceptrons. [Pg.55]
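A minimal back-propagation sketch for a one-hidden-layer network with sigmoid units is shown below, trained on the XOR patterns; the learning rate, initialization, and number of hidden units are arbitrary choices for illustration. The output error term is computed first and then propagated backward to obtain the hidden-layer error terms, so that both weight layers can be updated with the generalized delta rule.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])                  # XOR targets

W1, b1 = rng.normal(scale=0.5, size=(3, 2)), np.zeros(3)    # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(1, 3)), np.zeros(1)    # hidden -> output
lr = 0.5

for epoch in range(20000):
    # forward pass
    h = sigmoid(X @ W1.T + b1)                  # hidden-unit outputs
    y = sigmoid(h @ W2.T + b2)                  # network output
    # backward pass: output error first, then propagate it to the hidden layer
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (delta_out @ W2) * h * (1 - h)
    # gradient-descent updates of both weight layers (generalized delta rule)
    W2 -= lr * delta_out.T @ h / len(X);  b2 -= lr * delta_out.mean(axis=0)
    W1 -= lr * delta_hid.T @ X / len(X);  b1 -= lr * delta_hid.mean(axis=0)

print(np.round(y.ravel(), 2))   # should approach [0, 1, 1, 0]; XOR can occasionally stall
```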

Setiono, R. (1997a). Extracting rules from neural networks by pruning and hidden-unit splitting. Neural Comput. 9, 205-225. [Pg.158]

Hendler s (1991) hybrid model combines a semantic network with a neural network, as shown in Figure 12.5. In essence, this model depends upon the neural net to learn the internal representations (i.e., the essential microfeatures) that are associated with a set of input stimuli. Thus, the model develops the hidden unit layer of the network as well as the weights connecting the hidden units to the output units. After the network has settled (i.e., has learned to classify the inputs appropriately), the top two layers of units are accessed by the semantic network model by means of spreading activation. Thus, the nodes of the neural net communicate with the nodes of the semantic net. [Pg.337]

The FFBP distinguishes itself by the presence of one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or hidden units. The function of hidden neurons is to intervene between the external input and the network output in some useful manner. By adding one or more hidden layers, the network is enabled to extract higher-order statistics. In a rather loose sense, the network acquires a global perspective despite its local connectivity, due to the extra set of synaptic connections and the extra dimension of NN interconnections (Hagan and Menhaj, 1994). Figure 1 depicts the structure of a FFBP neural network. [Pg.423]

Renals and Rohwer (7) found similar results. Their study, an examination of neural networks for recognizing vowels, showed that hidden units learned to respond selectively to members of the stimulus set. [Pg.66]

These studies suggest that studying post-training hidden unit activity is a valuable technique for inferring what a network has learned. Builders of neural networks might do well to consider this technique as standard operating procedure in evaluating their networks' performance. [Pg.66]
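A sketch of this kind of post-training inspection is given below; scikit-learn and the iris data are assumed stand-ins for the stimulus sets used in the cited studies. After fitting, the hidden-unit activations are recomputed for every input pattern and averaged per class, so that selectively responding units show a large contrast between classes.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(4,), activation="logistic",
                    max_iter=5000, random_state=0).fit(X, y)

# Post-training hidden-unit activity for every input pattern.
hidden = 1.0 / (1.0 + np.exp(-(X @ net.coefs_[0] + net.intercepts_[0])))

# Mean activity of each hidden unit per class: large contrasts indicate a unit
# that has learned to respond selectively to one group of stimuli.
for c in np.unique(y):
    print(c, np.round(hidden[y == c].mean(axis=0), 2))
```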

Hartman, E., Keeler, K., and Kowalski, J. K., Layered Neural Networks with Gaussian hidden units as universal approximators. Neural Comput. 2, 210 (1990). [Pg.189]

P. Gorman and T. Sejnowski, Neural Networks, 1, 75 (1988). Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets. [Pg.130]

A neural network approach was also applied for water solubility using the same training set of 331 organic compounds [6, 7]. A three-layer neural network was used. It contained 17 input neurons and one output neuron, and the number of hidden units was varied in order to determine the optimum architecture. Best results were obtained with five hidden units. The standard deviation from this neural network approach is 0.27, slightly better than that of the regression analysis, 0.30. The predictive power of this model (0.34) is also slightly better than that of regression analysis (0.36). [Pg.582]
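The sketch below mirrors this procedure on synthetic stand-in data; the 331-compound set and its 17 descriptors are not reproduced, and scikit-learn is an assumed library choice. It cross-validates the root-mean-squared error while the number of hidden units is varied.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 331 "compounds" with 17 descriptor variables and one response (synthetic data).
X, y = make_regression(n_samples=331, n_features=17, noise=5.0, random_state=0)

for n_hidden in (2, 5, 10, 20):                       # vary the hidden-layer size
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(n_hidden,),
                                     max_iter=5000, random_state=0))
    mse = -cross_val_score(net, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"{n_hidden:2d} hidden units: CV RMSE = {np.sqrt(mse).mean():.2f}")
```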

