
Sigmoid transfer function

The net signal is then modified by a so-called transfer function and sent as output to other neurons. The most widely used transfer function is sigmoidal: it has two plateau areas with the values zero and one, and between them a region in which it increases nonlinearly. Figure 9-15 shows an example of a sigmoidal transfer function. [Pg.453]
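
A minimal numerical sketch (Python with NumPy, not taken from the cited text) illustrates these two plateaus: the logistic sigmoid saturates near 0 for large negative net signals and near 1 for large positive ones, with a nonlinear rise in between.

```python
import numpy as np

def sigmoid(net):
    """Logistic sigmoid transfer function: maps any net signal into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-net))

# Sample the function over a range of net signals to see the two plateaus.
net = np.linspace(-10, 10, 9)
for x, y in zip(net, sigmoid(net)):
    print(f"net = {x:6.1f}  ->  output = {y:.4f}")
```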

Figure 9-15. Sigmoidal transfer function, in which the area between the plateaus does not increase linearly.
Fig. 6.18. Schematic representation of a multilayer perceptron with two input neurons, three hidden neurons (with sigmoid transfer functions), and two output neurons (also with sigmoid transfer functions).
One of the best methods for analyzing the responses from the array is an ANN [24]. The common back-propagation ANN, with one input per sensor variable, one hidden layer, and one or more output nodes for the predicted parameter(s), is most often used. A sigmoidal transfer function is applied to the input responses ... [Pg.71]
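
As a hedged illustration of the architecture this excerpt describes (one input per sensor variable, one hidden layer, sigmoidal transfer functions), the NumPy sketch below trains such a network by back-propagation on made-up sensor responses; the layer size, learning rate, and data are assumptions for the example, not values from reference [24].

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy data: 50 samples from a 4-sensor array, one predicted parameter.
X = rng.uniform(0.0, 1.0, size=(50, 4))                      # sensor responses
y = sigmoid(X @ np.array([1.5, -2.0, 0.7, 1.1]))[:, None]    # synthetic target

# One hidden layer with 6 sigmoid neurons (an assumed size) and a sigmoid output node.
W1, b1 = rng.normal(scale=0.5, size=(4, 6)), np.zeros(6)
W2, b2 = rng.normal(scale=0.5, size=(6, 1)), np.zeros(1)
lr = 0.5

for _ in range(2000):
    # Forward pass through both sigmoid layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Back-propagate the squared error through the sigmoid derivatives.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print("final mean squared error:", float(np.mean((out - y) ** 2)))
```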

The number of units in the input layer is determined by the number of features in the input vector, and the number of units in the output layer by the number of output categories required. However, it is important to note that there is no best way to determine the number of hidden layers and the number of units in each layer. Theoretical work shows that a network with sigmoid transfer functions and one hidden layer, given an appropriate number of units, can represent any desired transformation from input to output within a given range of desired precision (Hornik et al., 1989). Most... [Pg.35]
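
The representation result cited here (Hornik et al., 1989) concerns networks of the following single-hidden-layer form; the expression below is a standard statement of that result with sigmoid transfer function σ, added for reference rather than quoted from the excerpt:

\[
  f(\mathbf{x}) \;\approx\; \sum_{k=1}^{H} v_k \,
  \sigma\!\Bigl(\sum_{i=1}^{n} w_{ki}\, x_i + b_k\Bigr),
  \qquad
  \sigma(z) = \frac{1}{1 + e^{-z}},
\]

where H is the number of hidden units; with H large enough, such a network can approximate any continuous input-output mapping on a bounded domain to a given precision.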

An example of a non-covalent MIP sensor array is shown in Fig. 21.14. Xylene-imprinted poly(styrenes) (PSt) and poly(methacrylates) (PMA) with 70 and 85% cross-linker have been used for the detection of o- and p-xylene. Detection was performed in the presence of 20-60% relative humidity to simulate environmental conditions. In contrast to the calixarene/urethane layers mentioned before, p-xylene-imprinted PSts still show a better sensitivity to o-xylene. The inversion of the xylene sensitivities can be achieved with PMAs and higher cross-linker ratios. Because of the humidity, multivariate calibration of the array with partial least squares (PLS) and artificial neural networks (ANN) is performed. The evaluated xylene detection limits are in the lower ppm range (Table 21.2), and neural networks with back-propagation training and sigmoid transfer functions provide more accurate o- and p-xylene concentrations than PLS analyses. [Pg.524]
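
A sketch of how such a multivariate calibration comparison might be set up in Python with scikit-learn follows; the sensor matrix, concentration range, component count, and network size are placeholders for illustration, not the values used for the MIP array in the cited study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Placeholder data: responses of an 8-element sensor array (columns) to mixtures
# characterised by two analyte concentrations (o- and p-xylene, assumed ppm range).
C = rng.uniform(0.0, 50.0, size=(200, 2))
R = C @ rng.uniform(0.5, 2.0, size=(2, 8)) + rng.normal(0.0, 0.5, size=(200, 8))

R_train, R_test, C_train, C_test = train_test_split(R, C, random_state=0)

# Linear multivariate calibration: partial least squares.
pls = PLSRegression(n_components=2).fit(R_train, C_train)

# Nonlinear calibration: back-propagation-trained network with sigmoid hidden units.
ann = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   max_iter=5000, random_state=0).fit(R_train, C_train)

print("PLS R^2:", pls.score(R_test, C_test))
print("ANN R^2:", ann.score(R_test, C_test))
```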

Fig. 4. The architecture of (a) a single layer neural network with the sigmoidal transfer function, as well as the wavelet neural network for (b) IR spectral data compression and (c) pattern recognition in UV-VIS spectroscopy.
The network was formulated as a two-layered log-sigmoid/log-sigmoid network, in which the log-sigmoid transfer function was employed because its output range matches the binary output values to be learned, i.e. 0 and 1. The hidden layer had 29 neurons after trial tests (for details, see Table A1.1 in the Appendix). In order to identify the class for each input vector, the network was trained to output a value of 1 in the correct position of the output vector and to fill the rest of the output vector with 0's. Since exact 1's and 0's cannot be produced by the network output during the simulation process, it was necessary to pass the output through the competitive transfer function compete to ensure that the winning output value is 1 while all others are 0. [Pg.50]
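
The post-processing step described here (forcing a single 1 in the winning position and 0 elsewhere) can be sketched in NumPy as follows; this mirrors the idea of a competitive transfer function rather than reproducing the toolbox routine used in the study.

```python
import numpy as np

def logsig(z):
    """Log-sigmoid transfer function with output range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def compete(outputs):
    """Winner-take-all step: 1 at the position of the largest output, 0 elsewhere."""
    hard = np.zeros_like(outputs)
    hard[np.arange(outputs.shape[0]), np.argmax(outputs, axis=1)] = 1.0
    return hard

# Raw log-sigmoid outputs never reach exactly 0 or 1 ...
raw = logsig(np.array([[ 2.1, -0.4, 0.3],
                       [-1.0,  1.7, 0.2]]))
# ... so the competitive step converts them into a clean class indicator.
print(compete(raw))
```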

Fig. 2. Typical sigmoidal transfer function, where I = Σᵢ wᵢxᵢ is the weighted sum of the inputs.
In the case of an ANN, nonlinear transfer functions are used in the hidden neurons. The output of the k-th hidden neuron with a log-sigmoid transfer function can be expressed as follows ... [Pg.41]
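
The equation itself is cut off in the excerpt; the standard form of the output of the k-th hidden neuron with a log-sigmoid transfer function, given here only as a reference and not as the source's exact notation, is

\[
  o_k = \frac{1}{1 + \exp\!\Bigl(-\bigl(\sum_{i} w_{ki}\, x_i + b_k\bigr)\Bigr)},
\]

where x_i are the inputs, w_{ki} the weights connecting input i to hidden neuron k, and b_k the bias of that neuron.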

