
Neural network sigmoid function

A sigmoid (S-shaped) function is a continuous, monotonically increasing function that is differentiable at all points. Here $y_{i,p}$ is the transformed output, asymptotic to $0 < y_{i,p} < 1$, and $u_{i,p}$ is the summed total of the inputs ($-\infty < u_{i,p} < +\infty$) for pattern $p$. Hence, when the neural network is presented with a set of input data, each neuron sums all the inputs modified by the corresponding connection weights and applies the transfer function to the summed total. This process is repeated until the network outputs are obtained. [Pg.3]
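A minimal sketch of this summation-and-transfer step; the weights, bias, and input values below are illustrative, not taken from the text:

```python
import numpy as np

def sigmoid(u):
    """Logistic sigmoid: continuous, monotonically increasing, and
    differentiable everywhere; outputs lie in the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-u))

def neuron_output(x, w, bias=0.0):
    """Sum the inputs modified by the connection weights, then apply
    the sigmoid transfer function to the summed total u."""
    u = np.dot(w, x) + bias      # -inf < u < +inf
    return sigmoid(u)            # 0 < y < 1

# One neuron with three inputs (hypothetical values)
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.8, 0.3, -0.5])
print(neuron_output(x, w, bias=0.1))
```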

When the MLF is used for classification, its non-linear properties are also important. In Fig. 44.12c the contour map of the output of a neural network with two hidden units is shown. It shows clearly that non-linear boundaries are obtained. Totally different boundaries are obtained by varying the weights, as shown in Fig. 44.12d. For modelling as well as for classification tasks, the appropriate number of transfer functions (i.e. the number of hidden units) thus depends essentially on the complexity of the relationship to be modelled and must be determined empirically for each problem. Other functions, such as the hyperbolic tangent function (Fig. 44.13a), are also sometimes used. In Ref. [19] the authors concluded that in most cases a sigmoidal function describes non-linearities sufficiently well. Only in the presence of periodicities in the data... [Pg.669]
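To make the role of the hidden units concrete, here is a sketch of an MLF with two inputs and two sigmoidal hidden units; the weight values are invented, and choosing different ones reshapes the non-linear boundary, as in Figs. 44.12c-d:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def mlf_output(x, W_h, b_h, w_o, b_o):
    """Two-input MLF with two sigmoidal hidden units; the output
    traces non-linear contours over the (x1, x2) plane."""
    h = sigmoid(W_h @ x + b_h)       # two hidden-unit activations
    return sigmoid(w_o @ h + b_o)    # network output

# Invented weights; a different choice gives totally different boundaries
W_h, b_h = np.array([[4.0, -2.0], [-3.0, 5.0]]), np.array([-1.0, 0.5])
w_o, b_o = np.array([3.0, -3.0]), 0.2
for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print(x1, x2, mlf_output(np.array([x1, x2]), W_h, b_h, w_o, b_o))
```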

Other sigmoidal functions, such as the hyperbolic tangent function, are also commonly used. Finally, Radial Basis Function neural networks, to be described later, use a symmetric function, typically a Gaussian function. [Pg.25]
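For comparison, a short sketch of the three transfer functions just mentioned; the RBF center and width are illustrative parameters:

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))    # sigmoid, range (0, 1)

def hyperbolic_tangent(u):
    return np.tanh(u)                  # sigmoidal, range (-1, 1)

def gaussian_rbf(x, center, width=1.0):
    """Symmetric basis function used by RBF networks: the response
    peaks at the center and decays with distance from it."""
    return np.exp(-np.sum((np.asarray(x) - center) ** 2) / (2 * width ** 2))
```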

Predictive models are built with ANNs in much the same way as they are with MLR and PLS methods: descriptors and experimental data are used to fit (or "train", in machine-learning nomenclature) the parameters of the functions until the performance error is minimized. Neural networks differ from the previous two methods in that (1) the sigmoidal shapes of the neurons' output equations better allow them to model non-linear systems, and (2) they are "subsymbolic", which is to say that the information in the descriptors is effectively scrambled once the internal weights and thresholds of the neurons are trained, making it difficult to examine the final equations to interpret the influences of the descriptors on the property of interest. [Pg.368]
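A minimal sketch of this fitting (training) process, assuming synthetic descriptor data and plain gradient descent on the squared error; everything here is illustrative rather than any specific published procedure:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))    # descriptors: 100 samples, 4 features
y = rng.normal(size=100)         # experimental property values (synthetic)

# One sigmoidal hidden layer, one linear output node
W1, b1 = 0.1 * rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = 0.1 * rng.normal(size=8), 0.0

for epoch in range(500):
    H = sigmoid(X @ W1.T + b1)   # hidden activations
    err = H @ w2 + b2 - y        # performance error to be minimized
    # Gradient of the mean squared error, backpropagated through the net
    gH = np.outer(err, w2) * H * (1.0 - H)
    w2 -= 0.5 * (H.T @ err) / len(y)
    b2 -= 0.5 * err.mean()
    W1 -= 0.5 * (gH.T @ X) / len(y)
    b1 -= 0.5 * gH.mean(axis=0)
```

After training, the descriptor information is distributed across W1 and w2, so inspecting the trained weights says little about any single descriptor's influence, which is the "subsymbolic" point made above.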

An example of a non-covalent MIP sensor array is shown in Fig. 21.14. Xylene-imprinted poly(styrenes) (PSt) and poly(methacrylates) (PMA) with 70 and 85% cross-linker have been used for the detection of o- and p-xylene. The detection has been performed in the presence of 20-60% relative humidity to simulate environmental conditions. In contrast to the calixarene/urethane layers mentioned before, p-xylene-imprinted PSts still show a better sensitivity to o-xylene. The inversion of the xylene sensitivities can be achieved with PMAs and higher cross-linker ratios. As a consequence of the humidity, multivariate calibration of the array is performed with partial least squares (PLS) and artificial neural networks (ANN). The evaluated xylene detection limits are in the lower ppm range (Table 21.2), and neural networks with back-propagation training and sigmoid transfer functions provide more accurate o- and p-xylene concentrations than PLS analyses. [Pg.524]

Figure 14 Some commonly used threshold functions for neural networks: the Heaviside function (a), the linear function (b), and the sigmoidal function (c).
Figure 18 A neural network, comprising an input layer (I), a hidden layer (H), and an output layer (O). This is capable of correctly classifying the analytical data from Table 1. The required weighting coefficients are shown on each connection and the bias values for a sigmoidal threshold function are shown above each neuron.
Autoassociative neural networks provide a special five-layer network structure (Figure 3.6) that can implement nonlinear PCA by reducing variable dimensionality and producing a feature space map that retains the maximum possible amount of information from the original data set [150]. Autoassociative neural networks use conventional feedforward connections and sigmoidal or linear nodal transfer functions. [Pg.63]

Figure 3.6. Network architecture for determination of f nonlinear factors using an autoassociative neural network; σ indicates nodes with sigmoidal functions, σ/l indicates nodes with sigmoidal or linear functions [150].
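A sketch of the five-layer bottleneck architecture of Figure 3.6; the layer sizes and the choice of a linear bottleneck are illustrative assumptions based on the description above:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

class AutoassociativeNet:
    """input -> sigmoidal mapping -> bottleneck of f factors
    -> sigmoidal demapping -> output reconstructing the input."""
    def __init__(self, n_vars, n_map, f, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = 0.1 * rng.normal(size=(n_map, n_vars))   # mapping layer
        self.W2 = 0.1 * rng.normal(size=(f, n_map))        # to bottleneck
        self.W3 = 0.1 * rng.normal(size=(n_map, f))        # demapping layer
        self.W4 = 0.1 * rng.normal(size=(n_vars, n_map))   # to output

    def forward(self, x):
        h1 = sigmoid(self.W1 @ x)   # sigmoidal nodes
        t = self.W2 @ h1            # linear bottleneck: the f nonlinear factors
        h2 = sigmoid(self.W3 @ t)   # sigmoidal nodes
        return self.W4 @ h2, t      # reconstruction and factor scores

net = AutoassociativeNet(n_vars=10, n_map=6, f=2)
x_hat, scores = net.forward(np.ones(10))
```

Training such a network to reproduce its own inputs forces the bottleneck scores to retain the maximum possible amount of information, which is what makes it a nonlinear analogue of PCA.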
As suggested in reference [25], the traditional sigmoidal function can be replaced with the Morlet wavelet basis function $F_{DWT}$ in neural network analysis (Fig. 4(b)). When spectral data, X, are applied to this WNN system, a response or output value $Y_{DWT}$ is obtained as follows ... [Pg.248]
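A sketch of a single-hidden-layer WNN with a Morlet wavelet in place of the sigmoid, using one common parameterization of the wavelet; the dilation/translation parameter names and all numeric values are illustrative, not from reference [25]:

```python
import numpy as np

def morlet(t):
    """Morlet wavelet basis function, replacing the sigmoid
    (one common parameterization)."""
    return np.cos(1.75 * t) * np.exp(-t ** 2 / 2.0)

def wnn_output(x, w, a, b, v):
    """Each hidden node applies a dilated (a_j) and translated (b_j)
    Morlet wavelet to its weighted input; the output layer combines
    the hidden responses linearly."""
    hidden = morlet((w @ x - b) / a)   # one dilation/translation per node
    return v @ hidden

# Example: a 10-point "spectrum" X passed through 4 wavelet nodes
rng = np.random.default_rng(1)
x = rng.normal(size=10)
w = rng.normal(size=(4, 10))
a, b = np.ones(4), np.zeros(4)
v = rng.normal(size=4)
print(wnn_output(x, w, a, b, v))
```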

Fig. 4 The architecture of (a) a single-layer neural network with the sigmoidal transfer function, as well as the wavelet neural network for (b) IR spectral data compression and (c) pattern recognition in UV-VIS spectroscopy.
Ascertaining the number of neurons in the hidden layer. The hidden part of the network can consist of one layer or several. Kolmogorov's theorem has shown that a three-layer BP neural network with sigmoid neurons can approximate any continuous function, provided it has enough hidden nodes. The number of hidden layers is therefore set to 1 in the safety assessment model of the oil depot. [Pg.1206]

As a note of interest, Qin & McAvoy (1992) have shown that NNPLS models can be collapsed to multilayer perceptron architectures. In this case it was therefore possible to represent the best NNPLS model as a single-hidden-layer neural network with 29 hidden nodes using tan-sigmoidal activation functions and an output layer of 146 nodes with purely linear functions. [Pg.443]
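A sketch of that collapsed form: a tan-sigmoidal hidden layer feeding a purely linear output layer. The hidden and output sizes mirror the example in the text; the input size and all weight values are placeholders:

```python
import numpy as np

def collapsed_nnpls(x, W_h, b_h, W_o, b_o):
    """Single-hidden-layer network of the collapsed NNPLS form."""
    h = np.tanh(W_h @ x + b_h)   # 29 tan-sigmoidal hidden nodes
    return W_o @ h + b_o         # 146 purely linear output nodes

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 10, 29, 146   # n_in is an illustrative placeholder
W_h, b_h = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W_o, b_o = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)
y = collapsed_nnpls(rng.normal(size=n_in), W_h, b_h, W_o, b_o)
```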

The output is calculated in two steps: first, the input and output signals are delayed to different degrees; second, a nonlinear activation function f(·) (here a static neural network) estimates the output. In (Nelles 2001) a sigmoid function is proposed for the nonlinear activation function, which is used in this context. Other functions for nonlinear dynamic modeling, e.g. Hammerstein models, Wiener models, neural or wavelet networks, are also possible. [Pg.232]
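A minimal sketch of this two-step structure, assuming a NARX-style regressor of delayed signals and a single sigmoidal layer as the static network; all names and sizes are illustrative:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def dynamic_model_step(u_delays, y_delays, W, b, v, c):
    """Step 1: collect the delayed input and output signals.
    Step 2: a static sigmoidal network estimates the next output."""
    z = np.concatenate([u_delays, y_delays])   # delayed signals
    return v @ sigmoid(W @ z + b) + c

rng = np.random.default_rng(0)
W, b = rng.normal(size=(5, 4)), np.zeros(5)   # 4 delayed signals, 5 nodes
v, c = rng.normal(size=5), 0.0
y_next = dynamic_model_step(np.array([0.1, 0.2]),
                            np.array([0.0, -0.1]), W, b, v, c)
```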

In order to describe the nonlinear stress-strain relations of the marine soft soil, a single-hidden-layer BP model was set up using neural network technology. For the model, the input values are bias stress, confining pressure, and time, and the output value is the strain. The number of nodes in the input layer is therefore 3, and the number of nodes in the output layer is 1. The number of hidden-layer units ranges from 5 to 25 and needs to be determined from the training and fitting results. The neurons in the hidden layer use a sigmoid transfer function, the neurons of the... [Pg.453]
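A sketch of this hidden-size selection procedure, using scikit-learn's MLPRegressor with a logistic (sigmoid) hidden layer as a stand-in for the BP training described; the data here are random placeholders where real values would come from the soil tests:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: 3 inputs (bias stress, confining pressure, time),
# 1 output (strain); real data would come from the laboratory tests.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 3)), rng.normal(size=200)
X_val, y_val = rng.normal(size=(50, 3)), rng.normal(size=50)

best = None
for n_hidden in range(5, 26):             # candidate hidden-layer sizes
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                       activation="logistic",   # sigmoid transfer function
                       max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    score = net.score(X_val, y_val)       # fit quality on held-out data
    if best is None or score > best[1]:
        best = (n_hidden, score)
print("selected hidden units:", best[0])
```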

