
Neural network threshold

L. Wang and J. Ross, Synchronous neural networks of nonlinear threshold elements with hysteresis, Proc. Natl. Acad. Sci. USA, 87, 988-992 (1990). [Pg.143]

It has also been demonstrated that the neural network approach can be utilised to predict the 3-D backbone folding of a protein related to the proteins the neural network has been trained upon (Bohr et al., 1990). A neural network is trained upon matching pairs of protein sequences and secondary structure as well as Cα distance constraints (i.e. whether the distance between two residue Cα-atoms is larger or smaller than a selected threshold... [Pg.276]

Artificial neural networks often have a layered structure as shown in Figure 8.2 (b). The first layer is the input layer, the second layer is the hidden layer, and the third layer is the output layer. Learning algorithms such as back-propagation, described in many textbooks on neural networks (Kosko 1992; Rumelhart and McClelland 1986; Zell 1994), may be used to train such networks to compute a desired output for a given input. The networks are trained by adjusting the weights as well as the thresholds. [Pg.195]
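As an illustration of that training procedure (a minimal sketch, not code from the cited textbooks; the 2-2-1 architecture, the toy XOR data, the learning rate, and all variable names are assumptions for the example), both the weights and the thresholds (biases) can be adjusted by back-propagation as follows:

```python
import numpy as np

# Minimal back-propagation sketch for a 2-2-1 layered network.
# Thresholds enter here as additive biases (a bias is a negated threshold).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # toy XOR targets

W1 = rng.normal(size=(2, 2)); b1 = np.zeros((1, 2))  # hidden weights / thresholds
W2 = rng.normal(size=(2, 1)); b2 = np.zeros((1, 1))  # output weights / thresholds

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
lr = 0.5  # arbitrary learning rate

for epoch in range(10000):
    # Forward pass through the three layers.
    h = sigmoid(X @ W1 + b1)    # hidden layer activations
    out = sigmoid(h @ W2 + b2)  # output layer activations
    # Backward pass: gradients of the squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Adjust weights and thresholds together, as the excerpt describes.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [0, 1, 1, 0] for this seed
```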

Neural networks are essentially non-linear regression models based on a binary threshold unit (McCulloch and Pitts, 1943). The structure of neural networks, called a perceptron, consists of a set of nodes at different layers, where each node of a layer is linked with all the nodes of the next layer (Rosenblatt, 1962). The role of the input layer is to feed input patterns to intermediate layers (also called hidden layers) of units, which are followed by an output layer where the result of the computation is read off. Each of these units is a neuron that computes a weighted sum of its inputs from other neurons at the previous layer, and outputs a one or a zero according to whether the sum is above or below a... [Pg.175]
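A minimal sketch of such a binary threshold unit (the weights and threshold below are arbitrary example values, not taken from the source):

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: weighted sum followed by a hard threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else 0

# Example: with these (arbitrary) weights the unit behaves like a logical AND.
print(threshold_unit([1, 1], weights=[0.6, 0.6], threshold=1.0))  # -> 1
print(threshold_unit([1, 0], weights=[0.6, 0.6], threshold=1.0))  # -> 0
```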

So, the basic neuron can be seen as having two operations, summation and thresholding, as illustrated in Figure 2.5. Other forms of thresholding and, indeed, other transfer functions are commonly used in neural network modeling; some of these will be discussed later. For input neurons, the transfer function is typically assumed to be unity (a gain of 1.0), i.e., the input signal is passed through without modification as output to the next layer. [Pg.24]
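In symbols (using the common convention of a threshold \theta subtracted from the weighted sum; this notation is assumed rather than quoted from the source), the two operations are

s = \sum_i w_i x_i, \qquad y = F(s - \theta),

where F is the transfer (thresholding) function.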

Later, Hansen et al. (1998) extended the work by using a jury of four differently trained glycosylation neural networks and one surface accessibility network. The surface accessibility network was used to derive a modulated threshold (i.e., a cutoff value for the glycosylation network output) because O-glycosylation sites were found exclusively on the surface of proteins. If the site and surroundings were predicted surface accessible, the... [Pg.133]

Heaviside function A mathematical function whose value is either 0 or 1, depending upon the sign of the input (independent variable). One of several so-called thresholding functions used in neural networks to transform the weighted sum of inputs into a neuron into a binary output response. [Pg.173]
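Written out explicitly (a standard definition; the value assigned at x = 0 varies between texts):

H(x) = \begin{cases} 0, & x < 0 \\ 1, & x \ge 0 \end{cases}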

Predictive models are built with ANNs in much the same way as they are with MLR and PLS methods: descriptors and experimental data are used to fit (or "train", in machine-learning nomenclature) the parameters of the functions until the performance error is minimized. Neural networks differ from the previous two methods in that (1) the sigmoidal shapes of the neurons' output equations better allow them to model non-linear systems and (2) they are "subsymbolic", which is to say that the information in the descriptors is effectively scrambled once the internal weights and thresholds of the neurons are trained, making it difficult to examine the final equations to interpret the influences of the descriptors on the property of interest. [Pg.368]

Clearly, the constant can be included into the threshold value B, so that the function f0(C) = 1 is not necessary. We must stress that in such a form the probabilistic approach has no tuned parameters at all. Some tuning of the naive Bayes classifier can be performed by selection of the set of molecular structure descriptors [or f(C)]. This is a wonderful feature in contrast to QSAR methods, especially Artificial Neural Networks. [Pg.194]
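A sketch of the kind of thresholded decision rule being described (one common log-likelihood-ratio form of the naive Bayes classifier; the source's exact definitions of the f(C) terms and of B are not reproduced here):

\sum_i \ln \frac{P(f_i(C) \mid \text{active})}{P(f_i(C) \mid \text{inactive})} > B \;\Longrightarrow\; C \text{ classified as active}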

Figure 13 A two-layer neural network to solve the discriminant problem illustrated in Figure 12. The weighting coefficients are shown adjacent to each connection and the threshold or bias for each neuron is given above each unit
Figure 14 Some commonly used threshold functions for neural networks: the Heaviside function (a), the linear function (b), and the sigmoidal function (c)
Figure 18 A neural network, comprising an input layer (I), a hidden layer (H), and an output layer (O). This is capable of correctly classifying the analytical data from Table 1. The required weighting coefficients are shown on each connection and the bias values for a sigmoidal threshold function are shown above each neuron
Figure 5.14 Some commonly used threshold functions for neural networks: the Heaviside...
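For concreteness, here are minimal implementations of the three threshold functions named in these captions (a sketch; the unclipped linear form and the unit slope are assumptions, as some texts clip the linear function to [0, 1]):

```python
import numpy as np

def heaviside(x):
    """Hard threshold: 0 below zero, 1 at or above zero."""
    return np.where(x >= 0, 1.0, 0.0)

def linear(x, slope=1.0):
    """Linear transfer function (unclipped here)."""
    return slope * x

def sigmoidal(x):
    """Smooth, differentiable approximation to the hard threshold."""
    return 1.0 / (1.0 + np.exp(-x))

s = np.linspace(-3, 3, 7)
print(heaviside(s), linear(s), sigmoidal(s), sep="\n")
```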
The basic feedforward neural network performs a non-linear transformation of the input data in order to approximate the output data. This net is composed of many simple, locally interacting, computational elements (nodes/neurons), where each node works as a simple processor. The schematic diagram of a single neuron is shown in Fig. 1. The input to each i-th neuron consists of an N-dimensional vector X and a single bias (threshold) b_i. Each of the input signals x_j is weighted by the appropriate weight w_ij, where j = 1, ..., N. [Pg.380]
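In the notation of this paragraph, the net input to the i-th neuron is then (a standard expression consistent with the symbols above, not quoted from the source)

s_i = \sum_{j=1}^{N} w_{ij} x_j + b_i,

and the neuron's output is F(s_i) for the chosen transfer function F.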

