
Adaline networks

The problem was first addressed by Cover ([cover64], [cover65]), who found a lower bound on the minimum number of synaptic weights needed for an ADALINE network (see section 10.5.1) to realize any Boolean function. More recently, Cover's early work has been extended by Baum ([baum88a], [baum89]), who has found a... [Pg.550]

Derks et al. [70] employed ANNs to cancel out noise in ICP (inductively coupled plasma) measurements. The results of the neural networks (an Adaline network and a multi-layer feed-forward network) were compared with the more conventional Kalman filter. [Pg.272]
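To make the comparison concrete, here is a minimal sketch (not the implementation from [70]) of the role an Adaline typically plays in noise cancellation: it is an LMS adaptive filter that learns to predict the interference from a correlated reference channel and subtracts it. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def lms_noise_canceller(d, ref, n_taps=8, lr=0.01):
    """Adaptive noise cancellation with an LMS (Widrow-Hoff) filter.

    d:   primary channel, 1-D array of signal + correlated noise.
    ref: reference channel, 1-D array correlated with the noise only.
    Returns e, the residual, which converges toward the clean signal.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        x = ref[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                     # current estimate of the noise
        e[n] = d[n] - y               # subtract it; residual is the output
        w += lr * e[n] * x            # LMS weight update
    return e
```

The residual e plays the role of the cleaned signal; for stable convergence the step size lr must be small relative to the power of the reference input.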

Some historically important artificial neural networks are Hopfield Networks, Perceptron Networks and Adaline Networks, while the best known are Backpropagation Artificial Neural Networks (BP-ANN), Kohonen Networks (K-ANN, or Self-Organizing Maps, SOM), Radial Basis Function Networks (RBFN), Probabilistic Neural Networks (PNN), Generalized Regression Neural Networks (GRNN), Learning Vector Quantization Networks (LVQ), and Adaptive Bidirectional Associative Memory (ABAM). [Pg.59]

Between 1959 and 1960, Bernard Widrow and Marcian Hoff of Stanford University in the United States developed the ADALINE (ADAptive LINear Elements) and MADALINE (Multiple ADAptive LINear Elements) models. These were the first neural networks applied to real problems; the ADALINE model was used as a filter to remove echoes from telephone lines. The capabilities of these models were again shown to be limited by Minsky and Papert (1969). The period between 1969 and 1981 nonetheless brought much attention to neural networks: their capabilities were blown far out of proportion by writers and producers of books and movies, and the belief that such networks could do anything led to disappointment when people realized that this was not so. [Pg.913]

This rule is used in the adaline layer of the adaline and madaline networks, for which the only allowed output values are +1 and -1. Weights can change even if the output is correct. [Pg.83]
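A minimal sketch of this rule for a single adaline, assuming bipolar targets (function names and defaults below are illustrative, not from the excerpted text):

```python
import numpy as np

def train_adaline(X, t, lr=0.1, epochs=50):
    """Widrow-Hoff (delta/LMS) training of a single adaline.

    X: (n_samples, n_features) array of inputs.
    t: bipolar targets, each +1 or -1.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            s = w @ x + b          # linear activation
            err = target - s       # error on the raw sum, not on sign(s)
            w += lr * err * x      # delta-rule weight update
            b += lr * err
    return w, b

def adaline_output(X, w, b):
    """Threshold the linear sums to the allowed bipolar values."""
    return np.where(X @ w + b >= 0.0, 1, -1)

# Example: learning AND on bipolar inputs.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1], dtype=float)
w, b = train_adaline(X, t)
print(adaline_output(X, w, b))   # -> [-1 -1 -1  1] once converged
```

Because err is computed from the raw sum s rather than from sign(s), a pattern whose thresholded output is already correct still moves the weights until s itself approaches the target value of +1 or -1, which is the behaviour noted above.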

In the madaline network, the PEs (processing elements) in the adaline layer compete for learning, the winner being the PE whose weighted sum is closest to zero but whose output is wrong. Only the winning PE learns. The learning rate is usually set to 1. [Pg.83]
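A hedged sketch of this competition, assuming a majority-vote combiner on top of the adaline layer (the excerpt specifies only how the adaline layer learns; the function and variable names are illustrative):

```python
import numpy as np

def madaline_step(W, B, x, target, lr=1.0):
    """One competitive update of the adaline layer (MADALINE-style).

    W: (n_units, n_features) adaline-layer weights, B: (n_units,) biases.
    The majority-vote combiner below is an assumption for this sketch.
    """
    s = W @ x + B                          # weighted sums of all adalines
    y = np.where(s >= 0.0, 1, -1)          # bipolar unit outputs
    layer_out = 1 if y.sum() >= 0 else -1  # assumed majority-vote output
    if layer_out != target:
        wrong = np.flatnonzero(y != target)     # units with the wrong output
        k = wrong[np.argmin(np.abs(s[wrong]))]  # weighted sum closest to zero
        err = target - s[k]
        W[k] += lr * err * x               # only the winning PE learns
        B[k] += lr * err
    return W, B
```

Picking the wrong-output unit with the smallest |s| is the "minimal disturbance" idea: that adaline needs the least weight change to flip its decision, so correcting it perturbs the rest of the network as little as possible.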

D. Specht, IEEE Trans. Neural Networks, 1, 111 (1990). Probabilistic Neural Networks and the Polynomial Adaline as Complementary Techniques for Classification. [Pg.136]

Widrow, B. 1962. Generalization and information storage in networks of Adaline neurons. In Self-Organizing Systems, Yovits, M.C., Jacobi, G.T. and Goldstein, G. eds. pp. 435-461. Spartan Books, Washington, D.C. [Pg.2063]


See other pages where Adaline networks are mentioned: [Pg.122] [Pg.8] [Pg.797] [Pg.650] [Pg.86] [Pg.2039]
See also in source #XX -- [Pg.63, Pg.83, Pg.86]







