
Algorithms, supervised learning

The Back-Propagation Algorithm (BPA) is a supervised learning method for training ANNs and one of the most common training techniques. It uses gradient-descent optimization, also referred to as the delta rule when applied to feedforward networks. A feedforward network trained with the delta rule is called a Multi-Layer Perceptron (MLP). [Pg.351]
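As a concrete illustration, the minimal sketch below trains a small feedforward network with backpropagation (gradient descent) in plain NumPy. The two-layer architecture, sigmoid activations, learning rate, and XOR task are illustrative assumptions, not details taken from the cited source.

```python
import numpy as np

# Minimal MLP trained with backpropagation (gradient descent).
# Architecture, learning rate, and the XOR task are illustrative choices.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5                                   # learning rate (step size)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # Forward pass through the feedforward network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error layer by layer (delta rule).
    delta_out = (out - y) * out * (1 - out)       # error signal at the output
    delta_h = (delta_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0] for most initializations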

Dozens of neural network architectures are in use. A simple taxonomy divides them into two types based on learning algorithm (supervised, unsupervised) and into subtypes based on whether they are feedforward or feedback networks. In this chapter, two other commonly used architectures, radial basis functions and Kohonen self-organizing architectures, will be discussed. Additionally, variants of multilayer perceptrons that have enhanced statistical properties will be presented. [Pg.41]
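To make the unsupervised branch of this taxonomy concrete, the following sketch performs one training sweep of a Kohonen self-organizing map. The map size, learning rate, and Gaussian neighborhood width are assumptions chosen for illustration, not values from the source.

```python
import numpy as np

# One training sweep of a Kohonen self-organizing map (unsupervised).
# Grid size, learning rate, and neighborhood radius are illustrative choices.
rng = np.random.default_rng(0)

grid = 5                                  # 5 x 5 map of units
weights = rng.random((grid, grid, 3))     # each unit holds a 3-d weight vector
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

data = rng.random((200, 3))               # unlabeled training vectors
lr, sigma = 0.5, 1.5                      # step size and neighborhood radius

for x in data:
    # Find the best-matching unit (closest weight vector to the input).
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dist.argmin(), dist.shape)

    # Gaussian neighborhood: units near the BMU on the grid move the most.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))

    # Pull each weight vector toward the input, scaled by the neighborhood.
    weights += lr * h[..., None] * (x - weights)
```

Note that no target labels appear anywhere in this loop; the map organizes itself purely from the structure of the inputs, which is what distinguishes it from the supervised backpropagation example above.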

In addition to the training algorithms for supervised learning described above, several practical problems need to be addressed during training. One serious problem with multilayer... [Pg.59]

Møller, M. F. (1993). A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 6, 525-533. [Pg.126]

Abstract. Artificial neural networks (ANNs) are useful components in today's data analysis toolbox. They were initially inspired by the brain but are today accepted to be quite different from it. ANNs typically lack scalability and mostly rely on supervised learning, both of which are biologically implausible features. Here we describe and evaluate a novel cortex-inspired hybrid algorithm. It is found to perform on par with a Support Vector Machine (SVM) in classification of activation patterns from the rat olfactory bulb. On-line unsupervised learning is shown to provide significant tolerance to sensor drift, an important property of algorithms used to analyze chemo-sensor data. Scalability of the approach is illustrated on the MNIST dataset of handwritten digits. [Pg.34]
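The activation-pattern data from that study are not reproduced here, so the hedged sketch below uses synthetic Gaussian clusters with scikit-learn's SVC merely to illustrate the kind of SVM classification baseline the abstract compares against; the class count, feature dimension, and kernel settings are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hedged sketch of an SVM classification baseline of the kind the abstract
# describes. Synthetic Gaussian clusters stand in for the rat olfactory
# bulb activation patterns, which are not available here.
rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 3, 50, 20

X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="rbf", C=1.0)            # standard kernel SVM baseline
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```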

