
Back-propagation learning rule

Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) served as the set of molecular descriptors. The task was to model the Corticosteroid-Binding Globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]

Now, one may ask, what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD when it is applied to the data matrix before training any type of neural network, e.g., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
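A minimal sketch of what such a transformation can look like in practice is given below; the matrix shape, the random data and the number of retained components are illustrative assumptions, not taken from the cited text.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(31, 12))            # e.g. 31 compounds x 12 descriptors

    # Centre the columns, then decompose: X_c = U @ diag(S) @ Vt
    X_c = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_c, full_matrices=False)

    # Keep the leading right singular vectors and project the data onto them;
    # the resulting score matrix replaces X as the network's input.
    n_components = 3
    scores = X_c @ Vt[:n_components].T       # shape (31, 3)

The same score matrix can then be fed to a feed-forward, Kohonen or counter-propagation network in place of the raw descriptor matrix.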

The general principle behind the most commonly used back-propagation learning methods is the delta rule, by which an objective function involving the squares of the output errors from the network is minimized. The delta rule requires that the sigmoidal function used at each neuron be continuously differentiable. The method identifies an error associated with each neuron for each iteration involving a cause-effect pattern. Therefore, the error for each neuron in the output layer can be represented as ... [Pg.7]
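The excerpt truncates before the expression. In the usual notation (targets $t_k$, outputs $o_k$, net inputs $\mathrm{net}_k$, sigmoidal transfer function $f$; symbols assumed here, not taken from the excerpt), the output-layer error term of the delta rule is commonly written as

\[
\delta_k = (t_k - o_k)\, f'(\mathrm{net}_k) = (t_k - o_k)\, o_k (1 - o_k),
\]

where the second equality holds for the logistic sigmoid, whose derivative is $o_k(1 - o_k)$.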

In the following sections, the main components of a neuron, how it works and, finally, how a set of neurons is connected to yield an ANN are presented. A short description of the most common rules by which ANNs learn is also given (focused on the error back-propagation learning scheme). We also concentrate on how ANNs can be applied to perform regression tasks and, finally, present a review of published papers dealing with applications to atomic spectrometry, most of them reported recently. [Pg.250]

The Back-Propagation Algorithm (BPA) is a supervised learning method for training ANNs, and is one of the most common forms of training techniques. It uses a gradient-descent optimization method, also referred to as the delta rule when applied to feedforward networks. A feedforward network that has employed the delta rule for training is called a Multi-Layer Perceptron (MLP). [Pg.351]
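A minimal sketch of such an MLP trained by gradient descent with back-propagated errors follows; the network size, learning rate and toy target function are illustrative assumptions rather than anything prescribed by the cited text.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 2))                     # toy inputs
    y = (np.sin(np.pi * X[:, 0]) * X[:, 1]).reshape(-1, 1)    # toy targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    n_hidden, lr = 8, 0.1
    W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

    for epoch in range(2000):
        # forward pass
        h = sigmoid(X @ W1 + b1)                  # hidden activations
        out = h @ W2 + b2                         # linear output unit

        # backward pass: output error, then hidden-layer deltas (delta rule)
        err = out - y                             # derivative of 0.5 * SSE
        delta_h = (err @ W2.T) * h * (1.0 - h)    # sigmoid derivative = h(1 - h)

        # gradient-descent weight updates
        W2 -= lr * h.T @ err / len(X);     b2 -= lr * err.mean(axis=0)
        W1 -= lr * X.T @ delta_h / len(X); b1 -= lr * delta_h.mean(axis=0)

    final_out = sigmoid(X @ W1 + b1) @ W2 + b2
    print("final MSE:", float(np.mean((final_out - y) ** 2)))

Classification variants simply replace the linear output unit and squared-error measure with a sigmoid output and a suitable error function.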

Square nodes in the ANFIS structure denote parameter sets of the membership functions of the TSK fuzzy system. Circular nodes are static/non-modifiable and perform operations such as product or max/min calculations. A hybrid learning rule is used to accelerate parameter adaptation. This uses sequential least squares in the forward pass to identify the consequent parameters, and back-propagation in the backward pass to establish the premise parameters. [Pg.362]
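As a sketch of why this two-pass split works (notation assumed here, following the usual first-order TSK formulation rather than the cited text): the overall ANFIS output is linear in the consequent parameters,

\[
f = \sum_i \bar{w}_i f_i = \sum_i \bar{w}_i \,(p_i x + q_i y + r_i),
\]

where $\bar{w}_i$ are the normalised rule firing strengths. With the premise (membership-function) parameters held fixed during the forward pass, the consequent parameters $(p_i, q_i, r_i)$ can therefore be identified by (sequential) least squares; in the backward pass the output error is back-propagated and the premise parameters are adjusted by gradient descent.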

Neural network learning algorithms: BP = Back-Propagation; Delta = Delta Rule; QP = Quick-Propagation; RP = Rprop; ART = Adaptive Resonance Theory; CP = Counter-Propagation. [Pg.104]

The basic learning mechanism for networks of multilayer neurons is the generalized delta rule, commonly referred to as back propagation. This learning rule is more complex than that employed with the simple perceptron unit because of the greater information content associated with the continuous output variable compared with the binary output of the perceptron. [Pg.150]
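In the usual notation (assumed here, not quoted from the text), the "generalised" part of the rule is the recursion that assigns an error to a hidden unit $j$ from the errors of the units $k$ it feeds:

\[
\delta_j = f'(\mathrm{net}_j) \sum_k \delta_k \, w_{kj},
\]

which is what allows the continuously valued, differentiable units of a multilayer network to be trained, in contrast to the thresholded output of the simple perceptron.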

Schneider and Oliver's approach exhibits several innovations. One is the decomposition of complex tasks, which facilitates sequential controlled processing. The end result is performance in a reasonable time frame. Most connectionist models take unreasonably long to learn simple patterns. The Schneider and Oliver model works very quickly. A second innovation is the generation and use of rules to operate on the data networks. Thus, learning occurs in two ways in this model, by back propagation following multiple presentations of stimuli and by direct instruction from the controller network. [Pg.338]

In neural net jargon, the neuron is known as a perceptron (Rosenblatt, 1958). The learning rule for these multilayer perceptrons is called the back-propagation rule. It is usually ascribed to Werbos's 1974 thesis (Werbos, 1993), but was popularized by Rumelhart and McClelland (1986), since when there has been a revival of interest in neural networks. [Pg.355]

Usually, back-propagation is chosen as the learning process of the ANN. Back-propagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. The governing equations of the process are presented below. [Pg.115]
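The excerpt ends before the equations themselves. A standard statement of the rule, with learning rate $\eta$, differentiable transfer function $f$ and the usual notation (assumed here, not reproduced from the cited text), is

\[
\Delta w_{ij} = \eta \, \delta_j \, o_i, \qquad
\delta_j =
\begin{cases}
(t_j - o_j)\, f'(\mathrm{net}_j) & \text{output layer},\\[4pt]
f'(\mathrm{net}_j) \sum_k \delta_k \, w_{kj} & \text{hidden layers},
\end{cases}
\]

where $o_i$ is the output of the unit feeding weight $w_{ij}$; momentum and adaptive-learning-rate refinements are omitted.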





