
Back Propagation

The Back-Propagation Algorithm (BPA) is a supervised learning method for training ANNs, and is one of the most widely used training techniques. It uses a gradient-descent optimization method, also referred to as the delta rule when applied to feedforward networks. A feedforward network that has employed the delta rule for training is called a Multi-Layer Perceptron (MLP). [Pg.351]

If the performance index or cost function J takes the form of a summed squared error function, then [Pg.351]

If the activation function is the sigmoid function given in equation (10.56), then its derivative is [Pg.352]

Since f(x) is the neuron output yj, equation (10.61) can be written as [Pg.352]

From equation (10.60), again using the chain rule, [Pg.352]
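The equations referred to in the fragments above are not reproduced in this excerpt. As a hedged reconstruction, the standard textbook forms of the summed-squared-error cost function, the sigmoid activation, and its derivative are sketched below in LaTeX; the correspondence to the source's equation numbers (10.56)-(10.61) is an assumption.

% Standard forms assumed for the equations omitted from the excerpt (requires amsmath).
\begin{align}
  J &= \tfrac{1}{2}\sum_{j} (d_{j} - y_{j})^{2} \\  % summed-squared-error cost, d_j = desired output
  f(x) &= \frac{1}{1 + e^{-x}} \\                   % sigmoid activation
  f'(x) &= f(x)\,(1 - f(x)) = y_{j}\,(1 - y_{j})    % derivative, since f(x) is the neuron output y_j
\end{align}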


Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) served as the set of molecular descriptors. The task was to model the Corticosteroid Binding Globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]

Now, one may ask, what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
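As a rough illustration of this preprocessing step (a sketch added here, not taken from the cited text), the MATLAB fragment below compresses a descriptor matrix with SVD before it is handed to a network; the placeholder matrix X, the number of retained components k, and all variable names are assumptions.

% Minimal sketch: SVD compression of a descriptor matrix before network training.
X  = randn(100, 24);                       % placeholder matrix: rows = compounds, columns = descriptors
Xc = bsxfun(@minus, X, mean(X, 1));        % mean-centre each descriptor column
[U, S, V] = svd(Xc, 'econ');               % thin SVD of the data matrix
k      = 5;                                % number of components to retain
scores = Xc * V(:, 1:k);                   % compressed inputs for the neural network
explained = diag(S).^2 ./ sum(diag(S).^2); % fraction of variance captured per component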

To understand neural networks, especially Kohonen, counter-propagation and back-propagation networks, and their applications... [Pg.439]

Kohonen network Conceptual clustering Principal Component Analysis (PCA) Decision trees Partial Least Squares (PLS) Multiple Linear Regression (MLR) Counter-propagation networks Back-propagation networks Genetic algorithms (GA)... [Pg.442]

Supervised learning strategies are applied in counter-propagation and in back-propagation neural networks (see Sections 9.5.5 and 9.5.7, ... [Pg.455]

Besides the artificial neural networks mentioned above, there are various other types of neural networks. This chapter, however, will confine itself to the three most important types used in chemoinformatics: Kohonen networks, counter-propagation networks, and back-propagation networks. [Pg.455]

A back-propagation network usually consists of input units, one or more hidden layers and one output layer. Figure 9-16 gives an example of the architecture. [Pg.462]
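As an illustration of this layered architecture and of the weight correction shown in Figure 9-23 below, here is a minimal plain-MATLAB back-propagation loop for a 2-3-1 network; the toy data, layer sizes, learning rate, and variable names are assumptions and not material from the cited source.

% Minimal sketch: one hidden layer, sigmoid units, delta-rule weight updates.
P   = [0 0 1 1; 0 1 0 1];                        % toy inputs, one column per pattern
T   = [0 1 1 0];                                 % toy targets
W1  = rand(3, 2) - 0.5;  b1 = rand(3, 1) - 0.5;  % input  -> hidden weights and biases
W2  = rand(1, 3) - 0.5;  b2 = rand(1, 1) - 0.5;  % hidden -> output weights and biases
sig = @(x) 1 ./ (1 + exp(-x));                   % sigmoid activation
eta = 0.5;                                       % learning rate
for epoch = 1:5000
    for k = 1:size(P, 2)
        y1 = sig(W1 * P(:, k) + b1);             % forward pass, hidden layer
        y2 = sig(W2 * y1 + b2);                  % forward pass, output layer
        d2 = (T(k) - y2) .* y2 .* (1 - y2);      % output-layer error signal
        d1 = (W2' * d2) .* y1 .* (1 - y1);       % error back-propagated to hidden layer
        W2 = W2 + eta * d2 * y1';   b2 = b2 + eta * d2;     % weight correction
        W1 = W1 + eta * d1 * P(:, k)';  b1 = b1 + eta * d1;
    end
end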

Figure 9-23. Weight correction for a back-propagation network.
A detailed mathematical explanation of the adaptation of the weights is given, e.g., in Ref. [10]. The original publication of back-propagation learning is to be found in Ref. [13]. [Pg.463]

Kohonen network Counter-propagation Back-propagation... [Pg.465]

Association deals with the extraction of relationships among members of a data set. The methods applied for association range from rather simple ones, e.g., correlation analysis, to more sophisticated methods like counter-propagation or back-propagation neural networks (see Sections 9.5.5 and 9.5.7). [Pg.473]

Breindl et al. published a model based on semi-empirical quantum mechanical descriptors and back-propagation neural networks [14]. The training data set consisted of 1085 compounds, and 36 descriptors were derived from AM1 and PM3 calculations describing electronic and spatial effects. The best results, with a standard deviation of 0.41, were obtained with the AM1-based descriptors and a net architecture of 16-25-1, corresponding to 451 adjustable parameters and a ratio of 2.17 to the number of input data. For a test data set a standard deviation of 0.53 was reported, which is quite close to that of the training model. [Pg.494]
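As a quick check of the quoted figure (a note added here, not part of the original text), a fully connected 16-25-1 network with one bias per hidden and output neuron has

\[ (16 + 1)\times 25 + (25 + 1)\times 1 = 425 + 26 = 451 \]

adjustable parameters, consistent with the value reported above.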

Recently, several QSPR solubility prediction models based on a fairly large and diverse data set were generated. Huuskonen developed the models using MLRA and back-propagation neural networks (BPG) on a data set of 1297 diverse compounds [22]. The compounds were described by 24 atom-type E-state indices and six other topological indices. For the 413 compounds in the test set, MLRA gave = 0.88 and s = 0.71 and the neural network provided... [Pg.497]

Step 6 Building a Back-Propagation (BPC) Neural Network Model... [Pg.500]

Figure 10.1-3. Predicted versus experimental solubility values of 552 compounds in the test set by a back-propagation neural network with 18 topological descriptors.
Error back-propagation through plant model... [Pg.361]

Square nodes in the ANFIS structure denote parameter sets of the membership functions of the TSK fuzzy system. Circular nodes are static/non-modifiable and perform operations such as product or max/min calculations. A hybrid learning rule is used to accelerate parameter adaptation. This uses sequential least squares in the forward pass to identify consequent parameters, and back-propagation in the backward pass to establish the premise parameters. [Pg.362]

Feedforward Back-propagation Neural Network
% Network structure: 1 - 10 (tansig) - 1 (purelin)

P is a vector of inputs and T a vector of target (desired) values. The command newff creates the feed-forward network, defines the activation functions and the training method. The default is Levenberg-Marquardt back-propagation training since it is fast, but it does require a lot of memory. The train command trains the network, and in this case, the network is trained for 50 epochs. The results before and after training are plotted. [Pg.423]
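A sketch of what such a script might look like with the classic (pre-R2010b) Neural Network Toolbox syntax is given below; the data, the 1-10-1 structure and the epoch count are assumptions based on the excerpt, and the exact newff signature differs between toolbox versions.

% Minimal sketch, classic Neural Network Toolbox syntax (newff / train / sim).
P = 0:0.1:10;                  % assumed input vector
T = sin(P);                    % assumed target vector
net = newff(minmax(P), [10 1], {'tansig', 'purelin'}, 'trainlm');  % 1-10-1 network
net.trainParam.epochs = 50;    % train for 50 epochs
Y0  = sim(net, P);             % network response before training
net = train(net, P, T);        % Levenberg-Marquardt back-propagation training
Y1  = sim(net, P);             % network response after training
plot(P, T, 'o', P, Y0, '--', P, Y1, '-')   % results before and after training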

B. Standard Error-Back-Propagation Training Routine... [Pg.7]

In a standard back-propagation scheme, updating the weights is done iteratively. The weights for each connection are initially randomized when the neural network undergoes training. Then the error between the target output and the network-predicted output is back-propa-

The general principle behind most commonly used back-propagation learning methods is the delta rule, by which an objective function involving squares of the output errors from the network is minimized. The delta rule requires that the sigmoidal function used at each neuron be continuously differentiable. This method identifies an error associated with each neuron for each iteration involving a cause-effect pattern. Therefore, the error for each neuron in the output layer can be represented as ... [Pg.7]
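The equation itself is not reproduced in this excerpt; in the standard formulation (assumed here, not quoted from the source), the error signal of an output-layer neuron with a sigmoidal activation is

\[ \delta_{j} = (d_{j} - y_{j})\, y_{j}\, (1 - y_{j}) \]

where d_j is the target output and y_j the actual output of neuron j.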

The error signal from the neurons in the output layer can be easily identified. This is not so for neurons in the hidden layers. Back-propagation overcomes this difficulty by propagating the error signal backward through the network. Hence, for the hidden layers, the error signal is obtained by ... [Pg.8]
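Again the equation is missing from the excerpt; the standard back-propagated form of the hidden-layer error signal (assumed here) is

\[ \delta_{j} = y_{j}\, (1 - y_{j}) \sum_{k} \delta_{k}\, w_{kj} \]

where the sum runs over the neurons k in the following layer that neuron j feeds into.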


See other pages where Back Propagation is mentioned: [Pg.442]    [Pg.450]    [Pg.462]    [Pg.462]    [Pg.481]    [Pg.491]    [Pg.494]    [Pg.497]    [Pg.498]    [Pg.500]    [Pg.720]    [Pg.347]    [Pg.351]    [Pg.354]    [Pg.355]    [Pg.356]    [Pg.423]    [Pg.75]    [Pg.275]    [Pg.4]    [Pg.7]   
See also in source #XX -- [ Pg.356 , Pg.362 ]

See also in source #XX -- [ Pg.8 , Pg.22 ]

See also in source #XX -- [ Pg.195 ]

See also in source #XX -- [ Pg.33 , Pg.52 , Pg.55 , Pg.56 , Pg.93 ]

See also in source #XX -- [ Pg.455 ]

See also in source #XX -- [ Pg.118 ]

See also in source #XX -- [ Pg.231 , Pg.242 ]

See also in source #XX -- [ Pg.193 ]







Adjoint Frechet derivative operator and back-propagating elastic field

Artificial neural networks back-propagation

Back propagation network pattern recognition

Back propagation technique

Back-pressure drive flame propagation

Back-pressure drive flame propagation theory

Back-propagating axon potentials

Back-propagation algorithm

Back-propagation learning

Back-propagation learning algorithm

Back-propagation learning rule

Back-propagation methods

Back-propagation network

Back-propagation neural network applications

Back-propagation neural networks

Back-propagation training

Error back-propagation artificial

Error back-propagation artificial neural

Error back-propagation artificial neural networks

Feed forward back propagation

Neural back-propagation learning rule

Neural networks feedforward back-propagation

Standard back-propagation

Standard error-back-propagation

The Standard Back Propagation Algorithm
