Back-propagation training

P is a vector of inputs and T a vector of target (desired) values. The command newff creates the feed-forward network and defines the activation functions and the training method. The default is Levenberg-Marquardt back-propagation training, which is fast but requires a lot of memory. The train command trains the network; in this case the network is trained for 50 epochs. The results before and after training are plotted. [Pg.423]
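A minimal sketch of the workflow just described, written in the classic (pre-R2010b) Neural Network Toolbox syntax with hypothetical data; the exact newff argument list differs between toolbox versions, and the hidden layer of 10 tansig neurons is an assumption for illustration.

% Minimal sketch -- classic Neural Network Toolbox syntax, hypothetical P and T
P = 0:0.1:1;                                   % inputs
T = sin(2*pi*P);                               % target (desired) values
net = newff(minmax(P), [10 1], {'tansig','purelin'}, 'trainlm');  % Levenberg-Marquardt training
Ybefore = sim(net, P);                         % network response before training
net.trainParam.epochs = 50;                    % train for 50 epochs
net = train(net, P, T);
Yafter = sim(net, P);                          % network response after training
plot(P, T, 'b', P, Ybefore, 'r--', P, Yafter, 'g');
legend('target', 'before training', 'after training');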

B. Standard Error-Back-Propagation Training Routine... [Pg.7]

As a comparison, the results from a back-propagation training routine with a gradient search for PTE is... [Pg.17]

Figure 20 Feed-forward neural network training and testing results with back-propagation training for solvent activity predictions in polar binaries (with learning parameter η = 0.1).
An example of a non-covalent MIP sensor array is shown in Fig. 21.14. Xylene-imprinted poly(styrenes) (PSt) and poly(methacrylates) (PMA) with 70 and 85% cross-linker have been used for the detection of o- and p-xylene. The detection was performed in the presence of 20-60% relative humidity to simulate environmental conditions. In contrast to the calixarene/urethane layers mentioned before, p-xylene-imprinted PSts still show a better sensitivity to o-xylene. The inversion of the xylene sensitivities can be achieved with PMAs and higher cross-linker ratios. Because of the humidity, multivariate calibration of the array with partial least squares (PLS) and artificial neural networks (ANN) is performed. The evaluated xylene detection limits are in the lower ppm range (Table 21.2), with neural networks using back-propagation training and sigmoid transfer functions providing more accurate o- and p-xylene concentrations than the PLS analyses. [Pg.524]

A fast back-propagation training algorithm, available in the Neural Network Toolbox in MATLAB, can be used. [Pg.246]
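In the classic toolbox syntax, the training algorithm is selected through the final argument of newff. The passage does not name the particular fast variant, so resilient back-propagation ('trainrp') is used below purely as an illustrative stand-in; 'traingdx' (gradient descent with momentum and adaptive learning rate) would be another option.

P = 0:0.1:1;  T = sin(2*pi*P);   % hypothetical data, as in the earlier sketch
% 'trainrp' stands in here for the unspecified fast back-propagation variant
net = newff(minmax(P), [10 1], {'tansig','purelin'}, 'trainrp');
net = train(net, P, T);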

Let us start with a classic example. We had a dataset of 31 steroids. Spatial autocorrelation vectors (more about autocorrelation vectors can be found in Chapter 8) served as the molecular descriptors. The task was to model the Corticosteroid-Binding Globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]

Now, one may ask, what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
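A minimal sketch of SVD used as a data transformation before network training, with a hypothetical descriptor matrix X; the number of retained components k is an assumption for illustration.

X = randn(100, 20);                         % hypothetical data matrix: 100 samples x 20 descriptors
Xc = X - repmat(mean(X, 1), size(X, 1), 1); % mean-center the columns
[U, S, V] = svd(Xc, 'econ');                % singular value decomposition
k = 5;                                      % number of components to keep (problem-dependent)
Xred = Xc * V(:, 1:k);                      % reduced-dimension inputs for the network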

Breindl et al. published a model based on semi-empirical quantum mechanical descriptors and back-propagation neural networks [14]. The training data set consisted of 1085 compounds, and 36 descriptors were derived from AM1 and PM3 calculations describing electronic and spatial effects. The best results, with a standard deviation of 0.41, were obtained with the AM1-based descriptors and a net architecture of 16-25-1, corresponding to 451 adjustable parameters and a ratio of 2.17 to the number of input data. For a test data set a standard deviation of 0.53 was reported, which is quite close to that of the training model. [Pg.494]
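The figure of 451 adjustable parameters follows directly from the fully connected 16-25-1 architecture: each of the 25 hidden neurons carries 16 weights plus a bias, and the single output neuron carries 25 weights plus a bias.

% Adjustable parameters of a fully connected 16-25-1 network (weights + biases)
nHidden = 16*25 + 25;          % 425
nOutput = 25*1 + 1;            % 26
nParams = nHidden + nOutput    % 451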

The Back-Propagation Algorithm (BPA) is a supervised learning method for training ANNs, and is one of the most common training techniques. It uses a gradient-descent optimization method, also referred to as the delta rule when applied to feedforward networks. A feedforward network that has employed the delta rule for training is called a Multi-Layer Perceptron (MLP). [Pg.351]
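A minimal sketch of gradient-descent (delta rule) training for a one-hidden-layer MLP with sigmoid units; the network size, learning rate, number of epochs, and data are all assumed values chosen only to make the example self-contained.

% Sketch: batch gradient descent for a 2-5-1 MLP on hypothetical data
X = rand(50, 2);                              % 50 training patterns, 2 inputs
T = double(sum(X, 2) > 1);                    % target: 1 if x1 + x2 > 1, else 0
nH = 5;  eta = 0.5;                           % hidden units and learning rate (assumed)
W1 = 0.1*randn(2, nH);  b1 = zeros(1, nH);    % input-to-hidden weights and biases
W2 = 0.1*randn(nH, 1);  b2 = 0;               % hidden-to-output weights and bias
sig = @(z) 1./(1 + exp(-z));                  % sigmoid activation
for epoch = 1:2000
    H = sig(X*W1 + repmat(b1, size(X, 1), 1));   % forward pass, hidden layer
    Y = sig(H*W2 + b2);                          % forward pass, output layer
    dY = (Y - T) .* Y .* (1 - Y);                % output-layer deltas
    dH = (dY*W2') .* H .* (1 - H);               % hidden-layer deltas (back-propagated)
    W2 = W2 - eta * (H'*dY) / size(X, 1);        % gradient-descent updates
    b2 = b2 - eta * mean(dY);
    W1 = W1 - eta * (X'*dH) / size(X, 1);
    b1 = b1 - eta * mean(dH, 1);
end
mse = mean((Y - T).^2)                        % final training error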

In a standard back-propagation scheme, updating the weights is done iteratively. The weights for each connection are initially randomized when the neural network undergoes training. Then the error between the target output and the network-predicted output is back-propagated...

Figure 16 Root-mean-squared error progression plot for Fletcher nonlinear optimization and back-propagation algorithms during training. Figure 16 Root-mean-squared error progression plot for Fletcher nonlinear optimization and back-propagation algorithms during training.
Several nonlinear QSAR methods have been proposed in recent years. Most of these methods are based on either ANN or machine learning techniques. Both back-propagation (BP-ANN) and counterpropagation (CP-ANN) neural networks [33] were used in these studies. Because optimization of many parameters is involved in these techniques, the speed of the analysis is relatively slow. More recently, Hirst reported a simple and fast nonlinear QSAR method in which the activity surface was generated from the activities of training set compounds based on some predefined mathematical functions [34]. [Pg.313]

As described in Section 44.5.5, the weights are adapted along the gradient that minimizes the error in the training set, using the back-propagation strategy. One iteration is not sufficient to reach the minimum in the error surface. Care must be taken that the sequence of input patterns is randomized at each iteration; otherwise, bias can be introduced. Several (50 to 5000) iterations are typically required to reach the minimum. [Pg.674]
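A minimal sketch of the randomization point: each iteration (epoch) draws a fresh random permutation of the training patterns before the per-pattern weight updates. A single sigmoid unit is used to keep the example short, and the data, learning rate, and iteration count are assumed values.

% Sketch: per-pattern (online) updates with the pattern order reshuffled each iteration
X = rand(50, 2);  T = double(sum(X, 2) > 1);  % hypothetical data
w = zeros(2, 1);  b = 0;  eta = 0.5;          % single sigmoid unit, assumed learning rate
sig = @(z) 1./(1 + exp(-z));
for epoch = 1:500                             % within the 50-5000 range cited in the text
    order = randperm(size(X, 1));             % new random pattern sequence each iteration
    for i = order
        y = sig(X(i, :)*w + b);
        delta = (y - T(i)) * y * (1 - y);     % error term back-propagated through the sigmoid
        w = w - eta * X(i, :)' * delta;       % gradient-descent (delta rule) update
        b = b - eta * delta;
    end
end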

