
Back-propagation methods

The four experiments done previously with R_exp = 0.5, 1, 3 and 4 were used to train the neural network, and the experiment with R_exp = 2 was used to validate the system. Dynamic models of the process-model mismatches for three state variables of the system are considered here: the instantaneous distillate composition (x_D), the accumulated distillate composition (x_a) and the amount of distillate (H_a). The inputs and outputs of the network are as in Figure 12.2. A multilayered feedforward network is used in this work, trained with the back-propagation method using a momentum term as well as an adaptive learning rate to speed up convergence. The error between the actual mismatch (obtained from simulation and experiments) and that predicted by the network is used as the error signal to train the network, as described earlier. [Pg.376]
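The training scheme described above, gradient-descent back-propagation accelerated by a momentum term and an adaptive learning rate, can be illustrated with a minimal sketch. The NumPy snippet below shows only the two update rules (reuse of the previous weight change as momentum, and a learning rate that grows while the error falls and shrinks when it rises) on a single weight matrix; the data, layer structure and hyper-parameter values are placeholders, not those of the distillation study cited.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (placeholders, not the distillation experiments).
X = rng.normal(size=(100, 3))
y = (X @ np.array([0.5, -1.0, 2.0]))[:, None]

W = rng.normal(scale=0.1, size=(3, 1))   # single linear layer, for brevity
dW_prev = np.zeros_like(W)               # previous update, reused by the momentum term

eta, mu = 0.01, 0.9                      # learning rate and momentum coefficient
prev_err = np.inf

for epoch in range(200):
    y_hat = X @ W
    err = np.mean((y - y_hat) ** 2)

    # Adaptive learning rate: speed up while the error keeps falling, back off otherwise.
    eta = eta * 1.05 if err < prev_err else eta * 0.7
    prev_err = err

    grad = -2.0 * X.T @ (y - y_hat) / len(X)   # dE/dW for the squared error
    dW = -eta * grad + mu * dW_prev            # gradient step plus momentum
    W += dW
    dW_prev = dW

print("final MSE:", prev_err)
```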

Among the nonlinear methods there are, besides nonlinear least squares regression (i.e. polynomial regression), the nonlinear PLS method, Alternating Conditional Expectations (ACE), SMART, and MARS. Moreover, some Artificial Neural Network techniques, such as the back-propagation method, also have to be counted among the nonlinear regression methods. [Pg.63]

Y. Fukuoka, H. Matsuki, H. Minamitani, and A. Ishida, Neural Networks, 11, 1059 (1998). A Modified Back-Propagation Method to Avoid False Local Minima. [Pg.140]

We train the network via the fast-convergence gradient-descent back-propagation method with a momentum term for the non-negative energy function (Yam and Chow, 2000). [Pg.158]

The essence of the development of the ANN is to determine the weights, which are fixed using the results recorded by the on-board flight registration system. The back-propagation method with a momentum term is used as the training algorithm, in which the learning-rate and momentum coefficients are held constant. [Pg.2032]

In the learning or training of the TDNN, the back-propagation method was used. The weights were initialized between -0.5 and 0.5. The cepstral coefficients obtained from... [Pg.566]

Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) served as the set of molecular descriptors. The task was to model the Corticosteroid Binding Globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]
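As a rough sketch of the workflow just described (not the original study), the snippet below fits a small feed-forward network, trained by back-propagation with stochastic gradient descent, to a matrix of descriptor vectors and an affinity target. The descriptor matrix, affinity values and layer sizes are random placeholders standing in for the 31-steroid CBG data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Placeholder descriptors and affinities; the actual study used spatial
# autocorrelation vectors and measured CBG affinities for 31 steroids.
X = rng.normal(size=(31, 12))
y = rng.normal(size=31)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Feed-forward network trained by back-propagation (stochastic gradient descent).
model = MLPRegressor(hidden_layer_sizes=(5,), solver="sgd",
                     learning_rate_init=0.01, momentum=0.9,
                     max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```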

Association deals with the extraction of relationships among members of a data set. The methods applied for association range from rather simple ones, e.g., correlation analysis, to more sophisticated methods like counter-propagation or back-propagation neural networks (see Sections 9.5.5 and 9.5.7). [Pg.473]

The Back-Propagation Algorithm (BPA) is a supervised learning method for training ANNs, and is one of the most common forms of training techniques. It uses a gradient-descent optimization method, also referred to as the delta rule when applied to feedforward networks. A feedforward network that has been trained with the delta rule is called a Multi-Layer Perceptron (MLP). [Pg.351]
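A minimal NumPy sketch of the delta rule applied to a one-hidden-layer perceptron is shown below. The XOR toy data, the layer sizes and the learning rate are illustrative assumptions rather than details from the cited source; the point is the forward pass, the back-propagated deltas and the gradient-descent weight updates.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# XOR toy problem: 2 inputs, 1 output (illustrative data only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5                                  # learning rate

for epoch in range(5000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass (delta rule): error times the sigmoid derivative
    delta_out = (T - Y) * Y * (1 - Y)
    delta_hid = (delta_out @ W2.T) * H * (1 - H)

    # Gradient-descent weight updates
    W2 += eta * H.T @ delta_out
    b2 += eta * delta_out.sum(axis=0)
    W1 += eta * X.T @ delta_hid
    b1 += eta * delta_hid.sum(axis=0)

print(np.round(Y, 2))   # typically approaches [[0], [1], [1], [0]]
```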

P is a vector of inputs and T a vector of target (desired) values. The command newff creates the feed-forward network and defines the activation functions and the training method. The default is Levenberg-Marquardt back-propagation training, since it is fast, but it does require a lot of memory. The train command trains the network; in this case, the network is trained for 50 epochs. The results before and after training are plotted. [Pg.423]

The general principle behind most commonly used back-propagation learning methods is the delta rule, by which an objective function involving the squares of the output errors from the network is minimized. The delta rule requires that the sigmoidal function used at each neuron be continuously differentiable. The method identifies an error associated with each neuron, at each iteration, for each cause-effect (input-output) pattern. Therefore, the error for each neuron in the output layer can be represented as ... [Pg.7]
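The excerpt is cut off before its expression; for reference, the standard textbook form of this output-layer error for a sigmoidal unit (an assumption about how the passage continues, not a quotation from it) is:

```latex
% Delta-rule error signal for output neuron j with a sigmoidal activation:
% t_j = target output, y_j = actual output; y_j (1 - y_j) is the sigmoid derivative.
\delta_j = (t_j - y_j)\, y_j \,(1 - y_j)
```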

Several nonlinear QSAR methods have been proposed in recent years. Most of these methods are based on either ANN or machine learning techniques. Both back-propagation (BP-ANN) and counterpropagation (CP-ANN) neural networks [33] were used in these studies. Because optimization of many parameters is involved in these techniques, the speed of the analysis is relatively slow. More recently, Hirst reported a simple and fast nonlinear QSAR method in which the activity surface was generated from the activities of training set compounds based on some predefined mathematical functions [34]. [Pg.313]

The back-propagation strategy is a steepest-gradient method, i.e. a local optimization technique. Therefore, it also suffers from the major drawback of these methods, namely that it can become locked in a local optimum. Many variants have been developed to overcome this drawback [20-24], but none of them really solves the problem. [Pg.677]

