
Neural network learning rate

The four experiments carried out previously with R_exp = 0.5, 1, 3 and 4 were used to train the neural network, and the experiment with R_exp = 2 was used to validate the system. Dynamic models of the process-model mismatches for three state variables of the system are considered here: the instant distillate composition (x_D), the accumulated distillate composition (x_a) and the amount of distillate (H_a). The inputs and outputs of the network are as in Figure 12.2. A multilayered feedforward network is used in this work; it is trained with the backpropagation method using a momentum term as well as an adaptive learning rate to speed up convergence. The error between the actual mismatch (obtained from simulation and experiments) and that predicted by the network is used as the error signal to train the network, as described earlier. [Pg.376]
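The following is a minimal sketch of the training scheme described above: backpropagation with a momentum term and an adaptive learning rate. The "bold driver" adaptation rule, the array shapes, and the placeholder data are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

# Sketch: one-hidden-layer feedforward network trained by backpropagation
# with a momentum term and a simple "bold driver" adaptive learning rate.
# The three outputs stand in for the mismatches in x_D, x_a and H_a;
# the training data X, T below are random placeholders.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 6, 3
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))
dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)

lr, momentum = 0.05, 0.9
prev_err = np.inf

X = rng.normal(size=(40, n_in))   # placeholder training inputs
T = rng.normal(size=(40, n_out))  # placeholder mismatch targets

for epoch in range(2000):
    # forward pass (tanh hidden layer, linear output)
    H = np.tanh(X @ W1)
    Y = H @ W2
    E = Y - T                      # error signal: predicted minus actual mismatch
    err = 0.5 * np.mean(np.sum(E**2, axis=1))

    # backward pass (gradients of the sum-of-squares error)
    gW2 = H.T @ E / len(X)
    gW1 = X.T @ ((E @ W2.T) * (1 - H**2)) / len(X)

    # momentum update: reuse a fraction of the previous weight change
    dW2 = momentum * dW2 - lr * gW2
    dW1 = momentum * dW1 - lr * gW1
    W2 += dW2; W1 += dW1

    # bold-driver adaptation: grow lr while the error falls, cut it on a rise
    lr = lr * 1.05 if err < prev_err else lr * 0.5
    prev_err = err
```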

Optimizing a neural network design requires careful selection of numerous parameters. Many of these are internal parameters that must be tuned with the help of experimental results and experience with the specific application under study. The following discussion focuses on backpropagation design choices for the learning rate, momentum term, activation function, error function, initial weights, and termination condition. [Pg.92]
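As an illustration only, these design choices can be collected as explicit hyperparameters. The names and default values below are hypothetical and are not taken from the cited source.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class BackpropConfig:
    learning_rate: float = 0.05                         # step size for weight updates
    momentum: float = 0.9                               # fraction of previous update reused
    activation: Callable = np.tanh                      # hidden-layer activation function
    error_fn: Callable = lambda e: 0.5 * np.mean(e**2)  # sum-of-squares error
    init_scale: float = 0.1                             # std of random initial weights
    max_epochs: int = 5000                              # termination: epoch limit
    target_error: float = 1e-4                          # termination: error threshold
```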

Kaiser D, Tmej C, Chiba P, Schaper KJ, Ecker G. Artificial neural networks in drug design. II. Influence of learning rate and momentum factor on the predictive ability. Sci Pharm 2000;68:57-64. [Pg.311]

R. A. Jacobs, Neural Networks, 1, 295 (1988). Increased Rates of Convergence Through Learning Rate Adaptation. [Pg.131]

A. A. Minai and R. D. Williams, in International Joint Conference on Neural Networks, M. Caudill, Ed., Lawrence Erlbaum Associates, Hillsdale, NJ, 1990, Vol. 1, pp. 676-679. Acceleration of Backpropagation Through Learning Rate and Momentum Adaptation. [Pg.131]

The parameters of the neural network model are set as follows: maximum number of training epochs, MaxEpochs = 5000; learning rate = 0.035; target error E_q = 1.0 × 10^... [Pg.454]
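A sketch of the stated stopping rule follows: training ends when the error falls below the target or the epoch limit is reached. The exponent of the target error is illegible in the source, so the value 1.0e-4 below is an assumed placeholder, and train_one_epoch is a hypothetical stand-in for one backpropagation pass.

```python
MAX_EPOCHS = 5000
LEARNING_RATE = 0.035
TARGET_ERROR = 1.0e-4   # assumed placeholder; the source gives "1.0 x 10^..."

def train_one_epoch(lr: float, err: float) -> float:
    # hypothetical stand-in for one backpropagation pass;
    # here it simply decays the error so the example runs
    return err * (1.0 - lr)

epoch, err = 0, 1.0
while epoch < MAX_EPOCHS and err > TARGET_ERROR:
    err = train_one_epoch(LEARNING_RATE, err)
    epoch += 1
print(f"stopped after {epoch} epochs, error = {err:.2e}")
```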

Luo, Z. (1991). On the convergence of the LMS algorithm with adaptive learning rate for linear feedforward neural networks. Neural Computation. [Pg.163]

Table 21.4 shows the topology and the results for the five best neural networks implemented. To evaluate the accuracy of the ANN models developed, we have used the RMSE in validation (RMSE_v). As can be observed in Table 21.4, the neural network with the lowest RMSE_v is, in this case, the one with topology 5-(4)_1-1. We chose this ANN because it presents the lowest RMSE_v and the lowest APD. The best topology developed, 5-(4)_1-1, consists of five input neurons, one middle layer with four neurons, and one output neuron in the output layer. To train this ANN model, a maximum of 750 training cycles was established; the learning rate was set at 0.60 and the momentum value at 0.80. [Pg.454]
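The sketch below reproduces this setup under stated assumptions: a 5-(4)-1 network trained by backpropagation with momentum for 750 cycles at learning rate 0.60 and momentum 0.80, then scored by validation RMSE. The data arrays are random placeholders; only the topology and the hyperparameters come from the text.

```python
import numpy as np

# 5-(4)-1 network: five inputs, one hidden layer of four neurons, one output.
# Sigmoid units with targets in (0, 1) are assumed, as is common in ANN
# packages where a learning rate of 0.60 is a sensible setting.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
Xtr = rng.normal(size=(60, 5)); ytr = rng.uniform(0.1, 0.9, (60, 1))
Xval = rng.normal(size=(20, 5)); yval = rng.uniform(0.1, 0.9, (20, 1))

W1 = rng.normal(0, 0.1, (5, 4)); W2 = rng.normal(0, 0.1, (4, 1))
dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)
lr, mom = 0.60, 0.80            # values stated in the text

for cycle in range(750):        # 750 training cycles, as stated
    H = sigmoid(Xtr @ W1)                # hidden layer, four neurons
    Y = sigmoid(H @ W2)                  # single output neuron
    delta2 = (Y - ytr) * Y * (1 - Y)     # output delta (sigmoid derivative)
    delta1 = (delta2 @ W2.T) * H * (1 - H)
    dW2 = mom * dW2 - lr * (H.T @ delta2) / len(Xtr)
    dW1 = mom * dW1 - lr * (Xtr.T @ delta1) / len(Xtr)
    W2 += dW2; W1 += dW1

pred = sigmoid(sigmoid(Xval @ W1) @ W2)
rmse_v = np.sqrt(np.mean((pred - yval) ** 2))  # RMSE on the validation set
print(f"RMSE_v = {rmse_v:.4f}")
```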

Fuzzy methods are also used to enhance the learning capabilities or performance of a neural network. This can be done by using fuzzy rules to change the learning rate, or by creating a network that works with fuzzy inputs. [Pg.285]
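As an illustration of the first idea, the sketch below uses two fuzzy rules over the recent error trend to scale the learning rate. The membership functions and rule consequents are invented for illustration and do not come from the text.

```python
def trend_memberships(delta_err: float, scale: float = 1.0):
    """Fuzzy degrees to which the error is 'falling' or 'rising'."""
    falling = max(0.0, min(1.0, -delta_err / scale))
    rising = max(0.0, min(1.0, delta_err / scale))
    return falling, rising

def fuzzy_lr_update(lr: float, delta_err: float) -> float:
    falling, rising = trend_memberships(delta_err)
    # Rule 1: IF error is falling THEN increase lr (factor 1.1)
    # Rule 2: IF error is rising  THEN decrease lr (factor 0.5)
    if falling + rising == 0:
        return lr
    # weighted-average (Sugeno-style) defuzzification of the two rules
    factor = (falling * 1.1 + rising * 0.5) / (falling + rising)
    return lr * factor

lr = 0.1
lr = fuzzy_lr_update(lr, delta_err=-0.2)  # error fell, so lr is scaled up (here by 1.1)
```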

