
Standard back-propagation

In a standard back-propagation scheme, updating the weights is done iteratively. The weights for each connection are initially randomized when the neural network undergoes training. Then the error between the target output and the network-predicted output is back-propa-... [Pg.7]
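A minimal sketch of the iterative update described above, assuming a squared-error loss and plain gradient descent; the learning rate eta, the single linear node, and the toy data are illustrative placeholders, not part of the excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights for each connection are initially randomized (small values).
w = rng.normal(scale=0.1, size=3)

def update(w, x, target, eta=0.1):
    """One iterative weight update: the error between the target output
    and the predicted output is propagated back onto the weights."""
    pred = x @ w            # network prediction (single linear node for brevity)
    error = pred - target   # prediction error
    grad = error * x        # dE/dw for squared error E = 0.5 * error**2
    return w - eta * grad   # gradient-descent step

w = update(w, x=np.array([0.5, -1.0, 2.0]), target=1.0)
```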

With standard back-propagation, the most common type of neural network, there are a number of input nodes (equal to the number of inputs), each connected to every node of a hidden layer, which are in turn each connected to the output node(s) (Fig. 10.11). Each node in the input layer brings into the network the value of one independent variable. The hidden layer nodes (called hidden because they are hidden from the outside world) do most of the work (Smith, 1993). Each output node passes a single dependent variable out of the network. [Pg.355]

Figure 10.11 Structure of a standard back-propagation 8-3-1 neural network.
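A sketch of the 8-3-1 layout of Figure 10.11, assuming sigmoid hidden units and a linear output node; the weight names and random initialization are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# 8 input nodes -> 3 hidden nodes -> 1 output node, fully connected.
W1 = rng.normal(scale=0.1, size=(8, 3))   # input-to-hidden weights
b1 = np.zeros(3)
W2 = rng.normal(scale=0.1, size=(3, 1))   # hidden-to-output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """x carries the values of the eight independent variables;
    the single output is the predicted dependent variable."""
    h = sigmoid(x @ W1 + b1)   # hidden layer does most of the work
    return h @ W2 + b2

y = forward(rng.normal(size=8))
```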
However, like other regression methods, standard back-propagation neural nets are still prone to overtraining, overfitting, and validation problems. They introduce an additional problem related to overfitting—the need to optimize the neural network architecture. We summarize a number of developments in neural nets, from our work and that of others, which have overcome these shortcomings and allow neural networks to develop very robust models for use in combinatorial discovery. [Pg.331]

Research in neural methods and their applications to chemistry is an active area. Techniques have been devised that overcome the weaknesses of standard back-propagation neural nets, and novel neural net architectures have been devised that have not yet been applied to combinatorial discovery and bioactive lead development. Another area of active research is the discovery of better molecular representations that more accurately capture molecular properties... [Pg.346]

Kyngas and Valjakka have developed an evolutionary neural network (ENN) for modeling multifactor data (145). ENNs can remove insignificant descriptors, choose the size of the hidden layer, and fine-tune the parameters needed in training the network. They found that evolutionary neural networks gave more accurate predictions than statistical methods and standard back-propagation neural networks. [Pg.353]
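The excerpt does not specify how the ENN of Kyngas and Valjakka works; purely as a loose illustration of the idea of evolving which descriptors to keep and how large the hidden layer should be, one might sketch a hill-climbing variant like the one below. All names, including the fitness_fn callback, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def evolve(fitness_fn, n_descriptors, generations=50):
    """Hill-climbing caricature of an evolutionary search over the
    descriptor mask and the hidden-layer size."""
    mask = np.ones(n_descriptors, dtype=bool)   # which descriptors are kept
    hidden = 5                                  # current hidden-layer size
    best = fitness_fn(mask, hidden)
    for _ in range(generations):
        new_mask = mask.copy()
        flip = rng.integers(n_descriptors)
        new_mask[flip] = ~new_mask[flip]        # drop or restore one descriptor
        new_hidden = max(1, hidden + rng.integers(-1, 2))  # grow/shrink hidden layer
        score = fitness_fn(new_mask, new_hidden)
        if score > best:                        # keep the mutation only if it helps
            mask, hidden, best = new_mask, new_hidden, score
    return mask, hidden
```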

After the specific initial structure of the neural network is determined, it still needs to be trained to learn the process. Different methods of training exist, with the standard back-propagation algorithm (Rumelhart et al., 1986) being the most popular. This algorithm will be explained in the next paragraph. [Pg.363]
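A compact sketch of one epoch of standard back-propagation training (the generalized delta rule) for a one-hidden-layer network, assuming sigmoid hidden units, a linear output, and a squared-error loss; the layer sizes, learning rate, and toy data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_epoch(W1, b1, W2, b2, X, y, eta=0.05):
    """One pass over the training set: forward pass, then the output
    error is propagated backwards to update both weight layers."""
    for x, t in zip(X, y):
        # forward pass
        h = sigmoid(x @ W1 + b1)
        out = h @ W2 + b2
        # backward pass
        delta_out = out - t                       # error at the output node
        delta_h = (W2 @ delta_out) * h * (1 - h)  # error propagated to the hidden layer
        # gradient-descent updates
        W2 -= eta * np.outer(h, delta_out)
        b2 -= eta * delta_out
        W1 -= eta * np.outer(x, delta_h)
        b1 -= eta * delta_h
    return W1, b1, W2, b2

# toy data: 20 samples, 4 inputs, 1 output
X = rng.normal(size=(20, 4))
y = rng.normal(size=(20, 1))
W1, b1 = rng.normal(scale=0.1, size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(scale=0.1, size=(3, 1)), np.zeros(1)
W1, b1, W2, b2 = train_epoch(W1, b1, W2, b2, X, y)
```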

Breindl et al. published a model based on semi-empirical quantum-mechanical descriptors and back-propagation neural networks [14]. The training data set consisted of 1085 compounds, and 36 descriptors were derived from AM1 and PM3 calculations describing electronic and spatial effects. The best results, with a standard deviation of 0.41, were obtained with the AM1-based descriptors and a 16-25-1 net architecture, corresponding to 451 adjustable parameters and a ratio of 2.17 to the number of input data. For a test data set a standard deviation of 0.53 was reported, which is quite close to that of the training model. [Pg.494]
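The 451 adjustable parameters quoted for the 16-25-1 architecture can be reproduced by counting weights and biases, assuming one bias per hidden and per output node (an assumption about how the count was made, not stated in the excerpt).

```python
# 16 inputs -> 25 hidden -> 1 output, with a bias on each hidden and output node
n_in, n_hidden, n_out = 16, 25, 1
n_params = (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)
print(n_params)  # 451
```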

B. Standard Error-Back-Propagation Training Routine... [Pg.7]

The overall process is repeated many times until the ANN converges to a point where the weights do not change and a model is obtained that correctly predicts (classifies) all the standards [38]. Frequently, convergence is not reached and a preset threshold error is allowed for. Figure 5.5 presents a general scheme with some more details of the back-propagation process. [Pg.260]
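A sketch of the stopping test implied above, checking both for stable weights and for a preset error threshold; the function name, tolerances, and the list-of-weight-arrays convention are illustrative assumptions.

```python
import numpy as np

def training_finished(error, old_weights, new_weights,
                      error_threshold=1e-3, weight_tol=1e-6):
    """Stop when the weights have essentially stopped changing, or, if full
    convergence is not reached, when the error drops below a preset threshold."""
    weights_stable = all(
        np.max(np.abs(new - old)) < weight_tol
        for new, old in zip(new_weights, old_weights)
    )
    return weights_stable or error < error_threshold
```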

The earliest neural network attempt for protein tertiary structure prediction was made by Bohr et al. (1990). They predicted binary distance constraints for the Cα atoms in the protein backbone using a standard three-layer back-propagation network and the BIN20 sequence-encoding method for 61-amino-acid windows. The output layer had 33 units: three for the 3-state secondary structure prediction, and the remaining 30 to measure the distance constraints between the central amino acid and the 30 preceding residues. [Pg.121]
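For orientation, the layer sizes implied by the excerpt, assuming BIN20 denotes a 20-unit binary (one-hot) encoding per residue; that encoding detail is an assumption and the paper's exact scheme may differ.

```python
window = 61             # residues per input window
bits_per_residue = 20   # assumed one-hot BIN20 encoding (assumption)
n_inputs = window * bits_per_residue   # 1220 input units under this assumption
n_outputs = 3 + 30      # 3 secondary-structure units + 30 distance-constraint units = 33
print(n_inputs, n_outputs)
```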

The process of extracting rules from a trained network can be made much easier if the complexity of the network has first been reduced. Furthermore, it is expected that fewer connections will result in more concise rules. Setiono (1997a) described an algorithm for extracting rules from a pruned network. The network was a standard three-layer feedforward back-propagation network trained with a pre-specified accuracy rate. The pruning process attempted to eliminate as many connections as possible while maintaining the accuracy rate. [Pg.152]
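Setiono's pruning algorithm itself relies on a penalty-based training criterion not reproduced here; the sketch below substitutes a simpler magnitude-based rule purely to illustrate the idea of removing connections one at a time while checking that a pre-specified accuracy is maintained. The accuracy_fn callback and other names are hypothetical.

```python
import numpy as np

def prune_smallest(weights, accuracy_fn, min_accuracy):
    """Repeatedly zero out the smallest-magnitude remaining connection,
    keeping each removal only if accuracy stays above the required level."""
    w = weights.copy()
    while True:
        active = np.flatnonzero(w)
        if active.size == 0:
            break
        idx = active[np.argmin(np.abs(w[active]))]
        saved = w[idx]
        w[idx] = 0.0
        if accuracy_fn(w) < min_accuracy:
            w[idx] = saved   # restore: pruning this connection hurts accuracy
            break
        # otherwise keep the connection removed and try the next smallest one
    return w
```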




