
Neural networks learning rule

Neural network learning algorithms: BP = Back-Propagation, Delta = Delta Rule, QP = Quick-Propagation, RP = Rprop, ART = Adaptive Resonance Theory, CP = Counter-Propagation. [Pg.104]
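As a concrete illustration of the simplest entry in this list, the sketch below implements the Delta rule (Widrow-Hoff) weight update for a single linear unit. The descriptor matrix, targets, learning rate and epoch count are synthetic placeholders chosen for illustration, not data or settings from the cited source.

```python
# Minimal sketch of the Delta rule for a single linear unit (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))             # 20 training patterns, 3 inputs
y = X @ np.array([0.5, -1.0, 2.0])       # synthetic linear targets

w = np.zeros(3)                          # weights of the linear unit
eta = 0.05                               # learning rate (arbitrary choice)

for epoch in range(100):
    for x_i, t_i in zip(X, y):
        error = t_i - w @ x_i            # target minus current output
        w += eta * error * x_i           # delta rule: w <- w + eta*(t - o)*x

print("learned weights:", w)             # approaches [0.5, -1.0, 2.0]
```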

The neural network method is often described as data-driven: the weights are adjusted on the basis of data. In other words, neural networks learn from training examples and can generalize beyond the training data. Neural networks are therefore often applied to domains where the problem to be solved is only partially understood but training data are readily available. Protein secondary structure prediction is one such example. Numerous rules and statistics have been accumulated for protein secondary structure prediction over the last two decades. Nevertheless, these... [Pg.157]
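The sketch below illustrates the train-then-generalize idea on synthetic data (it is not the protein data discussed above); the descriptor matrix X, the target property y, the hidden-layer size and the 70/30 split are arbitrary choices made only for illustration.

```python
# Minimal sketch: a network learns from training examples and is then
# evaluated on data it never saw, i.e. its ability to generalize.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                          # 200 samples, 5 descriptors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

# Hold out data the network never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1)
net.fit(X_train, y_train)                              # learn from training examples

print("R^2 on training data:", net.score(X_train, y_train))
print("R^2 on unseen data  :", net.score(X_test, y_test))   # generalization
```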

Stopping criteria. A rule used to terminate the iterative training process in neural network learning or function minimization. To prevent overtraining, the stopping criterion may not be based solely on the training error; for example, performance on a validation set is often used to decide when to stop training. [Pg.188]
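A minimal sketch of such a validation-based stopping criterion ("early stopping") is shown below. The deliberately simple gradient-descent model, the patience of 10 epochs and the synthetic data are illustrative assumptions; the point is the stopping rule, not the model being trained.

```python
# Stop training when the validation error has not improved for `patience` epochs,
# and keep the weights that gave the lowest validation error.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.3 * rng.normal(size=120)

# Split into a training set and a separate validation set.
X_tr, y_tr = X[:80], y[:80]
X_val, y_val = X[80:], y[80:]

w, eta = np.zeros(4), 0.01
best_val, best_w, patience, wait = np.inf, w.copy(), 10, 0

for epoch in range(1000):
    grad = -2 * X_tr.T @ (y_tr - X_tr @ w) / len(y_tr)   # gradient of the training MSE
    w -= eta * grad                                       # one training step

    val_err = np.mean((y_val - X_val @ w) ** 2)           # monitor the validation set
    if val_err < best_val - 1e-6:
        best_val, best_w, wait = val_err, w.copy(), 0      # still improving
    else:
        wait += 1
        if wait >= patience:                               # no improvement: stop
            print(f"stopped at epoch {epoch}, best validation MSE {best_val:.4f}")
            break

w = best_w   # keep the weights with the lowest validation error
```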

Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) served as the set of molecular descriptors. The task was to model the corticosteroid-binding globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]
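The sketch below shows what such a feed-forward network trained by back-propagation looks like for a regression task of this kind. The 31 x 8 descriptor matrix, the hypothetical affinity values and the network size are random placeholders, not the steroid dataset of Ref. [2].

```python
# One-hidden-layer feed-forward network trained by back-propagation (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(31, 8))        # 31 compounds, 8 autocorrelation-type descriptors
y = rng.normal(size=(31, 1))        # hypothetical CBG affinity values

n_hidden, eta = 5, 0.05
W1 = rng.normal(scale=0.5, size=(8, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # forward pass
    H = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = H @ W2 + b2               # linear output unit
    err = out - y                   # prediction error

    # backward pass: back-propagate the squared-error gradient
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1 - H) # error propagated through the sigmoid
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2

pred = sigmoid(X @ W1 + b1) @ W2 + b2
print("final training MSE:", float(np.mean((pred - y) ** 2)))
```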

Now, one may ask: what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then SVD can clearly be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD when it is applied to the data matrix before training any neural network, such as Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
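The following sketch shows SVD used in exactly this role: the centred data matrix is projected onto its leading singular vectors, and the resulting scores replace the original variables as network inputs. The matrix dimensions and the number of retained components are arbitrary choices for illustration.

```python
# SVD of the centred data matrix as a transformation step before network training.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 20))               # 50 samples, 20 original descriptors

Xc = X - X.mean(axis=0)                     # column-centre the data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 5                                       # number of components to keep
scores = Xc @ Vt[:k].T                      # equivalently U[:, :k] * s[:k]

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print("variance retained by first", k, "components:", round(explained, 3))

# `scores` (50 x 5) would now replace X as the input block for a feed-forward,
# Kohonen or counter-propagation network.
```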

Three commonly used ANN methods for classification are the perceptron network, the probabilistic neural network, and the learning vector quantization (LVQ) network. Details on these methods can be found in several references [57,58]; only an overview of them will be presented here. In all cases, one can use all available X-variables, a selected subset of X-variables, or a set of compressed variables (e.g., PCs from PCA) as inputs to the network. As with quantitative neural networks, the network parameters are estimated by applying a learning rule to a series of samples of known class; the details will not be discussed here. [Pg.296]
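As an illustration of one of these methods, the sketch below implements the basic LVQ1 update rule on synthetic two-class data: the prototype closest to each training sample is pulled towards the sample if their classes agree and pushed away otherwise. The prototype initialisation, learning rate and data are illustrative assumptions, not values from the cited references.

```python
# Minimal LVQ1 sketch on synthetic two-class data.
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
labels = np.array([0] * 30 + [1] * 30)

# One prototype (codebook vector) per class, initialised near the data.
protos = np.array([[0.5, 0.5], [2.5, 2.5]])
proto_labels = np.array([0, 1])
eta = 0.05

for epoch in range(50):
    for x_i, c_i in zip(X, labels):
        j = np.argmin(np.linalg.norm(protos - x_i, axis=1))   # nearest prototype
        if proto_labels[j] == c_i:
            protos[j] += eta * (x_i - protos[j])               # attract
        else:
            protos[j] -= eta * (x_i - protos[j])               # repel

# Classify each sample by the label of its nearest prototype.
pred = proto_labels[np.argmin(np.linalg.norm(X[:, None] - protos, axis=2), axis=1)]
print("training accuracy:", np.mean(pred == labels))
```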

A separate class of experimental evaluation methods is based on biological mechanisms. An artificial neural net (ANN) mimics processes in the brain, in particular its layered structure and its network of synapses. On a very basic level such a network can learn rules, for example the relations between activity and component ratio or process parameters. An evolutionary strategy has been proposed by Mirodatos et al. [97] (see also Chapter 10 for related work). They combined a genetic algorithm with a knowledge-based system and added descriptors such as the catalyst pore size, the atomic or crystal ionic radius, and electronegativity. This strategy reduced the number of materials needed for a study. [Pg.123]
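Purely as an illustration of the evolutionary idea, and not the actual procedure of Ref. [97], the sketch below evolves hypothetical three-component catalyst compositions with a simple genetic algorithm; the composition encoding and the fitness function are invented placeholders.

```python
# Toy genetic algorithm evolving three-component compositions (hypothetical fitness).
import numpy as np

rng = np.random.default_rng(6)

def fitness(c):
    # Hypothetical stand-in for a measured activity; in practice this would come
    # from high-throughput experiments or a knowledge-based model.
    return -np.sum((c - np.array([0.2, 0.5, 0.3])) ** 2)

pop = rng.dirichlet(np.ones(3), size=20)           # 20 three-component mixtures

for generation in range(40):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]         # keep the better half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]    # pick two parents
        child = 0.5 * (a + b)                       # crossover by averaging
        child += rng.normal(scale=0.02, size=3)     # mutation
        child = np.clip(child, 0, None)
        children.append(child / child.sum())        # renormalise mole fractions
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best composition found:", np.round(best, 3))
```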

