
Training of Neural Networks

The fundamental idea behind training, for all neural network architectures, is this: pick a set of weights (often randomly), apply the inputs to the network, and see how the network performs with this set of weights. If it does not perform well, modify the weights by some algorithm (specific to each architecture) and repeat the procedure. This iterative process is continued until some pre-specified stopping criterion has been reached. [Pg.51]
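
A minimal sketch of this generic loop in Python, with random weight perturbation standing in for the architecture-specific update rule; the network object, loss function, and data are hypothetical placeholders, not something described in the text.

```python
import random

def train(network, loss, data, max_epochs=1000, tol=1e-3):
    """Generic training loop: evaluate, modify the weights, repeat."""
    best = loss(network, data)                 # performance with the initial (random) weights
    for epoch in range(max_epochs):
        if best < tol:                         # pre-specified stopping criterion
            break
        old = list(network.weights)
        network.weights = [w + random.gauss(0.0, 0.1) for w in old]
        new = loss(network, data)              # how does this set of weights perform?
        if new < best:
            best = new                         # keep the modification if it helps...
        else:
            network.weights = old              # ...otherwise undo it and try again
    return network
```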

A training pass through all vectors of the input data is called an epoch. Iterative changes can be made to the weights after each input vector, or only after all input vectors have been processed. Typically, the weights are modified epoch by epoch. [Pg.51]
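
The two update schedules can be illustrated on a single linear neuron trained by gradient descent; this is only a sketch, and the learning rate, data, and weights below are invented for illustration.

```python
import numpy as np

def train_per_pattern(w, X, y, lr=0.01):
    """Modify the weights after every input vector (online updating)."""
    for x_i, y_i in zip(X, y):
        w = w + lr * (y_i - x_i @ w) * x_i
    return w

def train_per_epoch(w, X, y, lr=0.01):
    """Accumulate corrections over all input vectors, then update once per epoch."""
    return w + lr * X.T @ (y - X @ w) / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])             # targets generated from known weights
w = rng.normal(size=3)
for epoch in range(500):                       # one pass through all vectors = one epoch
    w = train_per_epoch(w, X, y)
```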


M. Bos and H. T. Weber, Comparison of the training of neural networks for quantitative X-ray fluorescence spectrometry by a genetic algorithm and backward error propagation, Anal. Chim. Acta, 247(1), 1991, 97-105. [Pg.282]

Controller emulation. A simple application in control is the use of neural networks to emulate the operation of existing controllers. It may be that a nonlinear plant requires several tuned PID controllers to operate over the full range of control actions, or that an LQ optimal controller has difficulty running in real time. Figure 10.28 shows how the control signal from an existing controller may be used to train, and finally to be replaced by, a neural network controller. [Pg.361]
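
A sketch of the idea under stated assumptions: a hypothetical PID controller is logged while acting on an error signal, and a small network is fitted to reproduce its control signal. The PID gains, signals, and network size are invented; scikit-learn's MLPRegressor merely stands in for whatever network the application would use.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Log the existing controller: a hypothetical PID acting on a recorded error signal.
Kp, Ki, Kd, dt = 2.0, 0.5, 0.1, 0.01
e = np.sin(np.linspace(0.0, 20.0, 2000))       # recorded error r(t) - y(t)
integral = np.cumsum(e) * dt
derivative = np.gradient(e, dt)
u = Kp * e + Ki * integral + Kd * derivative   # recorded control signal

# Train a network to reproduce the mapping (e, integral, derivative) -> u.
X = np.column_stack([e, integral, derivative])
emulator = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
emulator.fit(X, u)
# Once the emulation error is acceptable, the network can replace the controller.
```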

All closed-loop control systems operate by measuring the error between desired inputs and actual outputs. This does not, in itself, generate the control action errors that may be back-propagated to train a neural network controller. If, however, a neural network model of the plant exists, back-propagation of the system error (r(kT) - y(kT)) through this network will provide the necessary control action errors to train the neural network controller, as shown in Figure 10.29. [Pg.361]
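
A sketch of this arrangement, assuming a PyTorch-style setup: the already-trained plant network is frozen, and the squared system error r(kT) - y(kT) is back-propagated through it so that the resulting gradients reach, and train, the controller network. All dimensions and data here are illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: the plant model maps (state, control action) -> output y(kT).
plant_model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
controller = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
for p in plant_model.parameters():
    p.requires_grad_(False)                 # the plant model is already trained; freeze it

opt = torch.optim.Adam(controller.parameters(), lr=1e-3)
state = torch.randn(64, 2)                  # logged plant states (illustrative)
r = torch.randn(64, 1)                      # desired outputs r(kT)

for step in range(1000):
    u = controller(torch.cat([state, r], dim=1))     # proposed control action
    y = plant_model(torch.cat([state, u], dim=1))    # predicted plant response y(kT)
    loss = ((r - y) ** 2).mean()            # system error, squared and averaged
    opt.zero_grad()
    loss.backward()                         # error back-propagates through the plant...
    opt.step()                              # ...and updates only the controller
```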

As a predictive tool, an ANN is effective only within the range of the input variables on which it was trained; predictions that fall outside this range must be considered of questionable validity. Even so, whenever experimental data are available for validation, neural networks can be put to effective use. Since an extensive body of experimental data on polymers has been published in the literature, the application of neural networks as a predictive tool for physical, thermodynamic, and other fluid properties is promising. It is a novel technique that will continue to be used, and it deserves additional investigation and development. [Pg.32]
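
A simple guard for this, sketched under the assumption that the training inputs are available as an array; anything more elaborate (convex hulls, leverage statistics) would follow the same idea.

```python
import numpy as np

def in_trained_range(X_train, x_query):
    """True if every component of x_query lies within the training range."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    return bool(np.all((x_query >= lo) & (x_query <= hi)))

# Predictions for queries failing this check should be flagged as extrapolations.
```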

Two models of practical interest using quantum chemical parameters were developed by Clark et al. [26, 27]. Both studies were based on 1085 molecules and 36 descriptors calculated with the AM1 method after structure optimization and electron density calculation. An initial set of descriptors was selected with a multiple linear regression model and further optimized by trial-and-error variation. The second study obtained a standard error of 0.56 for the 1085 compounds; it also estimated the reliability of the neural network predictions by analyzing the standard deviation of the predictions of an ensemble of 11 networks trained on different randomly selected subsets of the initial training set [27]. [Pg.385]
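
The ensemble idea can be sketched as follows. Only the count of 11 networks and the random-subset scheme come from the study; the network size, subset fraction, and the use of scikit-learn's MLPRegressor are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def ensemble_predict(X_train, y_train, X_query, n_nets=11, frac=0.8, seed=0):
    """Train n_nets networks on random subsets of the training set; the spread
    of their predictions serves as a reliability estimate for the mean."""
    rng = np.random.default_rng(seed)
    preds = []
    for k in range(n_nets):
        idx = rng.choice(len(X_train), int(frac * len(X_train)), replace=False)
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=k)
        net.fit(X_train[idx], y_train[idx])
        preds.append(net.predict(X_query))
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)   # estimate and its uncertainty
```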

Especially in the last few years, the number of applications of neural networks has grown exponentially. One reason for this is undoubtedly the fact that in many applications neural networks outperform the traditional (linear) techniques. The large number of samples needed to train neural networks certainly remains a serious bottleneck, and the validation of the results is a further issue of concern. [Pg.680]

As an example of importance-weighting ideas, consider the situation in which the actual interest is in the hydration free energies of the distinct conformational states of a complex solute. Is there a good reference system to use to obtain comparative thermodynamic properties for all conformers? There is a theoretical answer that is analogous to the Hebb training rule of neural networks [36, 37], and generalizes a procedure of [21]... [Pg.334]

Huang and Tang [49] trained a neural network on data relating several qualities of polymer yarn to ten process parameters. They then combined this ANN with a genetic algorithm to find parameter values that optimize quality. Because the relationships between processing conditions and polymer properties are poorly understood, this combination of AI techniques is a potentially productive way to proceed. [Pg.378]
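
A hedged sketch of such a combination: a simple genetic algorithm searches the ten process parameters, using a trained ANN (here an arbitrary callable, quality_model) as the fitness function. The population size, mutation scale, and crossover scheme are invented, not taken from Huang and Tang.

```python
import numpy as np

def ga_optimize(quality_model, n_params=10, pop=50, gens=100, seed=0):
    """Genetic algorithm searching for process parameters (scaled to [0, 1])
    that maximize the quality predicted by a trained ANN surrogate."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(size=(pop, n_params))
    for g in range(gens):
        fitness = quality_model(P)                      # ANN predicts yarn quality
        parents = P[np.argsort(fitness)[::-1][: pop // 2]]      # keep the fitter half
        a, b = parents[rng.integers(len(parents), size=(2, pop))]
        P = np.where(rng.random((pop, n_params)) < 0.5, a, b)   # uniform crossover
        P = np.clip(P + rng.normal(0.0, 0.02, P.shape), 0.0, 1.0)  # mutation
    return P[np.argmax(quality_model(P))]
```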

Another method was proposed by Diederichs et al. (1998). This method is very simple in the sense that it trains a neural network using amino acid sequences as inputs and, as outputs, the z coordinates of the Cα atoms in a coordinate frame with the outer membrane in the xy plane. [Pg.297]
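
One plausible input encoding for such a network, sketched here as an assumption (the paper's actual representation may differ): a one-hot window of residues around each sequence position, from which a regression network would predict the z coordinate of the central Cα atom.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode_window(seq, center, half=7):
    """One-hot encode a window of residues around position `center`."""
    vec = np.zeros((2 * half + 1, len(AA)))
    for j, pos in enumerate(range(center - half, center + half + 1)):
        if 0 <= pos < len(seq) and seq[pos] in AA:
            vec[j, AA.index(seq[pos])] = 1.0
    return vec.ravel()

# X = np.array([encode_window(seq, i) for i in range(len(seq))])  # network inputs
# A regressor fitted on (X, z) would then predict z for new sequences.
```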

A set of neural networks has been trained to identify seven classes of petroleum hydrocarbon based fuels from their fluorescence emission spectra; this technique correctly identified at least 90% of the test spectra (Andrews and Lieberman 1994). [Pg.155]
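
A minimal sketch of spectrum classification in this spirit, using synthetic stand-in spectra and scikit-learn; the real study's networks and preprocessing are not described here, so everything below is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: 700 synthetic "spectra" of 100 wavelengths over 7 fuel classes.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(7), 100)
spectra = rng.normal(size=(700, 100)) + 0.5 * labels[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"fraction of test spectra correctly identified: {clf.score(X_te, y_te):.2f}")
```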

The 13C chemical shifts of 29 alkyl (Me, Et) substituted oxanes (83OMR94) were used to train a neural network to simulate 13C NMR spectra. The network, thus trained, was employed to simulate the 13C NMR spectra of 2-Et-, trans-3,5-di-Me-, and 2,2,6-tri-Me-oxanes, compounds that exist >95% in one preferred chair conformation. In one case, the deviation for one methyl substituent proved considerable and was attributed to other conformers participating in the conformational equilibrium (94ACA221). [Pg.229]

Jouyban et al. (2004) applied ANNs to calculate the solubility of drugs in water-cosolvent mixtures, using 35 experimental datasets. The networks employed were feed-forward networks trained by back-propagation of errors, with one hidden layer. The topology of the neural network was optimized to a 6-5-1 architecture. All data points in each set were used to train the ANN, and the solubilities were back-calculated employing the trained networks. The difference between calculated solubilities and experimental... [Pg.55]
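
The 6-5-1 topology is concrete enough to sketch; only the architecture (6 inputs, 5 hidden neurons, 1 output) is taken from the text, while the data, activation function, and solver below are invented stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# 6-5-1 topology: 6 inputs, one hidden layer of 5 neurons, 1 output.
# The data below are invented stand-ins for the experimental solubilities.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=200)

net = MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                   solver="lbfgs", max_iter=5000).fit(X, y)
y_back = net.predict(X)        # back-calculate solubilities with the trained network
```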

