Big Chemical Encyclopedia


Neural training methods

After selection of descriptors/NN training, the best networks were applied to the prediction of 259 chemical shifts from 31 molecules (prediction set), which were not used for training. The mean absolute error obtained for the whole prediction set was 0.25 ppm, and for 90% of the cases the mean absolute error was 0.19 ppm. Some stereochemical effects could be correctly predicted. In terms of speed, the neural network method is very fast - the whole process to predict the NMR shifts of 30 protons in a molecule with 56 atoms, starting from an MDL Molfile, took less than 2 s on a common workstation. [Pg.527]

Srinivasula S, Jain A (2006) A comparative analysis of training methods for artificial neural network rainfall-runoff models. Appl Soft Comput 6:295-306 [Pg.146]

Not all neural networks are the same: their connections, elemental functions, training methods, and applications may differ in significant ways. The types of elements in a network and the connections between them are referred to as the network architecture. Commonly used elements in artificial neural networks will be presented in Chapter 2. The multilayer perceptron, one of the most commonly used architectures, is described in Chapter 3. Other architectures, such as radial basis function networks and self-organizing maps (SOMs), also called Kohonen architectures, will be described in Chapter 4. [Pg.17]

Coupled closely with each network architecture is its training method. Training (or, as it is sometimes called, learning) is a means of adjusting the connections between elements in a neural network in response to input data so that a given task can be performed. ... [Pg.17]
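The idea of training as iterative adjustment of connection weights can be sketched as follows. This is a minimal illustration for a single logistic unit, not any specific method from the excerpts; the learning rate and target values are arbitrary choices for the demonstration.

```python
import numpy as np

def train_step(weights, x, target, lr=0.1):
    """One training step: nudge the weights to reduce the error on (x, target)."""
    output = 1.0 / (1.0 + np.exp(-np.dot(weights, x)))  # logistic unit
    error = target - output
    # Gradient-descent update for squared error on a logistic unit
    return weights + lr * error * output * (1.0 - output) * x

w = np.zeros(3)
x = np.array([1.0, 0.5, -0.2])   # input (first component acts as a bias term)
for _ in range(500):
    w = train_step(w, x, target=1.0)
out = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # output moves toward the target
```

After repeated presentations of the same example, the unit's output approaches the target value, which is the essence of weight adjustment in response to data.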

The idealized step function (Heaviside function) described above is only one of many functions used in basic neural network units. One problem with this function is that it is discontinuous and does not have a continuous derivative; derivatives are at the heart of many training methods, a topic to be discussed later. An approximation to a step function that is easier to handle mathematically is the logistic function, shown in Figure 2.6 below. [Pg.24]
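The contrast between the two functions can be made concrete. Below, the Heaviside step is discontinuous at zero (its derivative is zero everywhere else and undefined at the jump), while the logistic function is smooth and has a simple closed-form derivative, which is what gradient-based training methods exploit:

```python
import numpy as np

def heaviside(z):
    """Idealized step: 0 for z < 0, 1 for z >= 0; discontinuous at z = 0."""
    return np.where(z >= 0, 1.0, 0.0)

def logistic(z):
    """Smooth approximation to the step function, differentiable everywhere."""
    return 1.0 / (1.0 + np.exp(-z))

def logistic_deriv(z):
    """d/dz logistic(z) = logistic(z) * (1 - logistic(z))."""
    s = logistic(z)
    return s * (1.0 - s)
```

The logistic function passes through 0.5 at z = 0, where its derivative reaches its maximum value of 0.25; for large |z| it saturates toward 0 or 1, approaching the step function.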

The neural network method is often described as a data-driven method: the weights are adjusted on the basis of data. In other words, neural networks learn from training examples and can generalize beyond the training data. Therefore, neural networks are often applied to domains where one has little or incomplete understanding of the problem to be solved, but where training data are readily available. Protein secondary structure prediction is one such example. Numerous rules and statistics have been accumulated for protein secondary structure prediction over the last two decades. Nevertheless, these... [Pg.157]

ABSTRACT: The safety of an oil depot is threatened by many factors, the results of a safety evaluation are limited by the evaluation method, and the accuracy of those results is strongly affected by subjective human factors. To overcome these defects, this paper analyzes the factors influencing oil depot safety, builds a hierarchical safety evaluation model of the oil depot using the BP neural network method, and trains the evaluation model on sample data. The evaluation results show that the BP neural network method is well suited to evaluating the safety status of an oil depot. [Pg.1205]

T. Takeo, P.K. Mahanta, Why C.T.C. tea is less fragrant. Two and a Bud 30, 76-77 (1983)
B.G. Kermani, S.S. Schiffman, H.T. Nagle, Performance of the Levenberg-Marquardt neural network training method in electronic nose applications. Sens. Actuators B Chem. 110, 13-22 (2005) [Pg.114]

In the second step, each ANN so generated is trained, validated, and tested on the dataset. This procedure is performed as follows. First, the original dataset is shuffled and partitioned into three sets (in our experiments, 60%, 20%, and 20% of the entire dataset, respectively). Then, the first set is used to train and validate the neural network topology by means of a 10-fold cross validation. The training method we use in our experiments is Resilient Propagation [20], with a stop condition on a training error threshold. The second and third sets are then used to calculate the validation error and the test error (together with the confusion matrix), respectively. This information (or, if needed, a metric of network complexity) is then fed back to the outer loop and used to calculate the fitness functions to be optimized by the MOEA. [Pg.56]
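The shuffle-and-partition step described above can be sketched as follows. This is a generic illustration of a 60/20/20 split, not the authors' code; the fixed seed is an assumption added so the split is reproducible.

```python
import random

def shuffle_split(dataset, fractions=(0.6, 0.2, 0.2), seed=42):
    """Shuffle the dataset, then partition it into train/validation/test sets."""
    data = list(dataset)
    random.Random(seed).shuffle(data)   # seeded RNG for a reproducible split
    n = len(data)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

train, val, test = shuffle_split(range(100))
```

Each example lands in exactly one of the three sets, so no test-set example influences training: the separation that makes the reported test error an honest estimate of generalization.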

Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) stood as the set of molecular descriptors. The task was to model the Corticosteroid Binding Globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]
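A feed-forward network trained with back-propagation, as used in the example above, can be sketched in a few lines. This is a minimal illustration on synthetic data (not the steroid dataset); the network size, learning rate, and toy target function are all arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = x1 * x2 on random inputs (illustrative only)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

# One hidden layer of tanh units, linear output unit
W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden-layer activations
    return h, h @ W2 + b2             # network output

lr = 0.1
_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)      # mean squared error before training
for _ in range(500):
    h, out = forward(X)
    grad_out = 2 * (out - y) / len(X)             # dLoss/d(output)
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)   # output-layer gradients
    grad_h = grad_out @ W2.T * (1 - h ** 2)       # back-propagate through tanh
    gW1 = X.T @ grad_h;  gb1 = grad_h.sum(0)      # hidden-layer gradients
    W2 -= lr * gW2; b2 -= lr * gb2                # gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1
_, out = forward(X)
loss = np.mean((out - y) ** 2)        # mean squared error after training
```

The "learning rule" is just the chain rule: the output error is propagated backward through each layer to obtain the weight gradients, and the weights are moved a small step against them.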

Several nonlinear QSAR methods have been proposed in recent years. Most of these methods are based on either ANN or machine learning techniques. Both back-propagation (BP-ANN) and counterpropagation (CP-ANN) neural networks [33] were used in these studies. Because optimization of many parameters is involved in these techniques, the speed of the analysis is relatively slow. More recently, Hirst reported a simple and fast nonlinear QSAR method in which the activity surface was generated from the activities of training set compounds based on some predefined mathematical functions [34]. [Pg.313]

