Big Chemical Encyclopedia


Artificial neural network training

Artificial neural networks, training of 760, e311... [Pg.960]

M-CASE/BAIA (see text). BP-ANN = three-layer feedforward artificial neural network trained by the backpropagation algorithm, PANN = probabilistic artificial neural network, CPANN = counterpropagation artificial neural network. [Pg.662]
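The BP-ANN mentioned above, a three-layer feedforward network trained by backpropagation, can be sketched in a few lines of numpy. This is a generic illustration, not code from the cited study; the XOR toy data, layer sizes and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, a classic test for a network with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Three layers: input (2 units), hidden (8 units), output (1 unit).
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)    # network output

initial_loss = float(np.mean((forward(X)[1] - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation: propagate the output error back through the layers
    # via the chain rule, then take a gradient-descent step on each weight.
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error back-propagated to hidden
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

final_loss = float(np.mean((forward(X)[1] - y) ** 2))
print(f"training loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Full-batch updates are used here for brevity; the counterpropagation and probabilistic variants listed above differ in architecture and training rule, not in this basic forward/backward structure.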

The chapter presents a brief overview of current research at Clariant on V2O5/TiO2 catalysts for the oxidation of o-xylene to phthalic anhydride. Phthalic anhydride is produced in tubular, salt-cooled reactors with a capacity of about 5 million tons per annum. Industrial processes vary widely in feed composition, air flow rate and reactor dimensions, and the phthalic anhydride catalyst portfolio has to match this variety. Catalyst active-mass compositions have been optimized at Clariant for these different industrial processes using artificial neural networks trained on high-throughput data. Fundamental pilot-reactor research unravelling new details of the reaction network of o-xylene oxidation led to an improved kinetic reactor model, which allowed further optimization of the state-of-the-art multi-layer catalyst system for maximum phthalic anhydride yield. [Pg.302]

Artificial neural networks train and generalize best when presented with good examples of the classes they are trying to model, especially many examples showing variation representative of the classes the net must discriminate. Herbarium specimens can provide much data of this kind and are also a primary source of information for taxonomists. The use of neural networks as tools for herbarium systematics is, therefore, to be... [Pg.220]

Meissner, M., Schmuker, M., Schneider, G. Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training. BMC Bioinform. 7, 125 (2006)... [Pg.14]
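The OPSO method cited above builds on plain particle swarm optimization (PSO), which can be sketched as follows. This is generic PSO on a toy objective, not the OPSO algorithm itself (OPSO additionally optimizes the swarm's own meta-parameters); the swarm size, coefficients and objective function are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy objective: the sphere function, minimized at the origin. In ANN
# training, this would be the network's error as a function of its weights.
def loss(p):
    return np.sum(p ** 2, axis=-1)

n_particles, dim = 20, 5
pos = rng.uniform(-5, 5, size=(n_particles, dim))   # particle positions
vel = np.zeros_like(pos)                            # particle velocities
pbest = pos.copy()                                  # personal bests
pbest_val = loss(pos)
gbest = pbest[pbest_val.argmin()].copy()            # global best

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and attraction weights (typical values)
for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Each particle is pulled toward its own best and the swarm's best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = loss(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

best = float(pbest_val.min())
print(f"best loss after PSO: {best:.4f}")
```

Unlike backpropagation, PSO needs no gradients, which is why it is attractive for training networks with non-differentiable components or for escaping poor local minima.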

Recently, a new approach called artificial neural networks (ANNs) has begun assisting engineers and scientists in their assessment of fuzzy information. Polymer scientists often face situations where the rules governing a particular system are unknown or difficult to use, and it frequently becomes an arduous task to develop functional forms or empirical equations to describe a phenomenon. Most of these complexities can be overcome with an ANN approach because of its ability to build an internal model based solely on exposure to a training environment. The fault tolerance of ANNs has proved very advantageous in predicting the physical properties of polymers. This chapter presents a few cases where the authors have successfully implemented an ANN-based approach for empirical modeling; these are by no means exhaustive. [Pg.1]

Even so, artificial neural networks exhibit many brain-like characteristics. For example, during training, neural networks may construct an internal mapping or model of an external system; thus, they are assumed to make sense of the problems presented to them. For the network to construct a robust internal model, the external system presented to it must contain meaningful information. In general, the following anthropomorphic perspectives can be maintained while preparing the data ... [Pg.8]

Srinivasulu S, Jain A (2006) A comparative analysis of training methods for artificial neural network rainfall-runoff models. Appl Soft Comput 6:295-306

Aqueous solubility is selected to demonstrate the E-state application in QSPR studies. Huuskonen et al. modeled the aqueous solubility of 734 diverse organic compounds with multiple linear regression (MLR) and artificial neural network (ANN) approaches [27]. The set of structural descriptors comprised 31 E-state atomic indices and three indicator variables for pyridine, aliphatic hydrocarbons and aromatic hydrocarbons, respectively. The dataset of 734 chemicals was divided into a training set (n = 675), a validation set (n = 38) and a test set (n = 21). A comparison of the MLR results (training, r² = 0.94, s = 0.58; validation, r² = 0.84, s = 0.67; test, r² = 0.80, s = 0.87) and the ANN results (training, r² = 0.96, s = 0.51; validation, r² = 0.85, s = 0.62; test, r² = 0.84, s = 0.75) indicates a small improvement for the neural network model with five hidden neurons. These QSPR models may be used for a fast and reliable computation of the aqueous solubility of diverse organic compounds. [Pg.93]
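The training/validation/test workflow described above can be illustrated with a small numpy sketch. The data here are synthetic stand-ins (the real study used measured solubilities and E-state descriptors); only the split sizes echo the Huuskonen setup, and the MLR step is solved by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: n compounds described by d descriptors with a linear
# relationship to a log-solubility-like target plus noise. The numbers
# 734/675/38/21 mirror the split sizes quoted in the text; everything else
# is illustrative.
n, d = 734, 34
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.5 * rng.normal(size=n)

# Randomly partition into training (675), validation (38) and test (21).
idx = rng.permutation(n)
tr, va, te = idx[:675], idx[675:713], idx[713:]

# Multiple linear regression: least-squares fit on the training set only.
w, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)

def r2(X_s, y_s):
    """Coefficient of determination of the fitted model on a subset."""
    resid = y_s - X_s @ w
    return 1.0 - resid.var() / y_s.var()

r2_train, r2_valid, r2_test = r2(X[tr], y[tr]), r2(X[va], y[va]), r2(X[te], y[te])
print(f"train r2={r2_train:.2f}  valid r2={r2_valid:.2f}  test r2={r2_test:.2f}")
```

Reporting r² and s separately for each split, as the quoted study does, is what reveals whether an apparent improvement (e.g. ANN over MLR) survives on data the model never saw.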

Overfitting arises when the network learns for too long. For most students, the longer they are trained the more they learn, but artificial neural networks are different. Since networks grow neither bored nor tired, it is a little odd that their performance can begin to degrade if training is excessive. To understand this apparent paradox, we need to consider how a neural network learns. [Pg.37]
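The degradation described here is overfitting, and the standard remedy is early stopping: monitor the error on a held-out validation set during training and stop once it ceases to improve. A minimal numpy sketch, with toy data, an arbitrary small network and an illustrative patience threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy 1-D regression: a small network can easily overfit 20 points.
x = np.linspace(-1, 1, 40)[:, None]
y = np.sin(3 * x) + 0.3 * rng.normal(size=(40, 1))
x_tr, y_tr = x[::2], y[::2]      # training half
x_va, y_va = x[1::2], y[1::2]    # held-out validation half

W1 = rng.normal(size=(1, 30)); b1 = np.zeros(30)
W2 = rng.normal(scale=0.3, size=(30, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

best_va, patience, bad = np.inf, 200, 0
lr = 0.05
for epoch in range(20000):
    h, out = forward(x_tr)
    # Gradient of mean-squared error, back-propagated through the layers.
    grad = 2 * (out - y_tr) / len(x_tr)
    dh = grad @ W2.T * (1 - h ** 2)
    W2 -= lr * h.T @ grad; b2 -= lr * grad.sum(axis=0)
    W1 -= lr * x_tr.T @ dh; b1 -= lr * dh.sum(axis=0)
    # Early stopping: track validation error, stop when it stalls or rises.
    va = float(np.mean((forward(x_va)[1] - y_va) ** 2))
    if va < best_va - 1e-6:
        best_va, bad = va, 0
    else:
        bad += 1
        if bad >= patience:
            break

print(f"stopped at epoch {epoch}, best validation MSE {best_va:.3f}")
```

Training error would keep shrinking indefinitely; it is the validation curve turning upward that signals the network has begun memorizing noise rather than learning the underlying function.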

A classical Hansch approach and an artificial neural network approach were applied to a training set of 32 substituted phenylpiperazines characterized by their affinity for the 5-HT1A receptor (5-HT1A-R) and the generic α1-AR [91]. The study was aimed at evaluating the structural requirements for 5-HT1A/α1 selectivity. Each chemical structure was described by six physicochemical parameters and three indicator variables. As electronic descriptors, the field and resonance constants of Swain and Lupton were used. Furthermore, the van der Waals volumes were employed as steric parameters. The hydrophobic effects exerted by the ortho- and meta-substituents were measured using the Hansch π-ortho and π-meta constants [91]. The resulting models provided a significant correlation of electronic, steric and hydrophobic parameters with the biological affinities. Moreover, it was inferred that the... [Pg.169]

Fig. 7. Artificial neural network model. Bioactivities and descriptor values are the input and a final model is the output. Numerical values enter through the input layer, pass through the neurons, and are transformed into output values; the connections (arrows) are the numerical weights. As the model is trained on the Training Set, the system-dependent variables of the neurons and the weights are determined.

See other pages where Artificial neural network training is mentioned: [Pg.399]    [Pg.147]    [Pg.364]    [Pg.548]    [Pg.272]    [Pg.105]    [Pg.263]    [Pg.455]    [Pg.500]    [Pg.652]    [Pg.662]    [Pg.115]    [Pg.9]    [Pg.22]    [Pg.27]    [Pg.232]    [Pg.266]    [Pg.199]    [Pg.483]    [Pg.268]    [Pg.180]    [Pg.367]    [Pg.257]    [Pg.474]    [Pg.705]    [Pg.467]    [Pg.43]    [Pg.158]    [Pg.178]    [Pg.179]    [Pg.181]    [Pg.184]    [Pg.312]    [Pg.269]    [Pg.465]    [Pg.254]    [Pg.259]
See also in source #XX -- [Pg.349]







Artificial Neural Network

Artificial network

Artificial neural networks based models training

Neural artificial

Neural network

Neural networking

The Training of Artificial Neural Networks

Training an Artificial Neural Network

Training network

Training neural network
