Big Chemical Encyclopedia

Artificial neural networks testing

Terry, P.A. and D.M. Himmelblau, Data Rectification and Gross Error Detection in a Steady-State Process via Artificial Neural Networks, Industrial and Engineering Chemistry Research, 32, 1993, 3020-3028. (Neural networks, measurement test)... [Pg.2545]

Aqueous solubility is selected to demonstrate the E-state application in QSPR studies. Huuskonen et al. modeled the aqueous solubility of 734 diverse organic compounds with multiple linear regression (MLR) and artificial neural network (ANN) approaches [27]. The set of structural descriptors comprised 31 E-state atomic indices and three indicator variables for pyridine, aliphatic hydrocarbons and aromatic hydrocarbons, respectively. The dataset of 734 chemicals was divided into a training set (n=675), a validation set (n=38) and a test set (n=21). A comparison of the MLR results (training, r²=0.94, s=0.58; validation, r²=0.84, s=0.67; test, r²=0.80, s=0.87) and the ANN results (training, r²=0.96, s=0.51; validation, r²=0.85, s=0.62; test, r²=0.84, s=0.75) indicates a small improvement for the neural network model with five hidden neurons. These QSPR models may be used for a fast and reliable computation of the aqueous solubility of diverse organic compounds. [Pg.93]
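The MLR half of such a QSPR comparison can be sketched in a few lines. The snippet below is a minimal illustration, not the Huuskonen model: it fits an ordinary least-squares regression to synthetic stand-in descriptors (the descriptor values, coefficients, and noise level are all invented for demonstration) and reports r² and the standard error s, the two statistics quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for E-state descriptors (X) and log-solubility values (y);
# the true coefficients and 0.1 noise level are arbitrary for demonstration.
n, p = 100, 5
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8]) + 0.1 * rng.normal(size=n)

# Multiple linear regression via least squares, with an intercept column.
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Squared correlation (r²) and standard error of the estimate (s).
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
s = np.sqrt(np.sum((y - y_hat) ** 2) / (n - p - 1))
print(r2, s)
```

In a real study the same train/validation/test statistics would then be compared against an ANN fitted to the identical descriptor set, as in the passage above.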

Chen et al. (2008) employed a commercial electronic tongue, based on an array of seven sensors, to classify 80 green tea samples on the basis of their taste grade, which is usually assessed by a panel test. PCA was employed as an explorative tool, while k-NN and a back-propagation artificial neural network (BP-ANN) were used for supervised classification. Both techniques provided excellent results, achieving 100% prediction ability on a test set composed of 40 samples (one-half of the total number). In cases like this, when a simple technique, such as k-NN, is able to supply excellent outcomes, the utilization of a complex technique, like BP-ANN, does not appear justified from a practical point of view. [Pg.105]
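A k-NN classifier of the kind used in that study is simple enough to sketch directly. The example below is a generic illustration with invented two-dimensional "sensor" responses, not the published electronic-tongue data; it classifies a query sample by majority vote among its k nearest training samples.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
    nearest = y_train[np.argsort(d)[:k]]         # labels of k closest points
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]               # majority label

# Toy "sensor array" responses for two taste grades (invented values).
X_train = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],   # grade 0
                    [0.80, 0.90], [0.90, 0.80], [0.85, 0.85]])  # grade 1
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.12, 0.18])))  # 0
print(knn_predict(X_train, y_train, np.array([0.90, 0.90])))  # 1
```

The appeal noted in the passage is visible here: the whole classifier is a distance sort and a vote, with no training phase at all.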

In the rough experimental space, the distance between discrete levels of the experimental variables is relatively large. After testing three to four catalyst generations, different data-processing methods, such as general statistical approaches or artificial neural networks (ANNs), can be applied to determine the contribution of each variable to the overall performance or to establish the Activity-Composition Relationship (ACR). [Pg.304]

How can we fine-tune the model? Well, remember that we have prepared a different set of samples to validate the model. You can use this set to evaluate which model predicts best. If you have a sufficient number of samples, it would be even better to prepare a small testing or "control" set (different from the calibration and validation sets and not too large) to visualise which model seems best and then proceed with the validation set. This strategy has to be applied when developing artificial neural networks (see Chapter 5). [Pg.205]
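The calibration/validation/control splitting described above can be sketched as follows; the split fractions, sample count, and seed are arbitrary choices for illustration.

```python
import numpy as np

def three_way_split(n_samples, frac_val=0.15, frac_test=0.15, seed=0):
    """Shuffle sample indices and split them into calibration (training),
    validation, and a small independent test ("control") set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_val = int(n_samples * frac_val)
    n_test = int(n_samples * frac_test)
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]
    return train_idx, val_idx, test_idx

# Example with 734 samples (the size used in the solubility study above).
train, val, test = three_way_split(734, seed=42)
print(len(train), len(val), len(test))  # 514 110 110
```

Keeping the three index sets disjoint is the whole point: the control set is only looked at to choose between candidate models, and the validation set is then used once for the final assessment.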

An artificial neural network (ANN) model was developed to predict the structure of mesoporous materials from the composition of their synthesis mixtures. The predictive ability of the networks was tested by comparing the mesophase structures predicted by the model with those actually determined by XRD. Among the various ANN models available, three-layer feed-forward neural networks with one hidden layer are known to be universal approximators [11, 12]. The neural network retained in this work is described by the following set of equations, which correlate the network output S (here, the structure of the material) to the input variables U, which represent the normalized composition of the synthesis mixture ... [Pg.872]
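Since the equations themselves are elided in this excerpt, the following is only a generic sketch of a three-layer feed-forward network of the type described: a single hidden layer maps the normalized composition U to structure scores S. The layer sizes, random weights, and tanh activation are assumptions for illustration, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed layer sizes: 4 composition variables in, 3 structure classes out.
n_in, n_hidden, n_out = 4, 6, 3
W1 = rng.normal(size=(n_hidden, n_in))   # input-to-hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))  # hidden-to-output weights
b2 = np.zeros(n_out)

def forward(U):
    """Map a normalized composition vector U to structure scores S."""
    h = np.tanh(W1 @ U + b1)   # hidden layer with a sigmoid-type activation
    return W2 @ h + b2         # linear output layer

U = np.array([0.4, 0.3, 0.2, 0.1])  # normalized mixture composition (sums to 1)
S = forward(U)
print(S.shape)  # (3,)
```

In the published work the weights would of course be obtained by training against XRD-determined structures rather than drawn at random.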

Cherkasov, 2005a (79), for descriptors only. Artificial neural networks (ANN), 44 (77). Random peptides chosen according to two amino acid frequency distributions; Sets A and B contained 933 and 500 peptides, respectively (see text for details, unpublished data). Training and validation within one set, independent testing on the second set (n=1433). Set A models predicted activity with up to 83% accuracy on Set B; Set B models predicted up to 43% accuracy on Set A (see text for details). nd... [Pg.146]

Artificial neural networks (ANNs) were effectively set aside for 15 years after a 1969 study by Minsky and Papert demonstrated their failure to correctly model a simple exclusive OR (XOR) function. The XOR function describes the result of an operation involving two bits (1 or 0). A simple OR function produces a value of 1 if either bit or both bits have a value of 1. The XOR differs from the OR function in the output of an operation on two bits of value 1: the XOR function yields a 0, while the OR function yields a 1. Interest in ANNs resumed in the 1980s after modifications were made to the layering of their neurons that allowed them to overcome the XOR test as well as a wide variety of other non-linear modeling challenges. [Pg.368]
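The XOR limitation is easy to demonstrate directly. The sketch below shows, by brute-force search over a coarse weight grid, that no single-layer threshold unit reproduces XOR, while a hand-built two-layer network (one hidden layer of two units) does; the specific weights are chosen for illustration.

```python
import numpy as np

# XOR truth table: two input bits, one output bit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def step(z):
    return (z > 0).astype(int)  # hard threshold activation

def single_layer_can_fit_xor():
    """Brute-force a coarse grid of (w1, w2, b) for step(w.x + b);
    XOR is not linearly separable, so no setting reproduces it."""
    grid = np.arange(-2, 2.5, 0.5)
    for w1 in grid:
        for w2 in grid:
            for b in grid:
                if np.array_equal(step(X @ [w1, w2] + b), y):
                    return True
    return False

# Two-layer solution: h1 fires on OR, h2 fires on AND,
# and the output computes h1 AND NOT h2.
W_hidden = np.array([[1.0, 1.0], [1.0, 1.0]])  # rows: h1 (OR), h2 (AND)
b_hidden = np.array([-0.5, -1.5])
w_out = np.array([1.0, -2.0])
b_out = -0.5

def two_layer_xor(x):
    h = step(W_hidden @ x + b_hidden)
    return int(step(w_out @ h + b_out))

print(single_layer_can_fit_xor())        # False
print([two_layer_xor(x) for x in X])     # [0, 1, 1, 0]
```

The hidden layer is exactly the "modification to the layering of the neurons" the passage refers to: it gives the network a non-linear internal representation that a single threshold unit lacks.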

A diverse set of 4173 compounds was used by Karthikeyan et al. [133] to derive their models with a large number of 2D and 3D descriptors (all calculated by MOE following 3D-structure generation by Concord [134]) and an artificial neural network. The authors found that 2D descriptors provided better prediction accuracy (RMSE = 48-50°C) compared to models developed using 3D indices (RMSE = 55-56°C) for both training and test sets. The use of a combined 2D and 3D dataset did not improve the results. [Pg.262]

Vitamins B1, B2, B6. Tablet dissolution tests. Fluorimetry. Not relevant. 135. Chemometrics involving an artificial neural network calibration / zone stopping at the detector [156]... [Pg.277]

In parallel to the SUBSTRUCT analysis, a three-layered artificial neural network was trained to classify CNS+ and CNS− compounds. As mentioned previously, for any classification the descriptor selection is a crucial step. Ghose and Crippen published a compilation of 120 different descriptors, which were used to calculate AlogP values as well as drug-likeness [53, 54]. Here, 92 of the 120 descriptors and the same datasets for training and tests as for the SUBSTRUCT algorithm were used. The network consisted of 92 input neurons, five hidden neurons, and one output neuron. [Pg.1794]

Application of artificial neural networks (ANNs) for modelling the kinetics of a catalytic hydrogenation reaction in a gas-liquid-solid system has been studied and discussed. The kinetics of the hydrogenation of 2,4-DNT over a palladium-on-alumina catalyst has been described with feedforward neural networks of different architectures. A simple experimental procedure to supply learning data has been proposed. The accuracy and flexibility of the hybrid first-principles/neural-network model have been tested and compared with those of the classical model. [Pg.379]

