Big Chemical Encyclopedia


Neural network training data

Cigizoglu, H.K. and O. Kisi, Flow prediction by two back propagation techniques using k-fold partitioning of neural network training data. Nordic Hydrol. 36 (2005) (in press). [Pg.430]

In an attempt to compensate for poor long-term reproducibility in a long-term identification study, Chun et al.128 applied ANNs to PyMS spectra collected from strains of Streptomyces six times over a 20-month period. Direct comparison of the six data sets by the conventional approach of HCA was unsuccessful for strain identification, but a neural network trained on spectra from each of the first three data sets was able to identify isolates in those three data sets and in the three subsequent data sets. [Pg.333]

Huang and Tang49 trained a neural network with data relating to several qualities of polymer yarn and ten process parameters. They then combined this ANN with a genetic algorithm to find parameter values that optimize quality. Because the relationships between processing conditions and polymer properties are poorly understood, this combination of AI techniques is a potentially productive way to proceed. [Pg.378]
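The ANN-plus-genetic-algorithm combination can be sketched as follows. This is a minimal illustration, not the authors' implementation: a toy quadratic surrogate stands in for the trained ANN quality model, and the ten process parameters are assumed to be scaled to [0, 1].

```python
import random

def surrogate_quality(params):
    """Stand-in for a trained ANN mapping ten process parameters to a
    yarn-quality score (toy function with its peak at all-0.5)."""
    return -sum((p - 0.5) ** 2 for p in params)

def genetic_optimize(fitness, n_params=10, pop_size=30, generations=60, seed=1):
    """Simple GA: truncation selection, one-point crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_params)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_params)            # mutate one gene
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_optimize(surrogate_quality)
print(round(surrogate_quality(best), 3))
```

In the paper's setting, `surrogate_quality` would be replaced by a forward pass through the trained network, so the GA searches processing conditions without running further experiments.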

Here, neural network techniques are used to model these process-model mismatches. The neural network is fed with various input data to predict the process-model mismatch (for each state variable) at the present discrete time. The general input-output map for the neural network training can be seen in Figure 12.2. The data are fed in a moving-window scheme: the window is moved forward by one discrete-time interval at a time until all of the data have been fed into the network. The whole batch of data is fed into the network repeatedly until the required error criterion is met. [Pg.369]
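The moving-window feeding scheme can be sketched as follows. This is a minimal illustration with a hypothetical mismatch series; in the chapter, the network inputs also include other state and control variables.

```python
def moving_window_pairs(mismatch, window):
    """Build input/output pairs for mismatch-model training.

    Each input is a window of past process-model mismatch values; the
    target is the mismatch at the next discrete-time step. The window
    slides forward one interval at a time until the data are exhausted.
    """
    X, y = [], []
    for t in range(len(mismatch) - window):
        X.append(mismatch[t:t + window])   # inputs: one window of history
        y.append(mismatch[t + window])     # target: mismatch at present step
    return X, y

# Hypothetical mismatch series for one state variable.
series = [0.1, 0.15, 0.12, 0.2, 0.18, 0.25, 0.22]
X, y = moving_window_pairs(series, window=3)
print(len(X), len(X[0]), y[0])  # 4 3 0.2
```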

The demonstration has also been done by means of least-squares smoothing or by neural-network training and validation. Both of these methods have yielded large amounts of forecast data of moderate accuracy (Carlson et al. 1997; Wohlers et al. 1998; Hefferlin, Davis, and Ileto 2003). [Pg.226]

Niculescu et al. have reported the application of a related probabilistic neural net to bioactivity prediction (136). These authors investigated the effect of the data preprocessing strategy and kernel choice on the quality of the derived models. Ajay et al. also employed Bayesian methods to design a CNS-active library (97). A neural network trained using Bayesian methods on CNS-active and CNS-inactive data correctly classified actives and inactives with up to 92% and 71% accuracy, respectively. They used the method to generate a small library of potentially CNS-active molecules amenable to combinatorial synthesis. [Pg.350]

The chapter presents a brief overview of the current research on V205/Ti02 catalysts for o-xylene oxidation to phthalic anhydride at Clariant. Phthalic anhydride is produced in tubular, salt-cooled reactors with a capacity of about 5 million tons per annum. A rather broad variety of process conditions is realized in industry in terms of feed composition, air flow rate, and reactor dimensions, all of which the phthalic anhydride catalyst portfolio has to match. Catalyst active-mass compositions have been optimized at Clariant for these different industrial processes utilizing artificial neural networks trained on high-throughput data. Fundamental pilot-reactor research unravelling new details of the reaction network of o-xylene oxidation led to an improved kinetic reactor model, which allowed further optimization of the state-of-the-art multi-layer catalyst system for maximum phthalic anhydride yields. [Pg.302]

In the traditional fault diagnosis of circuits based on a BP neural network, voltage data at key points that are clearly affected by faults are usually taken as the input information of the network. Generally, the output of the net is the codes of the failure modes. After the data samples are obtained from simulation or fault-injection experiments, training and testing can be carried out (Ma et al. 2013). [Pg.858]
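The data layout this describes can be sketched as follows. The failure modes, test-point voltages, and one-hot output coding here are hypothetical placeholders; the cited work defines its own fault set and codes.

```python
# Hypothetical failure modes for a small analogue circuit.
failure_modes = ["no_fault", "R1_short", "C2_open"]

def encode_mode(mode):
    """One-hot code of a failure mode, used as the BP network's target output."""
    code = [0] * len(failure_modes)
    code[failure_modes.index(mode)] = 1
    return code

# One training sample: voltages at key test points -> failure-mode code.
sample_voltages = [4.98, 0.02, 2.51]   # network inputs (illustrative values)
target = encode_mode("R1_short")       # network target
print(target)  # [0, 1, 0]
```

Samples of this shape, collected from simulation or fault-injection runs, form the training and test sets for the classifier.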

Artificial neural networks train and generalize best if they are presented with good examples of the classes they are trying to model, especially when trained on many examples showing variations representative of the classes the net is attempting to discriminate. Herbarium specimens can provide much data of this kind and are also a primary source of information for taxonomists. The use of neural networks as tools for herbarium systematics is, therefore, to be... [Pg.220]

Neural network classifiers. Neural network and other statistical classifiers impose strong requirements on the data and the inspection; when these are fulfilled, however, good fully automatic classification systems can be developed within a short period of time. This is, for example, the case if the inspection is part of a manufacturing process, where the inspected pieces and the possible defect mechanisms are well known and the whole NDT inspection is done under repeatable conditions. In such cases it is possible to collect (or manufacture) a set of defect pieces, which can be used to obtain a training set. There are some commercially available tools (like ICEPAK [Chan, et al., 1988]) which can construct classifiers without any a priori information, based only on the training sets of data. One must, however, always bear in mind the limitations of this technique, otherwise serious misclassifications may go unnoticed. [Pg.100]

Now, one may ask, what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]

The benefits of this approach are clear. Any neural network applied as a mapping device between independent variables and responses requires more computational time and resources than PCR or PLS. Therefore, an increase in the dimensionality of the input (characteristic) vector results in a significant increase in computation time. As our observations have shown, the same is not the case with PLS. Therefore, SVD as a data transformation technique enables one to apply as many molecular descriptors as are at one's disposal, but finally to use latent variables as an input vector of much lower dimensionality for training neural networks. Again, SVD concentrates most of the relevant information (very often about 95 %) in a few initial columns of the scores matrix. [Pg.217]
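The claim that a few leading SVD components retain most of the variance can be illustrated with a small helper. The singular values below are hypothetical; in practice they come from an SVD of the descriptor matrix, and the returned count tells how many score columns to keep as the network's input vector.

```python
def components_for_variance(singular_values, target=0.95):
    """Number of leading SVD components needed to retain `target`
    fraction of the total variance (squared singular values)."""
    total = sum(s * s for s in singular_values)
    cum = 0.0
    for k, s in enumerate(singular_values, start=1):
        cum += s * s
        if cum / total >= target:
            return k
    return len(singular_values)

# Hypothetical singular values: variance concentrates in the first columns.
svals = [9.0, 4.0, 1.0, 0.5, 0.2, 0.1]
print(components_for_variance(svals))  # 2
```

Here two of six components already carry over 95 % of the variance, so the network's input dimensionality drops from six to two with little loss of information.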

Neural networks model the functionality of the brain. They learn from examples, whereby the weights of the neurons are adapted on the basis of training data. [Pg.481]

Model building consists of three steps: training, evaluation, and testing. In the ideal case the whole data set is divided into three portions: the training set, the evaluation set, and the test set. A wide variety of statistical or neural network... [Pg.490]

Breindl et al. published a model based on semi-empirical quantum mechanical descriptors and back-propagation neural networks [14]. The training data set consisted of 1085 compounds, and 36 descriptors were derived from AM1 and PM3 calculations describing electronic and spatial effects. The best results, with a standard deviation of 0.41, were obtained with the AM1-based descriptors and a net architecture of 16-25-1, corresponding to 451 adjustable parameters and a ratio of 2.17 to the number of input data. For a test data set a standard deviation of 0.53 was reported, which is quite close to the training model. [Pg.494]
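The 451 adjustable parameters quoted for the 16-25-1 architecture can be reproduced by counting weights and biases of a fully connected feed-forward net:

```python
def mlp_parameter_count(layers):
    """Adjustable parameters of a fully connected feed-forward network:
    for each layer transition, n_in * n_out weights plus one bias per
    output neuron."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

# 16*25 + 25 (hidden layer) + 25*1 + 1 (output layer) = 451
print(mlp_parameter_count([16, 25, 1]))  # 451
```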

Neural networks can learn automatically from a data set of examples. In the case of NMR chemical shifts, neural networks have been trained to predict the chemical shift of protons on submission of a chemical structure. Two main issues play decisive roles: how a proton is represented, and which examples are in the data set. [Pg.523]

This reaction data set of 626 reactions was used as a training data set to produce a knowledge base. Before this data set is used as input to a Kohonen neural network, each reaction must be coded in the form of a vector characterizing the reaction event. Six physicochemical effects were calculated for each of five bonds at the reaction center of the starting materials by the PETRA (see Section 7.1.4) program system. As shown by an example in Figure 10.3-3, the physicochemical effects of the two regioisomeric products are different. [Pg.546]
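The coding step this describes — six effects for each of five reaction-centre bonds, flattened into one input vector — can be sketched as follows. The numeric values are placeholders; the real effects come from PETRA.

```python
# Hypothetical physicochemical effects: six per bond, five bonds at the
# reaction centre, flattened into one 30-dimensional vector for the
# Kohonen network.
n_bonds, n_effects = 5, 6
effects = [[0.1 * (b + 1) + 0.01 * e for e in range(n_effects)]
           for b in range(n_bonds)]           # effects[bond][effect]

vector = [v for bond in effects for v in bond]  # one row per reaction
print(len(vector))  # 30
```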

The inputs correspond to the values of the various parameters, and the network is trained to reproduce the experimentally determined activities. Once trained, the activity of an unknown compound can be predicted by presenting the network with the relevant parameter values. Some encouraging results have been reported using neural networks, which have also been applied to a wide range of problems such as predicting the secondary structure of proteins and interpreting NMR spectra. One of their main advantages is an ability to incorporate non-linearity into the model. However, they do present some problems [Hack et al. 1994]: for example, if there are too few data values then the network may simply memorise the data and have no predictive capability. Moreover, it is difficult to assess the importance of the individual terms, and the networks can require a considerable time to train. [Pg.720]

Neural networks have the following advantages: (1) once trained, their response to input data is extremely fast; (2) they are tolerant of noisy and incomplete input data; (3) they do not require knowledge engineering and can be built directly from example data; (4) they do not require either domain models or models of problem solving; and (5) they can store large amounts of information implicitly. [Pg.540]

Provided input/output data are available, a neural network may be used to model the dynamics of an unknown plant. There is no constraint as to whether the plant is linear or nonlinear, provided that the training data cover the whole envelope of plant operation. [Pg.358]

Fig. 10.26 Training and trained data for a neural network model of a Ship's Hull.

© 2024 chempedia.info