Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Training network

The Kohonen Self-Organizing Maps can be used in a similar manner. Suppose X_k, k = 1, ..., N, is the set of input (characteristic) vectors and W_ij, i = 1, ..., I, j = 1, ..., J, is the set of weight vectors of the trained network, one for each (i, j) cell of the map, where N is the number of objects in the training set and I and J are the dimensionalities of the map. Now we can compare each X_k with the W_ij of the particular cell to which the object was allocated. This procedure enables us to detect the maximal (e_max) and minimal (e_min) errors of fitting. Hence, if the error calculated in the way just mentioned lies outside the range between e_min and e_max, the object probably does not belong to the training population. [Pg.223]
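The fit-error check above can be sketched as follows. This is a minimal illustration, not the source's implementation; the map size, feature count, and all data are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained SOM: an I x J grid of weight vectors (3 features each).
I, J, n_feat = 5, 5, 3
W = rng.normal(size=(I, J, n_feat))

def fit_error(x, W):
    """Distance between x and the weight vector of its winning (i, j) cell."""
    d = np.linalg.norm(W - x, axis=2)          # distance to every cell
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return np.linalg.norm(x - W[i, j])

# Fit errors of the training objects define the acceptance range [e_min, e_max].
X_train = rng.normal(size=(100, n_feat))
errors = np.array([fit_error(x, W) for x in X_train])
e_min, e_max = errors.min(), errors.max()

def belongs_to_population(x):
    """An object whose fit error falls outside [e_min, e_max] probably
    does not belong to the training population."""
    return e_min <= fit_error(x, W) <= e_max

print(belongs_to_population(np.full(n_feat, 100.0)))  # far outlier -> False
```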

The trained counterpropagation network is then able to predict the spectrum for a new structure when operating as a look-up table (see Figure 10.2-9): the encoded query, i.e. the input structure, is fed into the trained network and the winning neuron is determined by considering just the upper part of the network. The neuron points to the corresponding neuron in the lower part of the network, which then provides the simulated IR spectrum. [Pg.532]
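The look-up-table operation can be sketched in a few lines. This is a schematic illustration only: the neuron count, structure-code length, and spectrum resolution are invented placeholders, and the weights stand in for a network already trained on structure/spectrum pairs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained counterpropagation network with 16 neurons:
# the upper (Kohonen) part holds structure-code weights, the lower
# part the associated IR spectra (here 32-point vectors).
n_neurons, n_struct, n_spec = 16, 8, 32
upper = rng.normal(size=(n_neurons, n_struct))   # input layer weights
lower = rng.normal(size=(n_neurons, n_spec))     # stored spectra

def predict_spectrum(structure_code):
    """Find the winning neuron using only the upper part, then return
    the spectrum stored in the corresponding lower-part neuron."""
    winner = np.argmin(np.linalg.norm(upper - structure_code, axis=1))
    return lower[winner]

spectrum = predict_spectrum(rng.normal(size=n_struct))
print(spectrum.shape)  # (32,)
```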

Figure 10.3-4. Trained networks of 626 pyrazole building reactions. a) The trained network after classifying all 626 reactions: light gray, neurons that have received known reactions; black, neurons that have received reactions considered to have a low yield; dark gray, conflict neurons obtaining both types of reactions; white, empty neurons that have not received a reaction at all. b) Each conflict neuron is assigned to the most populated class within it, and each empty neuron to the most populated class in its neighborhood.
For both reactions leading to the two different regioisomers the corresponding vector was calculated and provided as a test case to the trained network of 626... [Pg.547]

Finally, any training is incomplete without proper validation of the trained model. Therefore, the trained network should be tested with data that it has not seen during the training. This procedure was followed in this study by first training the network on one data set, and then testing it on a second different data set. [Pg.8]

Divide the available data into training and test data sets (with one third of the data held out for testing). Test sets are used to validate the trained network and ensure accurate generalization. [Pg.8]
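A one-third hold-out split can be done as follows; the dataset here is a synthetic placeholder, not any of the data sets discussed in the source.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dataset: 90 samples with 4 descriptors each.
X = rng.normal(size=(90, 4))
y = rng.normal(size=90)

# Shuffle the sample indices, then hold out one third for testing.
idx = rng.permutation(len(X))
n_test = len(X) // 3
test_idx, train_idx = idx[:n_test], idx[n_test:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

print(len(X_train), len(X_test))  # 60 30
```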

The experimental specific volume data were available in the temperature range of 273 K to 353 K, in 20 K increments. The nine types of siloxanes were arbitrarily divided into two groups, one each for training and testing. The compounds 1, 2, 4, 6, and 8 were utilized in the training phase. The trained network was then... [Pg.11]

For all the siloxanes the network was trained at two temperature levels, 25 °C and 75 °C. The trained network was then tested for its viscosity predictions at 50 °C. The network training and testing results are shown in Fig. 8. The rms error for this prediction was 0.002. [Pg.12]

A list of the systems investigated in this work is presented in Tables 8-10. These systems represent 4 nonpolar binaries, 8 nonpolar/polar binaries, and 9 polar binaries. These binary systems were recognized by Heil and Prausnitz [55] as systems that had been well studied over a wide range of concentrations. With well-documented behavior, they represent a severe test for any proposed model. The experimental data used in this work have been obtained from the work of Alessandro [53]. The experimental data were arbitrarily divided into two data sets: one for use in training the proposed neural network model and the remainder for validating the trained network. [Pg.20]

As the network learns, connection weights are adjusted so that the network can model general rules that underlie the data. If there are some general rules that apply to a large proportion of the patterns in the dataset, the network will repeatedly see examples of these rules and they will be the first to be learned. Subsequently, it will turn its attention to more specialized rules of which there are fewer examples in the dataset. Once it has learned these rules as well, if training is allowed to continue, the network may start to learn specific samples within the data. This is undesirable for two reasons. Firstly, since these particular patterns may never be seen when the trained network is put to use, any time spent learning them is wasted. Secondly,... [Pg.38]

This method for preventing overfitting requires that there are enough samples so that both training and test sets are representative of the dataset. In fact, it is desirable to have a third set known as a validation set, which acts as a secondary test of the quality of the network. The reason is that, although the test set is not used to train the network, it is nevertheless used to determine at what point training is stopped, so to this extent the form of the trained network is not completely independent of the test set. [Pg.39]
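The stopping rule described above can be sketched as a generic early-stopping loop. The function names, patience value, and the toy error curve are all invented for illustration; only the principle (stop when the test-set error no longer improves, then judge the model on an untouched validation set) comes from the text.

```python
import numpy as np

def train_with_early_stopping(train_step, test_error, max_epochs=500, patience=20):
    """Stop training when the error on the held-out test set has not
    improved for `patience` epochs; a third, untouched validation set
    should then give an independent measure of network quality."""
    best_err, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_step()
        err = test_error()
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break                      # test error stagnant: stop
    return best_err, best_epoch

# Toy demonstration: test error falls for 50 epochs, then rises (overfitting).
errors = iter(np.concatenate([np.linspace(1.0, 0.1, 50),
                              np.linspace(0.1, 0.5, 100)]))
best_err, best_epoch = train_with_early_stopping(lambda: None, lambda: next(errors))
print(best_epoch)  # 49
```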

Additionally, neural networks are an interesting approach for system optimization; still, one has to take into account that (1) the training phase requires a certain amount of time and experience (both over- and under-trained networks will tend to give false readouts) and (2) generally the data... [Pg.379]

Research Training Networks (RTN) on Hydrogen Storage under the Marie Curie Programme ... [Pg.10]

The authors thank Osaka Gas Co., Ltd. for supplying the two Donacarbo samples and the European Union (Marie Curie Research Training Network - HyTRAIN - Project referenced 12443), MEC (Accion complementaria ENE2005-23824-E/CON), the Generalitat Valenciana (Accion complementaria ACOMP06/089) and MEC-CTQ2006-08958/PPQ for financial help. [Pg.75]

We acknowledge the support from the European Research and Training Network Understanding Nanomaterials from a Quantum Perspective (NANOQUANT), contract No. MRTN-CT-2003-506842, and from the Carlsbergfondet. This work was also partly supported by the Danish Natural Research Council (Grant No. 21-02-0467). We also acknowledge the support from the Danish Center for Scientific Computing (DCSC). [Pg.23]

Detection dogs, manual and mechanical demining, robotics, custom-designed machinery, many photos, training, networking, demining projects all over Africa. [Pg.314]

Jouyban et al. (2004) applied ANN to calculate the solubility of drugs in water-cosolvent mixtures, using 35 experimental datasets. The networks employed were feed-forward back-propagation networks with one hidden layer. The topology of the neural network was optimized to a 6-5-1 architecture. All data points in each set were used to train the ANN and the solubilities were back-calculated employing the trained networks. The difference between calculated solubilities and experimental... [Pg.55]
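A 6-5-1 feed-forward network trained by back-propagation can be sketched with plain numpy. This only mirrors the reported architecture; the training data, learning rate, and target function below are synthetic placeholders, not the published solubility sets.

```python
import numpy as np

rng = np.random.default_rng(7)

# 6-5-1 architecture: 6 inputs, 5 hidden sigmoid units, 1 output.
n_in, n_hid, n_out = 6, 5, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))

X = rng.normal(size=(40, n_in))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy target

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    return sigmoid(sigmoid(X @ W1) @ W2)

mse_before = float(np.mean((forward(X) - y) ** 2))

lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1)            # hidden-layer activations
    out = sigmoid(h @ W2)          # network output
    # Back-propagate the output error through both weight matrices.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_hid / len(X)

mse_after = float(np.mean((forward(X) - y) ** 2))
print(mse_after < mse_before)  # True: training reduces the fit error
```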

Dynamic sets of process-model mismatch data are generated for a wide range of the optimisation variables (z). These data are then used to train the neural network. The trained network predicts the process-model mismatches for any set of values of z at discrete-time intervals. During the solution of the dynamic optimisation problem, the model has to be integrated many times, each time using a different set of z. The estimated process-model mismatch profiles at discrete-time intervals are then added to the simple dynamic model during the optimisation process. To achieve this, the discrete process-model mismatches are converted to continuous functions of time using a linear interpolation technique so that they can easily be added to the model (to make the hybrid model) within the optimisation routine. One of the important features of the framework is that it allows the use of discrete process data in a continuous model to predict discrete and/or continuous mismatch profiles. [Pg.371]
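The conversion of discrete mismatches to a continuous function of time can be illustrated with linear interpolation; the sampling instants and mismatch values below are hypothetical, standing in for the network's discrete-time predictions.

```python
import numpy as np

# Discrete process-model mismatch estimates at sampling instants
# (hypothetical values for illustration).
t_disc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # time, h
eps_disc = np.array([0.00, 0.02, -0.01, 0.03, 0.01])

def mismatch(t):
    """Continuous-time mismatch obtained by linear interpolation,
    ready to be added to the simple model inside the optimiser."""
    return np.interp(t, t_disc, eps_disc)

print(mismatch(0.5))   # midpoint between 0.00 and 0.02 -> 0.01
print(mismatch(2.5))   # midpoint between -0.01 and 0.03 -> 0.01
```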

I would like to thank Bogumil Jeziorski for reading and commenting on the manuscript, Edyta Malolepsza and Konrad Piszczatowski for their invaluable help at all stages of this work, and Marek Orzechowski for useful discussions on the empirical force fields and computer simulations techniques. I am also indebted to Drs. Robert Bukowski, A. Robert W. McKellar, and Konrad Patkowski for providing me with the figures, and with their results prior to publication. This work was supported by the European Research Training Network Molecular Universe (contract no. MRTN-CT-2004-512302). [Pg.130]


See other pages where Training network is mentioned: [Pg.427]    [Pg.494]    [Pg.547]    [Pg.547]    [Pg.548]    [Pg.5]    [Pg.17]    [Pg.31]    [Pg.116]    [Pg.377]    [Pg.115]    [Pg.40]    [Pg.43]    [Pg.374]    [Pg.10]    [Pg.16]    [Pg.138]    [Pg.988]    [Pg.276]    [Pg.988]    [Pg.260]    [Pg.160]    [Pg.295]    [Pg.153]    [Pg.177]    [Pg.60]    [Pg.295]    [Pg.18]    [Pg.60]    [Pg.94]    [Pg.95]    [Pg.111]   
See also in sourсe #XX -- [ Pg.458 ]







Artificial neural networks based models training

Artificial neural networks training

Identification neural network training

Investigation of Trained Network

Network training data

Neural network algorithm training process

Neural network training data

Neural network training processes

Radial basis function network training

Software for Training Neural Networks

The Training of Artificial Neural Networks

Training Kohonen neural networks

Training a Layered Network Backpropagation

Training a network

Training a neural network

Training an Artificial Neural Network

Training counterpropagation neural network

Training neural network

Training the Network

Training, of neural networks

© 2024 chempedia.info