
Neural network training processes

Figure 6 Schematic of a typical neural network training process. I, input layer; H, hidden layer; O, output layer; B, bias neuron.
Here, neural network techniques are used to model these process-model mismatches. The neural network is fed with various input data to predict the process-model mismatch (for each state variable) at the present discrete time. The general input-output map for the neural network training is shown in Figure 12.2. The data are fed in a moving-window scheme, in which the window is moved forward by one discrete-time interval at a time until all the data have been fed into the network. The whole batch of data is fed into the network repeatedly until the required error criterion is achieved. [Pg.369]
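A minimal sketch of such a moving-window feeding scheme is given below (in Python); the window length, array names, and the synthetic data are assumptions made for illustration, not taken from the source.

```python
import numpy as np

def moving_window_batches(inputs, mismatch, window=5):
    """Yield (X, y) pairs by sliding a window of past inputs forward
    one discrete-time interval at a time.

    inputs   : array of shape (T, n_inputs)  - measured process inputs
    mismatch : array of shape (T, n_states)  - process-model mismatch targets
    window   : number of past intervals fed to the network (assumed value)
    """
    T = inputs.shape[0]
    for t in range(window, T):
        # flatten the last `window` samples into one network input vector
        X = inputs[t - window:t].ravel()
        y = mismatch[t]          # mismatch at the present discrete time
        yield X, y

# Example: one pass over the whole batch; in practice the data are
# presented repeatedly until the required error criterion is met.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    U = rng.normal(size=(100, 3))      # synthetic process inputs
    E = rng.normal(size=(100, 2))      # synthetic mismatch values
    for X, y in moving_window_batches(U, E, window=5):
        pass  # feed (X, y) to the network trainer here
```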

Lennox, B.; Montague, G.A.; Frith, A.M.; Gent, C.; Bevan, V. Industrial applications of neural networks - an investigation. J. Process Control 2001, 11, 497-507. Murtoniemi, E.; Merkku, P.; Yliruusi, J. Comparison of four different neural network training algorithms in modelling the fluidized bed granulation process. Lab. Microcomput. 1993, 12, 69-76. [Pg.2412]

Neural networks are processing systems that work by feeding in some input variables and getting an output in response to them. The accuracy of the output depends on how well the network has learned the input-output relationship during training. [Pg.145]
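As a hedged illustration of this input-to-output mapping, the sketch below computes the response of a small network with one hidden layer and sigmoid activations; the layer sizes, weights, and activation choice are assumptions, not taken from the source.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_hidden, b_hidden, W_out, b_out):
    """Map an input vector x to the network's output (one hidden layer)."""
    h = sigmoid(W_hidden @ x + b_hidden)   # hidden-layer activations
    return W_out @ h + b_out               # output-layer response

# Illustrative dimensions: 3 inputs, 4 hidden neurons, 1 output
rng = np.random.default_rng(1)
W_h, b_h = rng.normal(size=(4, 3)), rng.normal(size=4)
W_o, b_o = rng.normal(size=(1, 4)), rng.normal(size=1)
print(forward(np.array([0.2, -0.5, 1.0]), W_h, b_h, W_o, b_o))
```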

The chapter presents a brief overview of current research at Clariant on V2O5/TiO2 catalysts for o-xylene oxidation to phthalic anhydride. Phthalic anhydride is produced in tubular, salt-cooled reactors with a total capacity of about 5 million tonnes per annum. Industrial processes differ widely in feed composition, air flow rate, and reactor dimensions, and the phthalic anhydride catalyst portfolio has to match this variety. Catalyst active-mass compositions have been optimized at Clariant for these different industrial processes using artificial neural networks trained on high-throughput data. Fundamental pilot-reactor research, which unravelled new details of the reaction network of o-xylene oxidation, led to an improved kinetic reactor model that allowed further optimization of the state-of-the-art multi-layer catalyst system for maximum phthalic anhydride yields. [Pg.302]

Neural network classifiers. Neural network and other statistical classifiers impose strong requirements on the data and on the inspection; when these are fulfilled, however, good fully automatic classification systems can be developed within a short period of time. This is, for example, the case when the inspection is part of a manufacturing process, where the inspected pieces and the possible defect mechanisms are well known and the whole NDT inspection is done under repeatable conditions. In such cases it is possible to collect (or manufacture) a set of defect pieces, which can be used to obtain a training set. There are some commercially available tools (such as ICEPAK [Chan, et al., 1988]) which can construct classifiers without any a-priori information, based only on the training sets of data. One must, however, always keep the limitations of this technique in mind, otherwise serious misclassifications may go unnoticed. [Pg.100]
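As a rough sketch of how a classifier could be trained from such a set of labelled defect pieces (this is not the ICEPAK tool mentioned above; the features, labels, and scikit-learn model below are assumptions), one might proceed as follows:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Assumed data: feature vectors extracted from NDT signals of inspected
# pieces, labelled with the known defect class (0 = no defect, 1..k = defect).
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))        # synthetic signal features
y = rng.integers(0, 3, size=200)     # synthetic defect labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# A held-out test set gives a first check against the misclassification
# risk mentioned above; it does not replace a proper validation protocol.
print("test accuracy:", clf.score(X_test, y_test))
```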

After descriptor selection and NN training, the best networks were applied to the prediction of 259 chemical shifts from 31 molecules (the prediction set), which were not used for training. The mean absolute error obtained for the whole prediction set was 0.25 ppm, and for 90% of the cases the mean absolute error was 0.19 ppm. Some stereochemical effects could be correctly predicted. In terms of speed, the neural network method is very fast: the whole process of predicting the NMR shifts of 30 protons in a molecule with 56 atoms, starting from an MDL Molfile, took less than 2 s on a common workstation. [Pg.527]
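For orientation, a minimal sketch of how such error figures could be computed from predicted and observed shifts is given below; the array names are assumed, and reading "for 90% of the cases" as the mean absolute error over the 90% smallest errors is an assumption.

```python
import numpy as np

def shift_errors(predicted_ppm, observed_ppm):
    """Mean absolute error over all predictions and over the best 90%
    (assumed interpretation of 'for 90% of the cases')."""
    abs_err = np.abs(np.asarray(predicted_ppm) - np.asarray(observed_ppm))
    mae_all = abs_err.mean()
    best90 = np.sort(abs_err)[: int(0.9 * len(abs_err))]
    return mae_all, best90.mean()
```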

The ability of an ANN to learn is its greatest asset. When, as is usually the case, we cannot determine the connection weights by hand, the neural network can do the job itself. In an iterative process, the network is shown a sample pattern, such as the X, Y coordinates of a point, and uses the pattern to calculate its output; it then compares its own output with the correct output for the sample pattern and, unless its output is perfect, makes small adjustments to the connection weights to improve its performance. The training process is shown in Figure 2.13. [Pg.21]
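A minimal sketch of this show-compare-adjust loop, assuming a single sigmoid neuron trained by the delta rule on (X, Y) coordinate patterns (learning rate, data, and labels are illustrative, not from the source):

```python
import numpy as np

def train(patterns, targets, lr=0.1, epochs=100):
    """Iteratively show each pattern, compare the output with the correct
    answer, and nudge the connection weights to reduce the error."""
    rng = np.random.default_rng(3)
    w = rng.normal(size=patterns.shape[1])   # connection weights
    b = 0.0                                  # bias weight
    for _ in range(epochs):
        for x, t in zip(patterns, targets):
            y = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # network output
            err = t - y                              # compare with target
            # small adjustment of the weights (delta rule)
            w += lr * err * y * (1 - y) * x
            b += lr * err * y * (1 - y)
    return w, b

# Example: points above or below the line y = x, given as (X, Y) coordinates
pts = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.9, 0.1]])
labels = np.array([1, 0, 1, 0])
print(train(pts, labels))
```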

Huang and Tang [49] trained a neural network with data relating to several qualities of polymer yarn and ten process parameters. They then combined this ANN with a genetic algorithm to find parameter values that optimize quality. Because the relationships between processing conditions and polymer properties are poorly understood, this combination of AI techniques is a potentially productive way to proceed. [Pg.378]
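As a hedged sketch of this kind of combination (a genetic algorithm searching over process parameters while a trained network scores each candidate), the example below uses a toy surrogate function in place of the fitted yarn-quality network; the population size, mutation scale, and the surrogate itself are assumptions.

```python
import numpy as np

def surrogate_quality(params):
    """Stand-in for a trained ANN mapping ten process parameters to a
    quality score; in the real application this would be the fitted network."""
    return -np.sum((params - 0.5) ** 2)     # toy optimum at 0.5 everywhere

def genetic_search(n_params=10, pop=40, gens=60, mut=0.05, seed=4):
    rng = np.random.default_rng(seed)
    population = rng.random((pop, n_params))      # parameters scaled to [0, 1]
    for _ in range(gens):
        fitness = np.array([surrogate_quality(p) for p in population])
        # selection: keep the better half of the population
        keep = population[np.argsort(fitness)[-pop // 2:]]
        # crossover: average random pairs of parents
        parents = rng.integers(0, len(keep), size=(pop - len(keep), 2))
        children = (keep[parents[:, 0]] + keep[parents[:, 1]]) / 2
        # mutation: small random perturbations
        children += rng.normal(scale=mut, size=children.shape)
        population = np.vstack([keep, np.clip(children, 0, 1)])
    return max(population, key=surrogate_quality)

print(genetic_search())
```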

