
Training a neural network

Each data set provides its own neural network. The first time a neural network is created, the actual data set is used as both training set and test set; a separate test set can be loaded after training. ARC provides a special feature to automatically divide ... [Pg.155]

If single-component properties are available, a maximum of 32 classes can be defined either automatically or manually. Classification can be performed directly with a training set via the context menu. The auto classify command enables classes to be calculated according to exponential, decadic, logarithmic, decadic logarithmic, and linearly distributed properties. The range of properties is divided into the number of classes defined in the window. This enables the user to obtain the optimum distribution with the optimum number of classes. [Pg.156]
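As an illustration of how such class boundaries might be generated (a hypothetical sketch, not the program's own code), the property range can be divided into a chosen number of classes with linearly or logarithmically spaced boundaries:

```python
import numpy as np

def class_boundaries(values, n_classes, mode="linear"):
    """Divide the property range into n_classes intervals.

    mode: 'linear' gives equally wide intervals,
          'log' spaces the interval edges logarithmically
          (property values must be positive for 'log').
    """
    lo, hi = float(np.min(values)), float(np.max(values))
    if mode == "linear":
        return np.linspace(lo, hi, n_classes + 1)
    if mode == "log":
        return np.logspace(np.log10(lo), np.log10(hi), n_classes + 1)
    raise ValueError(f"unknown mode: {mode}")

def auto_classify(values, n_classes=8, mode="linear"):
    """Assign each property value to a class index 0..n_classes-1."""
    edges = class_boundaries(values, n_classes, mode)
    # interior edges only; np.digitize then yields indices 0..n_classes-1
    return np.digitize(values, edges[1:-1])

# Example: boiling points of a small data set, split into 4 classes
props = np.array([36.1, 68.7, 98.4, 125.7, 150.8, 174.1, 216.3])
print(auto_classify(props, n_classes=4, mode="linear"))
```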

Once class or property values are available in the training set, the training parameters can be selected: net dimension and number of epochs, the learn radius and learn rate, and the initialization parameters. Neurons can be arranged in a rectangular or quadratic network, as well as in a toroidal mode; that is, the left and right sides as well as the upper and lower sides of the topological map are connected to form a closed toroidal plane. [Pg.156]

Once the parameters have been selected, training can be started by clicking the train (or train reverse) button. The performance of the training can be visualized by checking the show performance option; in this case a chart appears that shows the decrease of the overall Euclidean distance after each epoch. Training may be interrupted after any epoch, showing the current training results. [Pg.156]
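A minimal sketch of such a training run, under the assumption that the map is a Kohonen-type self-organizing network (hypothetical code; the function name train_som and all parameter defaults are illustrative): the learn rate and learn radius decay over the epochs, the grid distance wraps around when the toroidal mode is chosen, and the overall Euclidean distance is reported after every epoch.

```python
import numpy as np

def train_som(data, rows=6, cols=6, epochs=20, learn_rate=0.5,
              learn_radius=3.0, toroidal=True, seed=0):
    """Train a small Kohonen map; returns the weight grid (rows x cols x n_features)."""
    rng = np.random.default_rng(seed)
    n, n_feat = data.shape
    weights = rng.normal(size=(rows, cols, n_feat))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for epoch in range(epochs):
        # Decay the learn rate and neighbourhood radius over the epochs
        frac = 1.0 - epoch / epochs
        rate, radius = learn_rate * frac, max(learn_radius * frac, 0.5)
        total_dist = 0.0
        for x in data[rng.permutation(n)]:
            # Best-matching unit (winner neuron)
            d = np.linalg.norm(weights - x, axis=-1)
            winner = np.unravel_index(np.argmin(d), d.shape)
            total_dist += d[winner]
            # Grid distance to the winner; wrap around if the map is toroidal
            diff = np.abs(grid - np.array(winner))
            if toroidal:
                diff = np.minimum(diff, np.array([rows, cols]) - diff)
            grid_dist = np.sqrt((diff ** 2).sum(axis=-1))
            # Gaussian neighbourhood update of all neurons toward the sample
            h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += rate * h[..., None] * (x - weights)
        print(f"epoch {epoch + 1}: overall Euclidean distance = {total_dist:.3f}")
    return weights

# Example: 50 samples with 4 descriptors each
som = train_som(np.random.default_rng(1).normal(size=(50, 4)))
```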


All closed-loop control systems operate by measuring the error between desired inputs and actual outputs. This does not, in itself, generate control action errors that may be back-propagated to train a neural network controller. If, however, a neural network of the plant exists, back-propagation through this network of the system error (r(kT) − y(kT)) will provide the necessary control action errors to train the neural network controller, as shown in Figure 10.29. [Pg.361]
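The scheme can be sketched with a differentiable plant model (a hypothetical illustration using PyTorch, not the exact arrangement of Figure 10.29): the plant network is frozen, the system error r(kT) − y(kT) is formed from the plant model's prediction, and back-propagating it through the frozen plant yields the control-action gradients that update the controller.

```python
import torch
import torch.nn as nn

# Hypothetical networks: a previously identified plant model and a controller
plant_model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
controller  = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

# The plant model is assumed to be already trained; freeze its weights
for p in plant_model.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)

def train_step(r, y):
    """One training step: r is the set point r(kT), y the current output y(kT)."""
    u = controller(torch.cat([r, y], dim=-1))        # control action u(kT)
    y_next = plant_model(torch.cat([u, y], dim=-1))  # predicted plant response
    loss = ((r - y_next) ** 2).mean()                # squared system error
    optimizer.zero_grad()
    loss.backward()   # error back-propagated through the frozen plant model
    optimizer.step()
    return loss.item()

# Example with random set points and measured outputs
r = torch.randn(32, 1)
y = torch.randn(32, 1)
print(train_step(r, y))
```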

Huang and Tang49 trained a neural network with data relating to several qualities of polymer yarn and ten process parameters. They then combined this ANN with a genetic algorithm to find parameter values that optimize quality. Because the relationships between processing conditions and polymer properties are poorly understood, this combination of AI techniques is a potentially productive way to proceed. [Pg.378]
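The combination can be sketched as follows (hypothetical code; predict_quality is a stand-in for the trained ANN, which in Huang and Tang's work maps ten process parameters to yarn quality): a simple genetic algorithm evolves candidate parameter vectors toward the settings the surrogate rates best.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_quality(params):
    """Stand-in for the trained ANN: maps ten process parameters to a quality score.
    (Hypothetical surrogate -- in practice this would be the trained network.)"""
    return -np.sum((params - 0.6) ** 2, axis=-1)

def genetic_search(n_params=10, pop_size=40, generations=100,
                   mutation_scale=0.05, bounds=(0.0, 1.0)):
    pop = rng.uniform(*bounds, size=(pop_size, n_params))
    for _ in range(generations):
        fitness = predict_quality(pop)
        # Selection: keep the better half of the population
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        # Crossover: average random pairs of parents
        idx = rng.integers(len(parents), size=(pop_size, 2))
        children = parents[idx].mean(axis=1)
        # Mutation: small Gaussian perturbation, clipped to the bounds
        pop = np.clip(children + rng.normal(scale=mutation_scale,
                                            size=children.shape), *bounds)
    return pop[np.argmax(predict_quality(pop))]

print(genetic_search())   # parameter vector the surrogate rates best
```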

Another method was proposed by Diederichs et al. (1998). This method is very simple in the sense that it trains a neural network using amino acid sequences as inputs and the z coordinates of the Cα atoms, in a coordinate frame with the outer membrane in the xy plane, as outputs. [Pg.297]

The ¹³C chemical shifts of 29 alkyl (Me, Et) substituted oxanes (83OMR94) were used to train a neural network to simulate the ¹³C NMR spectra. The neural network, thus trained, was employed to simulate the ¹³C NMR spectra of 2-Et-, trans-3,5-di-Me-, and 2,2,6-tri-Me-oxanes, respectively, compounds that exist >95% in one preferred chair conformation. In one case, the deviation for one methyl substituent proved to be considerable and was related to other conformers participating in the conformational equilibrium (94ACA221). [Pg.229]

Lee, S. C., and D. V. Heinbuch, Training a Neural Network Based Intrusion Detector to Recognize Novel Attacks, Information Assurance and Security, pp. 40-46, 2000. [Pg.381]

Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost criterion. There are numerous algorithms available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation. Recent developments in this field use particle swarm optimization and other swarm intelligence techniques. [Pg.917]
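As a sketch of the swarm-based alternative mentioned above (hypothetical code, not from the cited source), particle swarm optimization can be used to minimize a cost criterion, here the mean squared error of a tiny 1-2-1 network on a toy regression problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a tiny 1-2-1 network (7 weights in total)
X = np.linspace(-1, 1, 40)[:, None]
y = np.sin(3 * X)

def cost(w):
    """Mean squared error of the 1-2-1 network encoded by the flat weight vector w."""
    W1, b1 = w[0:2].reshape(1, 2), w[2:4]
    W2, b2 = w[4:6].reshape(2, 1), w[6]
    hidden = np.tanh(X @ W1 + b1)
    return float(np.mean((hidden @ W2 + b2 - y) ** 2))

def pso(n_dim=7, n_particles=30, iters=200, w_inertia=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-2, 2, size=(n_particles, n_dim))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        # Velocity update: inertia + pull toward personal and global bests
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved] = pos[improved]
        pbest_cost[improved] = costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, cost(gbest)

weights, final_cost = pso()
print(f"final cost: {final_cost:.4f}")
```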

A neural network usually consists of three layers, representing the inputs, the output, and a hidden layer that makes the connections between them (Figure 3.9). By training a neural network with known data, it is possible to obtain outputs that can accurately predict such things as polymer concentration mix. [Pg.57]
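A minimal sketch of such a three-layer network (hypothetical code, with made-up data standing in for the "known data"): an input layer, a hidden layer making the connections, and one output, trained by back-propagation of the squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known training data: 3 input variables -> 1 output (e.g., a measured property)
X = rng.uniform(size=(100, 3))
y = (0.4 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2])[:, None]

# Three layers: inputs (3), hidden (8), output (1)
W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backward pass (back-propagation of the squared-error gradient)
    d_pred = 2 * err / len(X)
    dW2, db2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)   # derivative of tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    # Gradient-descent update
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("final MSE:", float(np.mean(err ** 2)))
```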

In this paper we demonstrate an application of data-driven software development in a Bayesian framework such that every computed result arises from within a context and so can be associated with a confidence estimate whose validity is underpinned by Bayesian principles. This technique, which induces software modules from data samples (e.g., training a neural network), can be contrasted with more traditional, abstract-specification-driven software development that has tended to compute a result and then add secondary computation to produce an associated confidence measure. [Pg.231]

Step 2: Training a neural network on the created sample set using the back-propagation algorithm ...

The sensor array they made consisted of five different SnO2 gas sensors connected to a computer capable of recording any change in resistance. The tests were performed at 400 °C by passing a mixture that was primarily wine vapor, mixed with a small amount of air, over the sensors. Data from each wine were used to train a neural network (NN), which was then presented with unknown wine samples. The results of the NN analysis are shown in Figure 3, with complete differentiation of the two...

To train a neural network, first a training set and a test set of sample data from the process have to be generated. The training set is built up of pairs of input and output data, called patterns. These patterns are not necessarily unique: an output is allowed to have more than one different input; however, an input can only have one distinctive output.
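That bookkeeping might look like the following sketch (hypothetical code): pattern pairs are rejected if one input maps to two different outputs, duplicates are otherwise allowed, and the remaining pairs are split into a training set and a test set.

```python
import random

def split_patterns(patterns, test_fraction=0.25, seed=0):
    """patterns: list of (input, output) pairs, each given as tuples.

    Raises if one input is mapped to two different outputs, then splits
    the pairs into a training set and a test set.
    """
    seen = {}
    for x, y in patterns:
        if x in seen and seen[x] != y:
            raise ValueError(f"input {x} has two outputs: {seen[x]} and {y}")
        seen[x] = y

    shuffled = patterns[:]
    random.Random(seed).shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]   # training set, test set

# Example: duplicate inputs are allowed as long as the output is the same
pairs = [((0.1, 0.2), (1.0,)), ((0.3, 0.4), (0.0,)),
         ((0.1, 0.2), (1.0,)), ((0.5, 0.6), (1.0,))]
train_set, test_set = split_patterns(pairs)
print(len(train_set), "training patterns,", len(test_set), "test patterns")
```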

Zheng and Du (2009) combined GAs with a neural network in order to reduce the chemical residues on kenaf fibers, thus decreasing environmental pollution. In a first step, they trained a neural network to find feasible and acceptable machine settings. Then, they defined an objective function given by ...

