
Feed forward back propagation

Artificial neural networks (ANNs) represent, as opposed to PLS and MLR, a nonlinear statistical analysis technique [86]. The most commonly used NN is of the feed-forward back-propagation type (Figure 14.2). As is the case with both PLS and MLR, there are a few aspects of NN to be considered when using this type of analysis technique ... [Pg.390]

Keywords: particulate matter; neural networks; feed forward back propagation; radial basis functions; estimation. Abstract... [Pg.421]

Artificial neural models for predicting fractal dimension have been developed using a multi-layer feed-forward back-propagation algorithm. To construct the neural network,... [Pg.201]

Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) served as the set of molecular descriptors. The task was to model the corticosteroid-binding globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]
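A minimal sketch of this kind of workflow is given below. It is not the code from Ref. [2]: the descriptor matrix and affinity values are random placeholders for the 31-steroid data, and scikit-learn's MLPRegressor (one hidden layer, stochastic gradient descent) stands in for the feed-forward network trained with the back-propagation rule.

```python
# Hedged sketch: a feed-forward network trained by gradient back-propagation,
# standing in for the steroid/CBG example. The descriptor matrix X and the
# affinity vector y are random placeholders, NOT the actual 31-steroid data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(31, 12))   # 31 "compounds" x 12 autocorrelation-like descriptors
y = rng.normal(size=31)         # placeholder CBG affinities

X_scaled = StandardScaler().fit_transform(X)   # scale descriptors before training

net = MLPRegressor(hidden_layer_sizes=(5,),    # one hidden layer; size is an assumption
                   activation='tanh',
                   solver='sgd',                # plain gradient-descent back-propagation
                   learning_rate_init=0.01,
                   max_iter=5000,
                   random_state=0)
net.fit(X_scaled, y)
print("training R^2:", net.score(X_scaled, y))
```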

Now, one may ask, what if we are going to use Feed-Forward Neural Networks with the Back-Propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
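As an illustration of SVD used as a data transformation step (a generic sketch: the data matrix and the number of retained components are made up, and no particular network is implied):

```python
# Hedged sketch: SVD applied to the data matrix before neural network training.
# The data matrix is a random placeholder; keeping k = 3 components is an
# arbitrary choice for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))            # 40 samples x 10 original variables

Xc = X - X.mean(axis=0)                  # column-centre the data (PCA-style)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                                    # number of components to keep
scores = U[:, :k] * s[:k]                # sample scores on the first k components
# 'scores' (40 x 3) would then replace X as the input block for the network.
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by {k} components: {explained:.2%}")
```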

P is a vector of inputs and T a vector of target (desired) values. The command newff creates the feed-forward network and defines the activation functions and the training method. The default is Levenberg-Marquardt back-propagation training, since it is fast, although it does require a lot of memory. The train command trains the network, and in this case the network is trained for 50 epochs. The results before and after training are plotted. [Pg.423]
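Outside MATLAB, a comparable setup can be sketched as follows. This is only an illustration: the network size, data and activations are assumptions, and scipy's general-purpose Levenberg-Marquardt least-squares routine is used here as a stand-in for MATLAB's trainlm.

```python
# Hedged sketch: fitting a small feed-forward network by Levenberg-Marquardt,
# analogous in spirit to newff/train with trainlm. Data, layer sizes and
# activations are illustrative placeholders.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, size=(50, 1))                  # inputs
T = np.sin(3 * P) + 0.05 * rng.normal(size=P.shape)   # targets

n_in, n_hid, n_out = 1, 5, 1

def unpack(theta):
    i = 0
    W1 = theta[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_out * n_hid].reshape(n_out, n_hid); i += n_out * n_hid
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    H = np.tanh(X @ W1.T + b1)        # hidden layer, tanh activation
    return H @ W2.T + b2              # linear output layer

def residuals(theta):
    return (forward(theta, P) - T).ravel()   # LM minimises the sum of squares

n_par = n_hid * n_in + n_hid + n_out * n_hid + n_out
theta0 = rng.normal(scale=0.5, size=n_par)
fit = least_squares(residuals, theta0, method='lm')   # Levenberg-Marquardt
print("final sum of squared errors:", np.sum(fit.fun ** 2))
```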

Figure 20 Feed-forward neural network training and testing results with back-propagation training for solvent activity predictions in polar binaries (with learning parameter η = 0.1).
Some of the pioneering studies published by several reputed authors in the chemometrics field [55] employed Kohonen neural networks to diagnose calibration problems related to the use of AAS spectral lines. As they focused on classifying potential calibration lines, they used Kohonen neural networks to perform a sort of pattern recognition. In general, Kohonen nets (which were outlined briefly in Section 5.4.1) are best suited to classification tasks, whereas error back-propagation feed-forward networks (BPNs) are preferred for calibration purposes [56]. [Pg.270]

Zhang et al. [78] analysed the metal contents of serum samples by ICP-AES (Fe, Ca, Mg, Cr, Cu, P, Zn and Sr) to diagnose cancer. BAM (bidirectional associative memory) networks were compared with multi-layer feed-forward neural networks (error back-propagation). The BAM method was validated with independent prediction samples using the cross-validation method. The best results were obtained using the BAM networks. [Pg.273]

The processing elements are typically arranged in layers; one of the most commonly used arrangements is known as a back-propagation, feed-forward network, as shown in Figure 7.8. In this network there is a layer of neurons for the input, one unit for each physicochemical descriptor. These neurons do no processing, but simply act as distributors of their inputs (the values of the variables for each compound) to the neurons in the next layer, the hidden layer. The input layer also includes a bias neuron that has a constant output of 1 and serves as a scaling device to ensure... [Pg.175]

The four experiments done previously with Rexp (= 0.5, 1, 3, 4) were used to train the neural network, and the experiment with Rexp = 2 was used to validate the system. Dynamic models of process-model mismatches for three state variables (i.e. x) of the system are considered here. They are the instant distillate composition (xD), the accumulated distillate composition (xa) and the amount of distillate (Ha). The inputs and outputs of the network are as in Figure 12.2. A multilayered feed-forward network, trained with the back-propagation method using a momentum term as well as an adaptive learning rate to speed up the rate of convergence, is used in this work. The error between the actual mismatch (obtained from simulation and experiments) and that predicted by the network is used as the error signal to train the network, as described earlier. [Pg.376]
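As a generic illustration of such a training scheme (not the authors' code; the data, array shapes and adaptation factors below are assumptions, and the "network" is reduced to a single linear layer so the gradient stays simple), a weight update with a momentum term and an adaptive learning rate can be sketched as:

```python
# Hedged sketch: gradient-descent (back-propagation-style) weight update with a
# momentum term and a simple adaptive learning rate. Data and factors are
# placeholders, and the model is a single linear layer for brevity.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(30, 4))                              # placeholder inputs
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=30)

w = np.zeros(4)            # weights to be learned
velocity = np.zeros(4)     # momentum accumulator
lr, momentum = 0.01, 0.9
prev_err = np.inf

for epoch in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)        # gradient of the mean squared error
    velocity = momentum * velocity - lr * grad   # momentum term smooths the update
    w = w + velocity

    err = np.mean((X @ w - y) ** 2)
    # adaptive learning rate: grow it while the error falls, halve it when it rises
    lr = min(lr * 1.05, 0.2) if err < prev_err else lr * 0.5
    prev_err = err

print("learned weights:", np.round(w, 2))
```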

The prediction power of theoretical modeling and an ANN model in IPC were compared. A feed-forward layered back-propagation ANN was used; the input layer consisted of three neurons representing the fraction of organic modifier, the IPR,... [Pg.50]

The first application of a neural network in NMR was proposed by Thomsen and Meyer, who analyzed one-dimensional spectra of simple molecules before application to complex oligosaccharides. Kjær and Poulsen showed that the center of COSY cross-peaks can be found using neural networks. Their implementation consists of a three-layer feed-forward network with 256 inputs, programmed using a back-propagation error algorithm. As shown by Corne et al., NOESY... [Pg.193]

In this study, a feed-forward ANN with one hidden layer composed of four neurons was selected. The ANN was trained using the back-propagation algorithm. The same ex-... [Pg.200]

To explain the back-propagation algorithm, a simple feed-forward neural network of three layers (input, hidden and output) is used. The network input will be denoted by Xj, the output of the hidden neurons by Hi, and that of the output neurons by yi. The weights of the links between the input and the hidden layer are written as wij, where i refers to the number of the input neuron and j to the number of the hidden neuron. The weights of the links between the hidden and the output layer are denoted as wjk, where again j stands for the number of the hidden neuron and k for the number of the output neuron. An example network with this notation is shown in Fig. 27.3. This network has three input neurons, three hidden neurons and two output neurons; in this case the input layer simply passes on the inputs, i.e. Ij = Xj. [Pg.364]
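To make this concrete, the following small sketch (not taken from the cited source; the sigmoid activation, learning rate and training pair are assumptions) performs one forward pass and one back-propagation weight update for the 3-3-2 network described above:

```python
# Hedged sketch of the 3-3-2 network above: inputs Xj, hidden outputs Hi,
# network outputs yi, input->hidden weights wij and hidden->output weights wjk.
# Sigmoid activations, the learning rate and the training pair are assumed.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(3)
w_ih = rng.normal(scale=0.5, size=(3, 3))   # wij: input i -> hidden j
w_ho = rng.normal(scale=0.5, size=(3, 2))   # wjk: hidden j -> output k
x = np.array([0.2, -0.7, 1.0])              # inputs (the input layer just passes these on)
t = np.array([1.0, 0.0])                    # target (desired) output values
eta = 0.5                                   # learning rate (assumed)

# forward pass
H = sigmoid(x @ w_ih)                       # outputs of the hidden neurons
y = sigmoid(H @ w_ho)                       # outputs of the output neurons

# back-propagation of the output error (squared-error loss)
delta_out = (y - t) * y * (1 - y)                # error signal at the output layer
delta_hid = (delta_out @ w_ho.T) * H * (1 - H)   # error propagated back to the hidden layer

# gradient-descent weight updates
w_ho -= eta * np.outer(H, delta_out)
w_ih -= eta * np.outer(x, delta_hid)

print("network output before the update:", np.round(y, 3))
```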


See other pages where Feed forward back propagation is mentioned: [Pg.133]    [Pg.137]    [Pg.309]    [Pg.421]    [Pg.246]    [Pg.1318]    [Pg.104]    [Pg.115]    [Pg.205]    [Pg.180]    [Pg.708]    [Pg.387]    [Pg.259]    [Pg.205]    [Pg.121]    [Pg.176]    [Pg.366]    [Pg.367]    [Pg.73]    [Pg.179]    [Pg.977]    [Pg.209]    [Pg.66]    [Pg.422]    [Pg.1789]    [Pg.339]    [Pg.271]    [Pg.4549]    [Pg.364]    [Pg.209]    [Pg.701]    [Pg.381]    [Pg.342]
See also in source #XX -- [Pg.411, Pg.423]







