
Feed-back networks

Neural networks can be broadly classified by their network architecture as feed-forward and feed-back networks, as shown in Fig. 3. In brief, if a neuron's output never depends on the outputs of subsequent neurons, the network is said to be feed-forward. Input signals travel only one way, and the outputs depend only on the signals coming in from other neurons. Thus, there are no loops in the system. When dealing with the various types of ANNs, two primary aspects, namely, the architecture and the types of computations to be per-... [Pg.4]

Current feed-forward network architectures work better than current feed-back architectures for a number of reasons. First, the capacity of feed-back networks is unimpressive. Second, in the running mode, feed-forward models are faster, since they need only a single pass through the system to find a solution. In contrast, feed-back networks must cycle repetitively until... [Pg.4]

P is a vector of inputs and T a vector of target (desired) values. The command newff creates the feed-forward network and defines the activation functions and the training method. The default is Levenberg-Marquardt back-propagation training, since it is fast, though it does require a lot of memory. The train command trains the network; in this case, the network is trained for 50 epochs. The results before and after training are plotted. [Pg.423]
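A minimal sketch of these commands, assuming the legacy MATLAB Neural Network Toolbox syntax; the input data, target function and hidden-layer size below are hypothetical:

```matlab
P = 0:0.1:2;                    % vector of inputs (hypothetical)
T = sin(P);                     % vector of target (desired) values
net = newff(P, T, 10);          % feed-forward network, one hidden layer of
                                % 10 neurons; default training is trainlm
                                % (Levenberg-Marquardt back-propagation)
net.trainParam.epochs = 50;     % train for 50 epochs
net = train(net, P, T);         % train the network
Y = sim(net, P);                % simulate the trained network
plot(P, T, 'o', P, Y, '-')      % compare targets with trained outputs
```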

Figure 3 Feed-back and feed-forward artificial neural networks.
Feed-back models can be constructed and trained. In a constructed model, the weight matrix is created by adding the outer product of every input pattern vector with itself or with an associated input. After construction, a partial or inaccurate input pattern can be presented to the network and, after a time, the network converges to one of the original input patterns. Hopfield and BAM are two well-known constructed feed-back models. [Pg.4]
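A minimal sketch of such a constructed model, assuming bipolar (+1/-1) patterns and the outer-product construction described above; the stored patterns and the probe are hypothetical:

```matlab
% Construct the weight matrix by summing the outer product of each
% pattern with itself (the autoassociative, Hopfield-style case).
patterns = [ 1 -1  1 -1  1;
            -1 -1  1  1 -1];             % one stored pattern per row
[npat, n] = size(patterns);
W = zeros(n);
for k = 1:npat
    W = W + patterns(k, :)' * patterns(k, :);   % add the outer product
end
W(logical(eye(n))) = 0;                  % zero the self-connections

% Recall: present a partial/inaccurate pattern and cycle until it settles.
probe = [ 1  1  1 -1  1]';               % corrupted version of pattern 1
for it = 1:10
    probe = sign(W * probe);             % synchronous update
    probe(probe == 0) = 1;               % resolve ties
end                                      % probe converges to a stored pattern
```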

The processing elements are typically arranged in layers; one of the most commonly used arrangements is known as a back-propagation, feed-forward network, as shown in Figure 7.8. In this network there is a layer of neurons for the input, one unit for each physicochemical descriptor. These neurons do no processing but simply act as distributors of their inputs (the values of the variables for each compound) to the neurons in the next layer, the hidden layer. The input layer also includes a bias neuron that has a constant output of 1 and serves as a scaling device to ensure... [Pg.175]
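A minimal sketch of one pass through such a layered arrangement, with the bias neuron represented as an extra input of constant value 1; the descriptor values and the (random) weights are hypothetical:

```matlab
x  = [0.2; 0.7; 0.5];            % physicochemical descriptors (hypothetical)
xb = [x; 1];                     % append the bias neuron (constant output 1)
W1 = randn(4, numel(xb));        % input-to-hidden weights
h  = tanh(W1 * xb);              % hidden-layer activations
W2 = randn(1, numel(h) + 1);     % hidden-to-output weights (+ output bias)
y  = W2 * [h; 1];                % network output
```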

The four experiments done previously with Rexp = 0.5, 1, 3 and 4 were used to train the neural network, and the experiment with Rexp = 2 was used to validate the system. Dynamic models of process-model mismatches for three state variables (i.e. X) of the system are considered here. They are the instant distillate composition (xD), the accumulated distillate composition (xa) and the amount of distillate (Ha). The inputs and outputs of the network are as in Figure 12.2. A multilayered feed-forward network, trained with the back-propagation method using a momentum term as well as an adaptive learning rate to speed up the rate of convergence, is used in this work. The error between the actual mismatch (obtained from simulation and experiments) and that predicted by the network is used as the error signal to train the network, as described earlier. [Pg.376]
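A minimal sketch of a back-propagation-style weight update with a momentum term and an adaptive learning rate, shown on a toy one-parameter error surface; the momentum coefficient and the rate-adjustment factors (patterned after common choices such as those in MATLAB's traingdx) are assumptions:

```matlab
W = 2; lr = 0.1; mu = 0.9; dW = 0;     % weight, rate, momentum (assumed)
err_prev = Inf;
for epoch = 1:50
    grad = 2 * W;                      % gradient of the toy error E = W^2
    dW   = mu * dW - lr * grad;        % momentum term smooths the updates
    W    = W + dW;
    err  = W^2;
    if err > 1.04 * err_prev           % error grew: cut the rate back
        lr = 0.7 * lr;
    else                               % error fell: nudge the rate up
        lr = 1.05 * lr;
    end
    err_prev = err;
end
```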

If the pathway or segment of a portion producing a non-trace intermediate is irreversible, no subsequent portion of the overall network feeds back into it. As a rule, this allows the subsequent portion or portions to be studied independently by using the separately synthesized non-trace intermediate as starting material. It also allows the portion yielding the non-trace intermediate to be studied independently. For this purpose, all subsequent intermediates and products are lumped with the intermediate produced by the portion (i.e., the concentrations are added) to obtain the total production of the portion. Alternatively, before analysis, all intermediates are converted to end products, and only these then need to be analyzed for and lumped. [Pg.180]
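A trivial sketch of the lumping step, with hypothetical species and concentrations:

```matlab
% Total production of the portion yielding non-trace intermediate K is
% obtained by adding to K the concentrations of everything formed from it.
cK  = 0.12;                      % intermediate K, mol/L (hypothetical)
cP1 = 0.30; cP2 = 0.05;          % downstream products, mol/L (hypothetical)
cK_total = cK + cP1 + cP2;       % lumped production of the portion
```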

If a pathway or network turns out to be non-simple, a good strategy is to try to break it up into piecewise simple portions that can be studied independently. Whether and how this can be done depends on the reaction at hand. The job is easiest if the portions are irreversible, so that none of them feeds back into a preceding one, and if the non-trace intermediates can be synthesized. [Pg.191]

Fig. 8.16. Structure of a feed-back artificial neural network. I(i) = current, Q(i) = SoC, U(i) = voltage, ΔQ(i) = change of SoC, a = parameter.
The potentiometric recorder does not have a time constant of the form normally associated with the amplifier, which, in general, results from a capacity-resistance network intrinsic to the amplifier circuit. The response of an amplifier to an instantaneously applied constant voltage is normally an exponential function of time, whereas for a potentiometric recorder the response is usually linearly related to time. The linear response results from the feed-back circuitry incorporated in the sensor system, which is necessary for stability. In Figure... [Pg.40]
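As a hedged illustration, assuming a single capacity-resistance stage with time constant RC for the amplifier and a constant pen slewing rate s for the feedback-stabilized recorder: the amplifier step response is V(t) = V0 (1 − e^(−t/RC)), an exponential approach, whereas the recorder pen follows V(t) ≈ s·t, a linear ramp, until it reaches the input level V0.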

Some of the pioneering studies published by several respected authors in the chemometrics field employed Kohonen neural networks (SOMs) to diagnose calibration problems related to the use of AAS spectral lines. As they focused on classifying potential calibration lines, they used SOMs to perform a sort of pattern recognition. Often, SOMs (which were outlined briefly in Section 6.2.2) are best suited for performing classification tasks, whereas error back-propagation feed-forward networks (BPNs) are preferred for calibration purposes. [Pg.399]

The feed-forward network can be trained offline in batch mode, using data or a look-up table, with any of the training algorithms in Back Propagation. The back-propagation algorithm for multilayer networks is a gradient-descent optimization procedure in which minimization of a mean square... [Pg.570]
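A minimal sketch of such a gradient-descent procedure for a single linear layer trained in batch mode on the mean-square error; the data, learning rate and epoch count are hypothetical:

```matlab
X = [0 1 2 3 4; 1 1 1 1 1];     % inputs as columns (second row = bias term)
T = [1 3 5 7 9];                % targets (here T = 2x + 1)
W = zeros(1, 2); lr = 0.2;      % initial weights, learning rate (assumed)
for epoch = 1:500
    Y = W * X;                  % forward pass through the layer
    E = T - Y;                  % output errors
    W = W + lr * (E * X') / size(X, 2);   % step down the MSE gradient
end
mse = mean((T - W * X).^2);     % approaches zero as training proceeds
```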

Affolter and Clerc used the connectivity-based encoded structures as input to a two-layer feed-forward network that was trained by the back-propagation learning algorithm. In these experiments, correlation coefficients of up to 0.8 for the prediction and up to 0.99 for the recall were achieved. Figure 8... [Pg.1304]

In one study, data derived from IR and 13C NMR spectra of a compound, along with its molecular formula, were conjoined into a single vector, which served as input to a two-layer, feed-forward network trained by back-propagation (Figure 3). The simplex method was used to optimize network parameters; the number of hidden units was optimized by experimentation. [Pg.2792]

Much like the US Christmas Bird Counts, the general public is used to produce the data. Practical conservation is aided by giving large numbers of people the opportunity and the tools to monitor, which creates a more extensive monitoring network, raises awareness, provides data that feed back into conservation research (e.g. interpreting trends), and could feed back into refining taxonomy (e.g. if decisions are based on sympatry/allopatry with little distributional data). [Pg.38]

Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) served as the set of molecular descriptors. The task was to model the Corticosteroid Binding Globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]

Now, one may ask: what if we are going to use feed-forward neural networks with the back-propagation learning rule? Then, obviously, SVD can be used as a data-transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
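A minimal sketch of SVD applied to the data matrix before network training; the matrix dimensions and the number of retained components are hypothetical (centering follows the classical PCA convention):

```matlab
X  = randn(31, 12);             % 31 samples (rows) x 12 descriptors
Xc = X - mean(X, 1);            % column-center, as in classical PCA
[U, S, V] = svd(Xc, 'econ');    % thin singular value decomposition
k  = 3;                         % number of retained components (assumed)
Z  = Xc * V(:, 1:k);            % transformed inputs for the network
```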

Figure 20 Feed-forward neural network training and testing results with back-propagation training for solvent activity predictions in polar binaries (with learning parameter η = 0.1).
Since the activity of unit i depends on the activity of all nodes closer to the input, we need to work through the layers one at a time, from input to output. As feed-forward networks contain no loops that feed the output of one node back to a node earlier in the network, there is no ambiguity in doing this. [Pg.34]
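A minimal sketch of this layer-by-layer sweep; the layer sizes and weights are hypothetical:

```matlab
sizes = [3 5 4 1];                         % input, two hidden, output layers
W = cell(1, numel(sizes) - 1); b = W;
for l = 1:numel(W)
    W{l} = randn(sizes(l + 1), sizes(l));  % layer weights (hypothetical)
    b{l} = randn(sizes(l + 1), 1);         % layer biases
end
a = randn(sizes(1), 1);                    % input-layer activities
for l = 1:numel(W)                         % no loops feed activity backwards,
    a = tanh(W{l} * a + b{l});             % so one ordered sweep suffices
end
y = a;                                     % output-layer activity
```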

