Big Chemical Encyclopedia


Feed forward

Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) served as the set of molecular descriptors. The task was to model the corticosteroid-binding globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]

Now, one may ask, what if we are going to use feed-forward neural networks with the back-propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD in the case when it is applied to the data matrix before training any neural network, i.e., Kohonen's self-organizing maps or counter-propagation neural networks. [Pg.217]
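The SVD-as-data-transformation step described above can be sketched as follows. The dataset size (31 compounds, 12 descriptors) and the number of retained components are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Sketch: apply SVD to the (samples x descriptors) data matrix before
# network training, projecting onto the leading singular vectors.
# Data here are synthetic, for illustration only.

rng = np.random.default_rng(0)
X = rng.normal(size=(31, 12))          # e.g. 31 compounds, 12 descriptors
Xc = X - X.mean(axis=0)                # center each descriptor column

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                  # keep the 3 strongest components
scores = Xc @ Vt[:k].T                 # reduced inputs for the network

print(scores.shape)                    # (31, 3)
# With all components kept, the projection loses nothing:
print(np.allclose((Xc @ Vt.T) @ Vt, Xc))
```

The `scores` matrix would then replace `Xc` as the network's input, trading a little variance for far fewer weights to train.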

A fixed-bed reactor for this hydrolysis that uses feed-forward control has been described (11); the reaction, which is first order in both reactants, has also been studied kinetically (12-14). Hydrogen peroxide interacts with acetyl chloride to yield both peroxyacetic acid [79-21-0] and acetyl peroxide... [Pg.81]

An extraction plant should operate at steady state in accordance with the flow-sheet design for the process. However, fluctuation in feed streams can cause changes in product quality unless a sophisticated system of feed-forward control is used (103). Upsets of operation caused by flooding in the column always force shutdowns. Therefore, interface control could be of utmost importance. The plant design should be based on (1) process control (qv) decisions made by trained technical personnel, (2) off-line analysis or limited on-line automatic analysis, and (3) control panels equipped with manual and automatic control for motor speed, flow, interface level, pressure, temperature, etc. [Pg.72]

State-of-the-Art Control. Computer control using feed-forward capability can save 2-20% of a unit's utilities by reducing the margin of safety (5). Unless the discipline of a controller forces the reduction of the safety margin, operators typically opt for increased safety. Operators are probably correct to do so when a proper set of analyzers and controllers has not been provided and maintained. [Pg.85]

A reflux reduction of 15% is typical. Improved control achieves this by permitting a reduction in the margin of safety that the operators use to handle changes in feed conditions. The key element is the addition of feed-forward capability, which automatically handles changes in feed flow and composition. One of the reasons for increased use of features such as feed-forward control is the reduced cost of computers and online analyzers. [Pg.230]
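The feed-forward idea in this passage is that the manipulated variable is computed directly from measured feed disturbances, before they upset the product. A minimal sketch follows; the gains, nominal conditions, and the linear form are all hypothetical, chosen only to show the structure.

```python
# Hedged sketch of feed-forward reflux compensation: the reflux
# setpoint is corrected from measured feed flow and composition,
# with no waiting for a product-quality error to develop.
# All gains and nominal values below are illustrative.

def feedforward_reflux(feed_flow, feed_light_frac,
                       nominal_flow=100.0, nominal_frac=0.50,
                       base_reflux=60.0, k_flow=0.6, k_comp=80.0):
    """Reflux setpoint from measured feed disturbances (pure feed-forward)."""
    return (base_reflux
            + k_flow * (feed_flow - nominal_flow)         # more feed -> more reflux
            + k_comp * (feed_light_frac - nominal_frac))  # lighter feed -> more reflux

# At nominal conditions the correction terms vanish:
print(feedforward_reflux(100.0, 0.50))  # -> 60.0
# A 10% feed-rate increase raises the setpoint immediately:
print(feedforward_reflux(110.0, 0.50))  # -> 66.0
```

In practice such a term is combined with a feedback (composition) controller that trims the residual error the feed-forward model cannot predict.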

Automated controls for flocculating reagents can use a feed-forward mode based on feed turbidity and feed volumetric rate, or a feed-back mode incorporating a streaming current detector on the flocculated feed. Attempts to control coagulant addition on the basis of overflow turbidity generally have been less successful. Control for pH has been accomplished by feed-forward modes on the feed pH and by feed-back modes on the basis of clarifier feedwell or external reaction tank pH. Control loops based on measurement of feedwell pH are useful for control in applications in which flocculated solids are internally recirculated within the clarifier feedwell. [Pg.1689]
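The feed-forward dosing mode above paces reagent addition on the incoming solids load, which is often approximated as volumetric rate times turbidity. A minimal sketch, with a hypothetical plant-calibrated coefficient:

```python
# Feed-forward flocculant dosing sketch: reagent demand proportional
# to the incoming solids load (flow x turbidity). The coefficient k
# is a hypothetical plant-calibrated value, not from the source.

def flocculant_dose_rate(feed_flow_m3h, turbidity_ntu, k=0.02):
    """Reagent demand (kg/h) from feed flow (m3/h) and turbidity (NTU)."""
    return k * feed_flow_m3h * turbidity_ntu

print(flocculant_dose_rate(500.0, 30.0))  # about 300 kg/h at these conditions
```

A feed-back trim on a streaming current detector, as the passage notes, would adjust `k` online rather than rely on a fixed calibration.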

The kinetic study was made in two parts. First, a feed-forward design was executed based on variations in feed conditions. A larger study was made by feed-back design, where conditions were specified at the discharge of the reactor. Details of the two designs can be seen on the tables in Figures 6.3.2 and 6.3.3. [Pg.128]

Figure 6.3.2 shows the feed-forward design, in which acrolein and water were included, since previous studies had indicated some inhibition of the catalytic rates by these two substances. Inert gas pressure was kept as a variable to check for pore diffusion limitations. Since no large diffusional limitation was shown, the inert gas pressure was dropped as an independent variable in the second study of feed-back design and replaced by total pressure. Because of the extreme urgency of this project, later tests were recommended for the smaller diffusional effects. [Pg.128]

Timoshenko et al. (1967) recommended running a set of experiments in a CSTR on feed composition (now called a feed-forward study), and then statistically correlating the discharge concentrations and rates with feed conditions by second-order polynomials. In the second stage, mathematical experiments are executed on the previous empirical correlation to find the form and constants for the rate expressions. An example is presented for the dehydrogenation of butane. [Pg.142]
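The first-stage correlation that Timoshenko et al. describe, fitting discharge rates to feed conditions with a second-order polynomial, can be sketched as an ordinary least-squares fit. The variables and data below are synthetic, chosen only to show the mechanics.

```python
import numpy as np

# Sketch of the two-stage approach: (1) fit a second-order polynomial
# of discharge rate vs. feed conditions; (2) run "mathematical
# experiments" on the fitted surface. Synthetic, noise-free data.

rng = np.random.default_rng(0)
x1 = rng.uniform(0.1, 1.0, 30)   # e.g. feed concentration (arbitrary units)
x2 = rng.uniform(500, 600, 30)   # e.g. feed temperature (K)
# Quadratic "true" surface used to generate the synthetic rates:
rate = 2.0 + 3.0*x1 - 0.004*x2 + 1.5*x1**2 + 0.002*x1*x2

# Design matrix for a full second-order polynomial in x1 and x2:
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(A, rate, rcond=None)

# Stage 2 starts from the fitted surface; with noise-free data the
# fit reproduces the observations essentially exactly:
pred = A @ coef
print(np.allclose(pred, rate, atol=1e-6))
```

With real CSTR data the residuals would not vanish, and the fitted surface would only be trusted inside the range of feed conditions actually run.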

Load: during the back flow the shaft torque is reduced, then restored with feed forward, giving a torsional pulse with each cycle. [Pg.363]

P is a vector of inputs and T a vector of target (desired) values. The command newff creates the feed-forward network and defines the activation functions and the training method. The default is Levenberg-Marquardt back-propagation training since it is fast, but it does require a lot of memory. The train command trains the network, and in this case, the network is trained for 50 epochs. The results before and after training are plotted. [Pg.423]
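A rough Python analogue of the MATLAB newff/train workflow above can be written with SciPy, whose `least_squares(..., method="lm")` provides Levenberg-Marquardt on a residual vector. The 1-5-1 architecture and the sine target are illustrative assumptions, not from the source.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: fit the weights of a small 1-5-1 feed-forward network with
# Levenberg-Marquardt. LM works on the residual vector (Y - T), not
# the mean squared error directly.

P = np.linspace(-1, 1, 21)      # inputs
T = np.sin(np.pi * P)           # targets

def residuals(theta):
    W1, b1 = theta[0:5], theta[5:10]
    W2, b2 = theta[10:15], theta[15]
    H = np.tanh(np.outer(W1, P) + b1[:, None])   # hidden layer, tanh
    Y = W2 @ H + b2                              # linear output layer
    return Y - T

rng = np.random.default_rng(0)
theta0 = rng.normal(0, 0.5, 16)                  # 16 weights and biases
fit = least_squares(residuals, theta0, method="lm")
print(np.mean(fit.fun ** 2) < np.mean(residuals(theta0) ** 2))  # error reduced
```

This mirrors why the text calls LM fast but memory-hungry: each step solves a linear system built from the full Jacobian of residuals with respect to all weights.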

Appropriate design features may include feed-forward temperature control, high temperature alarms, high-temperature cutouts to stop feed flow and open a vent to atmospheric or closed system, adequate temperature monitoring through catalyst beds, etc. [Pg.145]

Feed-forward (compensating) of air or water temperature from outdoor temperature or for input or extract systems... [Pg.776]

In buildings that are divided into zones with a central heating system, it is common to change the water temperature depending on the outdoor temperature. In this example a function called feed-forward or compensating is used. Figure 9.55 shows how the water temperature changes as a function of the outdoor temperature. [Pg.779]
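The compensating function above can be sketched as a simple heating curve: supply water temperature falls linearly as outdoor temperature rises. The endpoint values below are hypothetical, not read from Fig. 9.55.

```python
# Feed-forward (compensating) heating curve sketch: no room-temperature
# feedback, only the measured outdoor temperature. Curve endpoints are
# illustrative assumptions.

def supply_water_temp(outdoor_c, t_out_min=-20.0, t_out_max=20.0,
                      t_sup_max=70.0, t_sup_min=30.0):
    """Supply water temperature (deg C) from outdoor temperature (deg C)."""
    outdoor_c = max(t_out_min, min(t_out_max, outdoor_c))  # clamp to curve range
    frac = (outdoor_c - t_out_min) / (t_out_max - t_out_min)
    return t_sup_max + frac * (t_sup_min - t_sup_max)

print(supply_water_temp(-20.0))  # coldest day -> hottest water, 70.0
print(supply_water_temp(20.0))   # mild day    -> coolest water, 30.0
print(supply_water_temp(0.0))    # midpoint    -> 50.0
```

Real curves are often piecewise or slightly convex, but the feed-forward principle is the same: the disturbance (weather) is measured and compensated before the zones cool down.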

The structure of a neural network forms the basis for information storage and governs the learning process. The type of neural network used in this work is known as a feed-forward network: the information flows only in the forward direction, i.e., from input to output in the testing mode. A general structure of a feed-forward network is shown in Fig. 1. Connections are made be-... [Pg.2]

Figure 1 A general structure of a feed-forward neural network.
The neurons in both the hidden and output layers perform summing and nonlinear mapping functions. The functions carried out by each neuron are illustrated in Fig. 2. Each neuron occupies a particular position in a feed-forward network and accepts inputs only from the neurons in the preceding layer and sends its outputs to other neurons in the succeeding layer. The inputs from other nodes are first weighted and then summed. This summing of the weighted inputs is carried out by a processor within the neuron. The sum that is obtained is called the activation of the neuron. Each activated neu-... [Pg.3]
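The per-neuron computation just described, weight the inputs, sum them to form the activation, then apply a nonlinear mapping, can be sketched in a few lines. The sigmoid is assumed here as the nonlinear function; the text does not name a specific one.

```python
import math

# Minimal sketch of one feed-forward neuron: weighted sum of inputs
# ("activation"), then a nonlinear squashing function. Sigmoid is an
# assumption; Fig. 2 in the source may use a different mapping.

def neuron_output(inputs, weights, bias=0.0):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid mapping

print(neuron_output([1.0, 0.5], [0.0, 0.0]))  # zero weights -> 0.5
```

A whole layer is just this function applied once per neuron, each with its own weight vector, over the outputs of the preceding layer.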

Neural networks can be broadly classified based on their network architecture as feed-forward and feed-back networks, as shown in Fig. 3. In brief, if a neuron's output is never dependent on the output of the subsequent neurons, the network is said to be feed forward. Input signals go only one way, and the outputs depend only on the signals coming in from other neurons. Thus, there are no loops in the system. When dealing with the various types of ANNs, two primary aspects, namely, the architecture and the types of computations to be per-... [Pg.4]

Figure 3 Feed-back and feed-forward artificial neural networks.
The second main category of neural networks is the feed-forward type. In this type of network, the signals go in only one direction; there are no loops in the system, as shown in Fig. 3. The earliest neural network models were linear feed forward. In 1972, two simultaneous articles independently proposed the same model for an associative memory, the linear associator. J. A. Anderson [17], a neurophysiologist, and Teuvo Kohonen [18], an electrical engineer, were unaware of each other's work. Today, the most commonly used neural networks are nonlinear feed-forward models. [Pg.4]

Current feed-forward network architectures work better than the current feed-back architectures for a number of reasons. First, the capacity of feed-back networks is unimpressive. Second, in the running mode, feed-forward models are faster, since they need to make only one pass through the system to find a solution. In contrast, feed-back networks must cycle repetitively until... [Pg.4]

Figure 20 Feed-forward neural network training and testing results with back-propagation training for solvent activity predictions in polar binaries (with learning parameter η = 0.1).
Monitor ammonium chloride in the overhead water; keep ammonium sulfide <5,000 ppm. Use steam condensate as water wash at a rate of 1-2 gpm/1000 bbl of fresh feed. Use ammonium polysulfide solution (especially if HCN > 25 ppm) to 10-20 ppm residual HCN. Make sure the wash water is injected uniformly into the gas stream. Use a feed-forward water wash scheme instead of reverse cascade... [Pg.262]

We are now ready to introduce the backpropagation learning rule (also called the generalized delta rule) for multilayered perceptrons, credited to Rumelhart and McClelland [rumel86a]. Figure 10.12 shows a schematic of the multi-layered perceptron's structure. Notice that the design shown, and the only kind we will consider in this chapter, is strictly feed-forward. That is to say, information always flows from the input layer to each hidden layer, in turn, and out into the output layer. There are no feedback loops anywhere in the system. [Pg.540]
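A compact sketch of the generalized delta rule on a strictly feed-forward two-layer perceptron follows. The network size, data, learning rate, and number of epochs are illustrative; the gradient expressions are the standard squared-error derivation for a tanh hidden layer and linear output.

```python
import numpy as np

# Generalized delta rule sketch: forward pass, then propagate the
# output error backward layer by layer and descend the gradient.
# Architecture (1-5-1) and hyperparameters are illustrative.

rng = np.random.default_rng(1)
P = np.linspace(-1, 1, 21).reshape(1, -1)   # inputs, shape (1, 21)
T = np.sin(np.pi * P)                        # targets

W1 = rng.normal(0, 0.5, (5, 1)); b1 = np.zeros((5, 1))
W2 = rng.normal(0, 0.5, (1, 5)); b2 = np.zeros((1, 1))
lr = 0.05

def forward(P):
    H = np.tanh(W1 @ P + b1)        # hidden layer, tanh activation
    return H, W2 @ H + b2           # linear output layer

_, Y0 = forward(P)
err0 = np.mean((Y0 - T) ** 2)       # error before training

for epoch in range(50):
    H, Y = forward(P)
    dY = 2 * (Y - T) / P.shape[1]               # dMSE/dY at the output
    dW2 = dY @ H.T; db2 = dY.sum(1, keepdims=True)
    dH = (W2.T @ dY) * (1 - H**2)               # back-propagate through tanh
    dW1 = dH @ P.T; db1 = dH.sum(1, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1              # gradient-descent updates
    W2 -= lr * dW2; b2 -= lr * db2

_, Y1 = forward(P)
print(np.mean((Y1 - T) ** 2) < err0)  # error drops after training
```

Note the strictly feed-forward structure the text insists on: the backward pass reuses only the forward-pass quantities, and no neuron's output ever feeds an earlier layer.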

Fig. 10.13 Schematic representation and notation for a three-layer feed-forward multi-layer perceptron; see text.
Fundamentally, all feed-forward backpropagating nets follow the same five basic steps of a model development cycle ... [Pg.546]

In this short initial communication we wish to describe a general-purpose continuous-flow stirred-tank reactor (CSTR) system which incorporates a digital computer for supervisory control purposes and which has been constructed for use with radical and other polymerization processes. The performance of the system has been tested by attempting to control the MWD of the product from free-radically initiated solution polymerizations of methyl methacrylate (MMA) using oscillatory feed-forward control strategies for the reagent feeds. This reaction has been selected for study because of the ease of experimentation which it affords and because the theoretical aspects of the control of MWD in radical polymerizations have attracted much attention in the scientific literature. [Pg.253]

This has been done illustrating a feed-forward process [3]. Another application of these multistep reactions is the study of metabolic networks. Kier and colleagues have reported on such an example [4],... [Pg.143]

The basic component of the neural network is the neuron, a simple mathematical processing unit that takes one or more inputs and produces an output. For each neuron, every input has an associated weight that defines its relative importance, and the neuron simply computes the weighted sum of all the inputs and calculates an output. This is then modified by means of a transformation function (sometimes called a transfer or activation function) before being forwarded to another neuron. This simple processing unit is known as a perceptron, a feed-forward system in which the transfer of data is in the forward direction, from inputs to outputs, only. [Pg.688]

Many different types of networks have been developed. They all consist of small units, neurons, that are interconnected. The local behaviour of these units determines the overall behaviour of the network. The most common is the multi-layer feed-forward (MLF) network. Recently, other networks such as the Kohonen, radial basis function and ART networks have raised interest in the chemical application area. In this chapter we focus on the MLF networks. The principles of some of the other networks are explained, and we also discuss how these networks relate to other algorithms described elsewhere in this book. [Pg.649]


See other pages where Feed forward is mentioned: [Pg.65]    [Pg.225]    [Pg.129]    [Pg.256]    [Pg.344]    [Pg.166]    [Pg.4]    [Pg.4]    [Pg.5]    [Pg.21]    [Pg.226]    [Pg.911]    [Pg.264]    [Pg.129]   
See also in source #XX -- [Pg.21, Pg.26, Pg.29, Pg.34, Pg.41, Pg.104]







Adaptive feed forward controller

Advanced control system feed forward

Control combined feed-forward/feedback

Control system feed-forward

Evaporation forward feed

Evaporation forward-feed multiple-effect

Evaporators backward and forward feed

Feed forward back propagation

Feed-forward activator

Feed-forward and ratio control

Feed-forward control

Feed-forward control Subject

Feed-forward control strategy

Feed-forward control strategy simulation results of set vs achieved

Feed-forward converter

Feed-forward inhibition

Feed-forward mechanisms

Feed-forward network architectures

Feed-forward network, artificial neural

Feed-forward networks

Feed-forward neural nets

Feed-forward neural network

Feed-forward stimulation

Forward

Forwarder

Multilayer feed forward (MLF) networks

Multilayer feed-forward network

Multiple-effect evaporators Forward feed

Neural feed-forward

Neural multi-layer-feed-forward network

Neural networks feed-forward computational

Process control feed-forward

Simple Feed-Forward Network Example

Three-layer forward-feed neural

Three-layer forward-feed neural network

© 2024 chempedia.info