
Neural nets

Neural networks have been proposed as an alternative way to generate quantitative structure-activity relationships [Andrea and Kalayeh 1991]. A commonly used type of neural net contains layers of units with connections between all pairs of units in adjacent layers (Figure 12.38). Each unit is in a state represented by a real value between 0 and 1. The state of a unit is determined by the states of the units in the previous layer to which it is connected and the strengths of the weights on these connections. A neural net must first be trained to perform the desired task. To do this, the network is presented with a... [Pg.719]
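To make the forward pass just described concrete, here is a minimal sketch (layer sizes, weights, and the sigmoid squashing function are illustrative assumptions, not taken from the source) of how each unit's state, a real value between 0 and 1, is computed from the states of the previous layer and the weights on the connections:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(states, weights, biases):
    """Propagate unit states layer by layer.

    weights[l] connects all units of layer l to all units of layer l+1;
    each unit's state in (0, 1) is determined by the previous layer's
    states and the strengths of the connection weights.
    """
    for W, b in zip(weights, biases):
        states = sigmoid(W @ states + b)
    return states

# Illustrative 3-layer net: 4 inputs -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(1, 3))]
biases = [np.zeros(3), np.zeros(1)]
print(forward(np.array([0.2, 0.8, 0.5, 0.1]), weights, biases))
```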


Since biological systems can reasonably cope with some of these problems, the intuition behind neural nets is that computing systems based on the architecture of the brain can better emulate human cognitive behavior than systems based on symbol manipulation. Unfortunately, the processing characteristics of the brain are as yet incompletely understood. Consequently, computational systems based on brain architecture are highly simplified models of their biological analogues. To make this distinction clear, neural nets are often referred to as artificial neural networks. [Pg.539]

Neural net architectures come in many flavors, differing in the functions used in the nodes, the number of nodes and layers, their connectivity, and... [Pg.539]

Neural nets can also be used for modeling physical systems whose behavior is poorly understood, as an alternative to nonlinear statistical techniques, eg, to develop empirical relationships between independent and dependent variables using large amounts of raw data. [Pg.540]
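As a hedged illustration of this use, the sketch below fits a small feed-forward net to noisy samples of an unknown nonlinear relationship between an independent and a dependent variable; scikit-learn's MLPRegressor is a modern stand-in chosen for brevity, not the tooling of the original source:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Raw data from a poorly understood system (here, a synthetic stand-in)
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(500, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=500)

# Empirical relationship between independent (X) and dependent (y) variables
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[1.0]]))  # compare against sin(2.0) ~ 0.909
```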

In the chemical engineering domain, neural nets have been applied to a variety of problems. Examples include diagnosis (66,67), process modeling (68,69), process control (70,71), and data interpretation (72,73). Industrial application areas include distillation column operation (74), fluidized-bed combustion (75), petroleum refining (76), and composites manufacture (77). [Pg.540]

A key feature of MPC is that a dynamic model of the process is used to predict future values of the controlled outputs. There is considerable flexibility concerning the choice of the dynamic model. For example, a physical model based on first principles (e.g., mass and energy balances) or an empirical model could be selected. Also, the empirical model could be a linear model (e.g., transfer function, step response model, or state space model) or a nonlinear model (e.g., neural net model). However, most industrial applications of MPC have relied on linear empirical models, which may include simple nonlinear transformations of process variables. [Pg.740]
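To make one of the linear empirical model forms concrete, here is a minimal sketch of output prediction from a step-response model (the coefficients, horizon, and first-order response are illustrative, not from the source):

```python
import numpy as np

def predict_step_response(S, du, y0=0.0):
    """Predict future outputs from a step-response model.

    y(k) = y0 + sum_i S[i] * du[k - i]: the convolution of step-response
    coefficients S with the sequence of input moves du (delta-u).
    """
    horizon = len(du)
    y = np.full(horizon, y0)
    for k in range(horizon):
        for i in range(k + 1):
            y[k] += S[i] * du[k - i]
    return y

# Illustrative first-order step response and a single unit input move
S = 1.0 - np.exp(-0.5 * np.arange(1, 11))  # step-response coefficients
du = np.zeros(10)
du[0] = 1.0                                # one step change at k = 0
print(predict_step_response(S, du))        # output rises toward gain 1.0
```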

P. Stolorz, A. Lapedes, Y. Xia. Predicting protein secondary structure using neural net and statistical methods. J. Mol. Biol. 225:363-377, 1992. [Pg.348]

Widrow, B. (1987) The Original Adaptive Neural Net Broom-Balancer. In Proc. IEEE Int. Symp. Circuits and Systems, pp. 351-357. [Pg.432]

R. Keshavaraj, R. W. Tock, R. S. Narayan, and R. A. Bartsch, Fluid Property Prediction of Siloxanes with the Aid of Artificial Neural Nets, Polymer-Plastics Technology and Engineering, 35(6):971-982 (1996). [Pg.32]

Table 10.1 lists some important developments in neural net research. This list is by no means exhaustive and is intended only to highlight some of the key events. [Pg.509]

The human brain is a neural net consisting of about ten billion interconnected neurons. Figure 10.1-a shows a schematic representation of a single neuron. At the risk of grossly oversimplifying the brain's enormously complex physiology, we will only focus on a relatively few functional parts of a single neuron. [Pg.510]

Now, to be sure, McCulloch-Pitts neurons are unrealistically simplified versions of the real thing. For example, the assumption that neuronal firing occurs synchronously throughout the net at well-defined discrete points in time is simply wrong. The tacit assumption that the structure of a neural net (i.e., its connectivity, as defined by the set of synaptic weights) remains constant over time is known to be false as well. Moreover, real neurons are not the simple threshold devices the McCulloch-Pitts model assumes them to be; the output of a real neuron depends on its weighted input in a nonlinear but continuous manner. Despite their conceptual drawbacks, however, McCulloch-Pitts neurons are nontrivial devices. McCulloch and Pitts were able to show that for a suitably chosen set of synaptic weights w_ij, a synchronous net of their model neurons is capable of universal computation. This means that, in principle, McCulloch-Pitts nets possess the same raw computational power as a conventional computer (see section 6.4). [Pg.511]
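A small sketch makes the point about suitably chosen weights concrete: single McCulloch-Pitts threshold units realize the basic logic gates, from which any Boolean circuit, and hence a general computer, can be assembled (the particular weights and thresholds below are one conventional choice, not taken from the source):

```python
import numpy as np

def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted input sum
    reaches the threshold; otherwise stay quiescent (0)."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

# Suitably chosen weights and thresholds give the basic logic gates
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```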

To see what we mean by linear separability, we consider a simple problem that is not linearly separable: the so-called XOR problem, to teach a neural net the exclusive-OR function (Table 10.2). [Pg.515]
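No single threshold unit can compute XOR, since no line separates {(0,1), (1,0)} from {(0,0), (1,1)} in the input plane; one hidden layer removes the obstacle. The sketch below (weights again a conventional choice, not from the source) composes three threshold units into a two-layer net that reproduces the exclusive-OR table:

```python
def mp_neuron(inputs, weights, threshold):
    """Threshold unit, as in the previous sketch."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor_net(a, b):
    h_or  = mp_neuron([a, b], [1, 1], threshold=1)   # a OR b
    h_and = mp_neuron([a, b], [1, 1], threshold=2)   # a AND b
    # Fire iff OR is on and AND is off -- not expressible in one layer
    return mp_neuron([h_or, h_and], [1, -1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # reproduces the exclusive-OR table
```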

While, as mentioned at the close of the last section, it took more than 15 years following Minsky and Papert's criticism of simple perceptrons for a bona-fide multilayered variant to finally emerge (see Multi-layered Perceptrons below), the man most responsible for bringing respectability back to neural net research was the physicist John J. Hopfield, with the publication of his landmark 1982 paper entitled "Neural networks and physical systems with emergent collective computational abilities" [hopf82]. To set the stage for our discussion of Hopfield nets, we first pause to introduce the notion of associative memory. [Pg.518]

Hopfield's neural net model addressed the basic associative memory problem [hopf82]: Given some set of patterns V_i, construct a neural net such that when it is presented with an arbitrary pattern V, not necessarily an element of the given set, it responds with a pattern from the given set that most closely "resembles" the presented pattern. [Pg.518]

Fig. 10.4 Basins of attraction in the partitioned phase space of a Hopfield neural net.
Hopfield's model consists of a fully-connected, symmetrically-weighted (w_ij = w_ji) McCulloch-Pitts neural net where the value of each neuron is updated according to ... [Pg.520]
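The update rule itself is elided in this excerpt; in the standard discrete Hopfield model (stated here from the general literature, not recovered from the source's equation) it reads

$$ s_i \;\leftarrow\; \operatorname{sgn}\Big(\sum_j w_{ij}\, s_j\Big), \qquad s_i \in \{-1, +1\}, $$

with the neurons updated asynchronously, one at a time.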

To see that this is a reasonable approach to take, we look more closely at equation 10.9. It is easy to show that the energy function is in fact a Lyapunov function. In particular, as the neural net evolves according to the dynamics specified by equation 10.7, the energy E itself either remains constant or decreases. The attractors of the system therefore reside at the local minima of the energy surface. [Pg.521]
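A minimal sketch illustrates the Lyapunov property numerically; the Hebbian weight choice and the pattern size are assumptions made for illustration, since the source's equations 10.7 and 10.9 are not reproduced in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(2)

# Store one illustrative pattern with Hebbian (symmetric, w_ii = 0) weights
pattern = rng.choice([-1, 1], size=16)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def energy(W, s):
    return -0.5 * s @ W @ s

# Start from a corrupted version of the stored pattern
s = pattern.copy()
s[:5] *= -1

# Asynchronous updates: the energy either stays constant or decreases,
# so the state slides into a basin of attraction (a local energy minimum)
prev = energy(W, s)
for _ in range(100):
    i = rng.integers(len(s))
    s[i] = 1 if W[i] @ s >= 0 else -1
    e = energy(W, s)
    assert e <= prev + 1e-12, "energy must be non-increasing"
    prev = e

print("recovered stored pattern:", np.array_equal(s, pattern))
```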

Pseudo-Code Implementation The back-propagation algorithm outlined above may be implemented by following seven steps, to be applied for each pattern p. Assume we have a neural net with L layers (l = 1, 2, ..., L). Let h_i^l represent the output of the i-th neuron in the l-th layer; h_i^1 is therefore equal to the input, σ_i. The weight of the connection between h_i^(l-1) and h_j^l will be labeled w_ij. [Pg.544]
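As a hedged rendering of the standard algorithm the text outlines (sigmoid units, squared-error loss, and the learning rate are assumptions not confirmed by this excerpt, and the source's seven-step listing is not reproduced here), a compact NumPy version of one update per pattern p looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(weights, biases, x, target, lr=0.5):
    """One back-propagation update for a single training pattern p."""
    # Forward pass: h[l] holds the outputs of layer l (h[0] is the input)
    h = [x]
    for W, b in zip(weights, biases):
        h.append(sigmoid(W @ h[-1] + b))

    # Output-layer delta for squared-error loss with sigmoid units
    delta = (h[-1] - target) * h[-1] * (1.0 - h[-1])

    # Backward pass: propagate deltas, then apply the gradient updates
    for l in range(len(weights) - 1, -1, -1):
        grad_W = np.outer(delta, h[l])
        grad_b = delta
        delta = (weights[l].T @ delta) * h[l] * (1.0 - h[l])
        weights[l] -= lr * grad_W
        biases[l] -= lr * grad_b

# Illustrative use: a 2-3-1 net trained on the exclusive-OR patterns
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.5, size=(3, 2)), rng.normal(scale=0.5, size=(1, 3))]
biases = [np.zeros(3), np.zeros(1)]
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
for _ in range(5000):
    for x, t in data:
        backprop_step(weights, biases, np.array(x, float), np.array(t, float))
for x, _ in data:
    h = np.array(x, float)
    for W, b in zip(weights, biases):
        h = sigmoid(W @ h + b)
    print(x, h.round(2))
```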


Feed-forward neural nets

Kohonen neural nets

Neural net methods

Use of Neural Net Computing in Statistical Modelling
