
Layers and networks

Considerable effort has been spent to explain the effect of reinforcement of elastomers by active fillers. Apparently, several factors contribute to the property improvements for filled elastomers, such as elastomer-filler and filler-filler interactions, aggregation of filler particles, a network structure composed of different types of junctions, an increase of the intrinsic chain deformation in the elastomer matrix compared with the macroscopic strain, and some other factors [39-44]. The author does not pretend to provide a comprehensive explanation of the effect of reinforcement. One way of looking at the reinforcement phenomenon is given below. An attempt is made to find qualitative relations between some mechanical properties of filled PDMS on the one hand and properties of the host matrix, i.e., chain dynamics in the adsorption layer and network structure in the elastomer phase outside the adsorption layer, on the other hand. The influence of filler-filler interactions is also of importance for the improvement of the mechanical properties of silicone rubbers (especially at low deformation), but is not included in the present paper. [Pg.804]

Seamless interoperability (roaming) of services between platforms, layers and networks... [Pg.46]

The local dynamics of the systems considered thus far has been either steady or oscillatory. However, we may consider reaction-diffusion media where the local reaction rates give rise to chaotic temporal behaviour of the sort discussed earlier. Diffusional coupling of such local chaotic elements can lead to new types of spatio-temporal periodic and chaotic states. It is possible to find phase-synchronized states in such systems where the amplitude varies chaotically from site to site in the medium whilst a suitably defined phase is synchronized throughout the medium [51]. Such phase synchronization may play a role in layered neural networks and perceptive processes in mammals. Somewhat surprisingly, even when the local dynamics is chaotic, the system may support spiral waves... [Pg.3067]

Figure 9-16. Artificial neural network architecture with a two-layer design, comprising input units, a so-called hidden layer, and an output layer. The squares enclosing the ones depict the bias, which is an extra weight (see Ref. [10] for further details).
A back-propagation network usually consists of input units, one or more hidden layers and one output layer. Figure 9-16 gives an example of the architecture. [Pg.462]

The feedforward network shown in Figure 10.22 consists of a three-neuron input layer, a two-neuron output layer and a four-neuron intermediate layer, called a hidden layer. Note that all neurons in a particular layer are fully connected to all neurons in the subsequent layer. This is generally called a fully connected multilayer network, and there is no restriction on the number of neurons in each layer, and no restriction on the number of hidden layers. [Pg.349]
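As an illustration only (not taken from the cited source), a minimal numpy sketch of a forward pass through such a fully connected feedforward network might look as follows; the 3-4-2 layer sizes match the description above, while the random weights and the sigmoid transfer function are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    """Logistic transfer function, applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Fully connected feedforward network: 3 input, 4 hidden, 2 output neurons.
W_hidden = rng.normal(size=(3, 4))   # weights, input layer -> hidden layer
W_output = rng.normal(size=(4, 2))   # weights, hidden layer -> output layer

x = np.array([0.2, -0.5, 1.0])       # one input pattern (three values)
h = sigmoid(x @ W_hidden)            # four hidden-layer activations
y = sigmoid(h @ W_output)            # two output values
print(y)
```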

Structural classifications of oxides recognize discrete molecular species and structures which are polymeric in one or more dimensions, leading to chains, layers, and ultimately, to three-dimensional networks. Some typical examples are given in Table 14.14; structural details are given elsewhere under each individual element. The type of structure adopted in any particular case depends (obviously) not only on the... [Pg.641]

The neurons in both the hidden and output layers perform summing and nonlinear mapping functions. The functions carried out by each neuron are illustrated in Fig. 2. Each neuron occupies a particular position in a feed-forward network and accepts inputs only from the neurons in the preceding layer and sends its outputs to other neurons in the succeeding layer. The inputs from other nodes are first weighted and then summed. This summing of the weighted inputs is carried out by a processor within the neuron. The sum that is obtained is called the activation of the neuron. Each activated neu-... [Pg.3]
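As a hedged illustration of that per-neuron computation (not from the source text), the weighted sum of the inputs followed by a nonlinear mapping can be written in a few lines of numpy; the sigmoid nonlinearity and the example numbers are assumptions.

```python
import numpy as np

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of the incoming signals (the neuron's activation),
    followed by a nonlinear mapping (a sigmoid is assumed here)."""
    activation = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-activation))

# Outputs of three neurons in the preceding layer and this neuron's weights.
print(neuron_output(np.array([0.1, 0.7, -0.3]), np.array([0.5, -1.2, 0.8])))
```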

Attack by alkali solution, hydrofluoric acid and phosphoric acid: a common feature of these corrosive agents is their ability to disrupt the network. Equation 18.1 shows the nature of the attack in alkaline solution, where unlimited numbers of OH⁻ ions are available. This process is not encumbered by the formation of porous layers, and the amount of leached matter is linearly dependent on time. Consequently the extent of attack by strong alkali is usually far greater than either acid or water attack. [Pg.880]

Kolmogorov's Theorem (reformulated by Hecht-Nielsen): Any real-valued continuous function f defined on an N-dimensional cube can be implemented by a three-layered neural network consisting of 2N + 1 neurons in the hidden layer, with transfer functions from the input to the hidden layer and φ from all of... [Pg.549]
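For reference, the underlying Kolmogorov superposition theorem is commonly quoted in the following form (this is the standard statement from the literature, not taken from the excerpt above):

\[
f(x_1, \dots, x_N) \;=\; \sum_{j=1}^{2N+1} \Phi_j\!\left( \sum_{i=1}^{N} \psi_{ij}(x_i) \right),
\]

with suitable continuous one-variable functions \(\psi_{ij}\) and \(\Phi_j\); the outer sum over \(2N+1\) terms is what Hecht-Nielsen identified with the \(2N+1\) hidden neurons of a three-layered network.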

A glance at the structure of graphite, illustrated in Fig. 1, reveals the presence of voids between the planar, sp²-hybridized, carbon sheets. Intercalation is the insertion of ions, atoms, or molecules into this space without the destruction of the host's layered, bonding network. Stacking order, bond distances, and, possibly, bond direction may be altered, but the characteristic, lamellar identity of the host must in some sense be preserved. [Pg.282]

Hartman, E., Keeler, K., and Kowalski, J. K., Layered neural networks with Gaussian hidden units as universal approximators. Neural Comput. 2, 210 (1990). [Pg.204]

Hornik, K., Stinchcombe, M., and White, H., Multilayer feedforward networks are universal approximators. Neural Networks 2, 359 (1989). [Pg.204]

The general structure is shown in Fig. 44.9. The units are ordered in layers. There are three types of layers: the input layer, the output layer and the hidden layer(s). All units from one layer are connected to all units of the following layer. The network receives the input signals through the input layer. Information is then passed to the hidden layer(s) and finally to the output layer, which produces the response of the network. There may be zero, one or more hidden layers. Networks with one hidden layer make up the vast majority of the networks. The number of units in the input layer is determined by p, the number of variables in the (n×p) matrix X. The number of units in the output layer is determined by q, the number of variables in the (n×q) matrix Y, the solution pattern. [Pg.662]
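A small sketch (an illustration, not the reference's own code) of how the input- and output-layer sizes follow from the data matrices X and Y; the single hidden layer of five units, the random initialization, and the example data are assumptions.

```python
import numpy as np

def init_layers(X, Y, hidden=(5,)):
    """Build weight matrices for a fully connected network whose input and
    output layer sizes are taken from the data matrices X (n x p) and Y (n x q)."""
    p, q = X.shape[1], Y.shape[1]
    sizes = [p, *hidden, q]               # e.g. p -> 5 -> q
    rng = np.random.default_rng(0)
    return [rng.normal(size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]

X = np.random.rand(100, 8)                # n = 100 objects, p = 8 variables
Y = np.random.rand(100, 2)                # q = 2 response variables
weights = init_layers(X, Y, hidden=(5,))
print([W.shape for W in weights])         # [(8, 5), (5, 2)]
```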

Just as in the perceptron-like networks, an additional column of ones is added to the X matrix to accommodate the offset or bias. This is sometimes explicitly depicted in the structure (see Fig. 44.9b). Notice that an offset term is also provided between the hidden layer and the output layer. [Pg.663]
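A hedged numpy illustration of that bias trick (not from the source): a column of ones is appended so the offset becomes just another weight, both at the input layer and between the hidden and output layers. The tanh transfer function, linear output layer, random weights and matrix sizes are assumptions for the example.

```python
import numpy as np

def add_bias_column(A):
    """Append a column of ones so the offset (bias) is learned as an extra weight."""
    return np.hstack([A, np.ones((A.shape[0], 1))])

X = np.random.rand(100, 8)                # n x p data matrix
Xb = add_bias_column(X)                   # n x (p + 1)

H = np.tanh(Xb @ np.random.rand(9, 5))    # hidden-layer outputs (tanh assumed)
Hb = add_bias_column(H)                   # offset between hidden and output layer
Yhat = Hb @ np.random.rand(6, 2)          # linear output layer (assumed)
print(Xb.shape, Hb.shape, Yhat.shape)
```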

Radial basis function networks (RBF) are a variant of three-layer feed-forward networks (see Fig. 44.18). They contain a pass-through input layer, a hidden layer and an output layer, but a different approach for modelling the data is used. The transfer function in the hidden layer of RBF networks is called the kernel or basis function. For a detailed description the reader is referred to references [62,63]. Each node in the hidden layer thus contains such a kernel function. The main difference between the transfer function in MLF and the kernel function in RBF is that the latter (usually a Gaussian function) defines an ellipsoid in the input space. Whereas the MLF network basically divides the input space into regions via hyperplanes (see e.g. Figs. 44.12c and d), RBF networks divide the input space into hyperspheres by means of the kernel function with specified widths and centres. This can be compared with the density or potential methods in pattern recognition (see Section 33.2.5). [Pg.681]
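A minimal sketch of such a Gaussian kernel node (illustrative only; the centre, width and input values are assumptions), showing how each hidden unit responds strongly only inside a hypersphere around its centre:

```python
import numpy as np

def gaussian_kernel(x, centre, width):
    """Gaussian basis function: the response decays with distance from the centre,
    so the node is effectively active inside a hypersphere of the given width."""
    return np.exp(-np.sum((x - centre) ** 2) / (2.0 * width ** 2))

centre = np.array([0.5, 0.5])
print(gaussian_kernel(np.array([0.5, 0.6]), centre, width=0.2))  # near centre -> close to 1
print(gaussian_kernel(np.array([2.0, 2.0]), centre, width=0.2))  # far away -> close to 0
```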

Of the several approaches that draw upon this general description, radial basis function networks (RBFNs) (Leonard and Kramer, 1991) are probably the best-known. RBFNs are similar in architecture to back propagation networks (BPNs) in that they consist of an input layer, a single hidden layer, and an output layer. The hidden layer makes use of Gaussian basis functions that result in inputs projected on a hypersphere instead of a hyperplane. RBFNs therefore generate spherical clusters in the input data space, as illustrated in Fig. 12. These clusters are generally referred to as receptive fields. [Pg.29]


