
Autoassociative networks

Kramer, M. A., Nonlinear principal component analysis using autoassociative neural networks. AIChE J. 37, 233-243 (1991). [Pg.268]

All of the studies above used back-propagation multilayer perceptrons, but many other varieties of neural network have been applied to PyMS data. These include minimal neural networks,117-119 radial basis functions,114,120 self-organizing feature maps,110,121 and autoassociative neural networks.122,123 [Pg.332]

Kramer, M. A. Autoassociative neural networks. Comput. Chem. Eng. 1992, 16, 313-328. [Pg.341]

Practically all OH groups are involved in H bonding in a bulk polymer,19) since there are many proton acceptors available in epoxy-aromatic amine networks. At room temperature the concentration of free OH groups in such networks is below 1-2%. The formation enthalpies of the different H bonds in the networks, measured from the shift of the v(OH) vibrations in IR spectra, are shown in Table 4. It is seen that the largest part of all H bonds (~90% in a stoichiometric mixture) comes from the autoassociation of OH groups. [Pg.65]

Autoassociative neural networks provide a special five-layer network structure (Figure 3.6) that can implement nonlinear PCA by reducing variable dimensionality and producing a feature space map that retains the maximum possible amount of information from the original data set [150]. Autoassociative neural networks use conventional feedforward connections and sigmoidal or linear nodal transfer functions. [Pg.63]

The network has three hidden layers, including a bottleneck layer which is of a smaller dimension than either the input layer or the output layer. The network is trained to perform an identity mapping by approximating the input information at the output layer. Since there are fewer nodes in the bottleneck layer than the input or output layers, the bottleneck nodes implement data compression and encode the essential information in the inputs for its reconstruction in subsequent layers. In the NLPCA framework and terminology, autoassociative neural networks seek to provide a mapping of the form... [Pg.63]
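In this framework the network composes a compression mapping G (input layer through bottleneck) with a reconstruction mapping H (bottleneck through output layer), trained so that H(G(x)) ≈ x; the bottleneck activations then serve as nonlinear principal component scores. Below is a minimal NumPy sketch of this five-layer identity-mapping training; the layer widths, learning rate, and toy data are illustrative assumptions, not values from the cited work [150].

```python
import numpy as np

# Minimal sketch (assumption-laden) of Kramer-style NLPCA: a five-layer
# autoassociative network trained to reproduce its own inputs through a
# low-dimensional bottleneck. Sizes, data, and learning rate are toy choices.

rng = np.random.default_rng(0)

m, h, f = 5, 8, 2                               # inputs, mapping width, factors
scores = rng.normal(size=(200, f))              # two hidden latent factors
X = np.tanh(scores @ rng.normal(size=(f, m)))   # nonlinear 5-D observations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weight matrices: input->mapping (sigmoidal), mapping->bottleneck (linear),
# bottleneck->demapping (sigmoidal), demapping->output (linear).
W1 = rng.normal(scale=0.2, size=(m, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.2, size=(h, f)); b2 = np.zeros(f)
W3 = rng.normal(scale=0.2, size=(f, h)); b3 = np.zeros(h)
W4 = rng.normal(scale=0.2, size=(h, m)); b4 = np.zeros(m)

lr = 0.1
for epoch in range(3000):
    a1 = sigmoid(X @ W1 + b1)     # mapping layer
    t = a1 @ W2 + b2              # bottleneck scores: the nonlinear factors
    a3 = sigmoid(t @ W3 + b3)     # demapping layer
    Xhat = a3 @ W4 + b4           # reconstruction of the inputs

    err = Xhat - X                # identity-mapping error
    d4 = err / len(X)             # mean-squared-error backpropagation
    d3 = (d4 @ W4.T) * a3 * (1 - a3)
    d2 = d3 @ W3.T
    d1 = (d2 @ W2.T) * a1 * (1 - a1)
    W4 -= lr * a3.T @ d4; b4 -= lr * d4.sum(0)
    W3 -= lr * t.T @ d3;  b3 -= lr * d3.sum(0)
    W2 -= lr * a1.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0)

print("reconstruction MSE:", np.mean(err ** 2))
```

Because the toy data are generated from two latent factors, the two bottleneck scores recovered here are meaningful; the sigmoidal mapping/demapping layers with a linear bottleneck mirror the transfer-function arrangement described above.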

Figure 3.6. Network architecture for determination of f nonlinear factors using an autoassociative neural network; σ indicates nodes with sigmoidal functions, σ/l indicates nodes with sigmoidal or linear functions [150].
We have made some assumptions about how our example network functions. Many types of ANN operate as we have assumed, but some do not, and we now indicate these differences. The ANNs just described are heteroassociative, because the desired outputs differ from the inputs. When the desired outputs are the same as the inputs for all the training vectors, the network is autoassociative. This naturally requires that the number of input PEs equal the number of output PEs. Some types of network (backpropagation, for example) may be configured as either hetero- or autoassociative, whereas other types must be heteroassociative, and still others must be autoassociative. [Pg.62]
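As a trivial illustration of that distinction, a hypothetical training-pair setup might look as follows (the array shapes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # toy inputs for a network with 8 input PEs
Y = rng.normal(size=(100, 3))   # separate toy targets (3 output PEs)

hetero_pairs = (X, Y)   # heteroassociative: desired outputs differ from inputs
auto_pairs = (X, X)     # autoassociative: targets are the inputs themselves,
                        # so input and output PE counts must be equal
```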

Single-layer, laterally connected ANNs are autoassociative. They can store many data vectors and are adept at outputting one of these vectors when presented with a noisy or incomplete version of it. Examples are the Hopfield, brain-state-in-a-box,3 and sparse distributed memory networks. [Pg.86]
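A minimal sketch of that noisy-recall behavior, using a Hopfield-style memory with Hebbian outer-product storage (the two stored patterns and the corruption are illustrative assumptions):

```python
import numpy as np

# Store bipolar patterns in a Hopfield memory, then recall one of them
# from a corrupted probe. The two orthogonal patterns are toy examples.
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1, -1, -1]])

# Hebbian storage: W is the sum of outer products, with no self-connections.
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(probe, steps=10):
    """Synchronous hard-threshold updates until the state stops changing."""
    state = probe.copy()
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

noisy = patterns[0].copy()
noisy[:2] *= -1                     # corrupt two of the eight elements
print(np.array_equal(recall(noisy), patterns[0]))   # True: pattern restored
```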

BAM networks are two-layer feedforward/feedback heteroassociative networks (they can also be autoassociative). An example network is shown in Figure 6. Standard BAMs take bipolar (±1) inputs and outputs. Adaptive BAMs (ABAMs) can take continuous inputs and outputs. In either case, input data should be mutually orthogonal (i.e., independent, nonredundant, and uncorrelated). BAMs were inspired by ART networks but are conceptually simpler... [Pg.93]

ANN to memorize that case. The test set should also contain a representative sampling of cases to realistically assess how the ANN responds to new situations. A word about autoassociation problems is in order here. If your goal is to use an ANN simply to store patterns or to compress data, you do not really need a test set, because all you care about are the cases with which you train the network. If you want to pass corrupt data through the ANN to see whether the network will output a clean version of the input, you may want to construct a test set to see how well the network can do this. Training-set construction is presumably trivial here: you know what data you want to store or compress, and those data are the training set. [Pg.108]
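For the corrupt-data case just described, one hypothetical test-set construction might look like this (the pattern sizes and 10% corruption rate are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.choice([-1, 1], size=(20, 16))   # the patterns you want stored

# Test set: corrupted copies of the training patterns, paired with the
# clean originals, to measure how well the network restores them.
flip = rng.random(train.shape) < 0.10        # flip ~10% of the elements
test_inputs = np.where(flip, -train, train)
test_targets = train.copy()
```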

Kramer, M. A. Autoassociative neural networks. Comput. Chem. Eng. 1992, 16(4), 313-328. [Pg.159]

Hopfield (1984) extended the concept of his network to autoassociative memories. In the same network structure as shown in Fig. 19.29, bipolar hard-threshold neurons were used, with outputs equal to -1 or +1.

Like the Hopfield network, the autoassociative memory has limited storage capacity, which is estimated to be about Mmax = 0.15N. When the number of stored patterns is large and close to the memory capacity, the network has a tendency to converge to spurious states, which were not stored. These spurious states are additional minima of the energy function. [Pg.2055]

The concept of the autoassociative memory was extended to bidirectional associative memories (BAMs) by Kosko (1987, 1988). This memory, shown in Fig. 19.30, is able to associate pairs of patterns a and b. It is a two-layer network with the output of the second layer connected directly to the input of the first layer. The weight matrix of the second layer is W^T, and that of the first layer is W. The rectangular weight matrix W is obtained as a sum of the cross-correlation matrices... [Pg.2055]
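The elided sum is presumably the standard outer-product construction, W = Σ_m a_m b_m^T over the stored pairs. A minimal NumPy sketch under that assumption, with two hypothetical orthogonal bipolar pairs:

```python
import numpy as np

# BAM sketch: store two (a, b) pattern pairs in W as a sum of
# cross-correlation (outer-product) matrices, then recall by letting
# the signal circulate between the two layers. Patterns are toy choices.
A = np.array([[ 1, -1,  1, -1,  1, -1],
              [ 1,  1, -1, -1,  1,  1]])    # patterns a_m, length N = 6
B = np.array([[ 1,  1, -1, -1],
              [-1,  1, -1,  1]])            # associated patterns b_m, length P = 4

W = sum(np.outer(a, b) for a, b in zip(A, B))   # W = sum_m a_m b_m^T, N x P

def recall(a_probe, iters=5):
    """Circulate: the forward path uses W, the feedback path uses W^T."""
    a = a_probe.copy()
    for _ in range(iters):
        b = np.sign(a @ W)     # forward pass to the second layer
        a = np.sign(b @ W.T)   # fed back to the first layer
    return a, b

noisy = A[0].copy(); noisy[0] *= -1             # flip one element of a_1
a_rec, b_rec = recall(noisy)
print(np.array_equal(a_rec, A[0]), np.array_equal(b_rec, B[0]))   # True True
```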

FIGURE 19.30 An example of the bidirectional autoassociative memory: (a) drawn as a two-layer network with circulating signals; (b) drawn as a two-layer network with bidirectional signal flow. [Pg.2055]


See other pages where Autoassociative networks is mentioned: [Pg.99]    [Pg.110]    [Pg.287]    [Pg.63]    [Pg.64]    [Pg.72]    [Pg.73]    [Pg.86]    [Pg.97]    [Pg.100]    [Pg.215]    [Pg.2054]    [Pg.82]    [Pg.1041]    [Pg.1042]