Big Chemical Encyclopedia


Network architecture

A further approach to designing the network topology is to construct a complex network from several single network modules. In a network committee, several networks are trained together to form a committee (Perrone, 1994). A committee or jury decision, given by a weighted combination of the predictions of the members, can yield better predictions than any single member network. [Pg.90]
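The committee decision described above can be sketched as a normalized weighted average of member predictions. This is a minimal illustration, not the cited work's implementation; the member outputs and weights below are made-up values.

```python
import numpy as np

def committee_predict(member_outputs, weights):
    """Weighted combination of member network predictions (committee/jury decision)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize so the weights sum to 1
    return np.dot(weights, np.asarray(member_outputs, dtype=float))

# Three member networks predicting the same scalar target, with the third
# member trusted twice as much as the others:
print(committee_predict([1.0, 2.0, 3.0], [1.0, 1.0, 2.0]))  # → 2.25
```

In practice the weights are often chosen from each member's validation error, so that better members contribute more to the jury decision.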

In a mixture-of-experts model (Jacobs et al., 1991), different expert networks were assigned to tackle sub-tasks of training cases, and an extra gating network was used to decide which of the experts should determine the output. The model discovered a suitable decomposition of the input space as part of the learning process. Later the model was extended (Jordan and Jacobs, 1994) into a hierarchical system with a tree structure. In molecular applications, cascaded networks, where the outputs of some networks become the inputs of others, were used to improve performance (Rost and Sander, 1994). Multiple neural network modules may run in parallel in order to scale up the system (Wu et al., 1995). More than one network can also be used to extract different (e.g., local vs. global) features (Mahadevan and Ghosh, 1994). [Pg.91]
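A minimal sketch of the mixture-of-experts forward pass: a gating network produces softmax mixing coefficients over the experts, and the model output is the gate-weighted sum of the expert outputs. The linear experts and gate parameters here are illustrative, not taken from Jacobs et al. (1991).

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x, expert_weights, gate_weights):
    """One forward pass of a mixture of linear experts with a linear gate."""
    expert_outputs = np.array([w @ x for w in expert_weights])  # one scalar per expert
    gate = softmax(gate_weights @ x)                            # mixing coefficients
    return gate @ expert_outputs, gate

x = np.array([1.0, 2.0])
experts = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # two toy linear experts
gates = np.array([[5.0, 0.0], [0.0, 5.0]])              # gate favors expert 2 for this x
y, g = moe_forward(x, experts, gates)
```

Because the gate is a softmax, each input region is handled mainly by one expert, which is how the model discovers a decomposition of the input space during training.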

Although the SOM is a type of neural network, its structure is very different from that of the feedforward artificial neural network discussed in Chapter 2. While in a feedforward network nodes are arranged in distinct layers, a SOM is more democratic—every node occupies a site of equal importance in a regular lattice. [Pg.57]

The nodes in a SOM are drawn with connections between them, but these connections serve no real purpose in the operation of the algorithm. In contrast to those in the ANN, no messages pass along these links; they are drawn only to make clear the geometry of the network. [Pg.57]

The nodes in a SOM are sometimes referred to as neurons, to emphasize that the SOM is a type of neural network. [Pg.57]

Not only are the lengths of the pattern and weights vectors identical, the individual entries in them share the same interpretation. [Pg.58]

Node weights in a two-dimensional SOM. Each node has its own independent set of weights. [Pg.58]
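As a rough sketch of how such node weights are used: the two core SOM operations are finding the best-matching unit (BMU) for an input pattern and nudging that node's weight vector toward the pattern. A full SOM also updates the lattice neighbours of the BMU; the lattice size, random seed, and learning rate below are arbitrary toy values.

```python
import numpy as np

def best_matching_unit(weights, pattern):
    """weights: (rows, cols, dim) lattice; returns the (row, col) of the nearest node."""
    dists = np.linalg.norm(weights - pattern, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

def update_bmu(weights, pattern, bmu, lr=0.5):
    """Move the BMU's weight vector a fraction lr of the way toward the pattern."""
    r, c = bmu
    weights[r, c] += lr * (pattern - weights[r, c])
    return weights

rng = np.random.default_rng(1)
som = rng.random((3, 3, 2))              # 3x3 lattice of independent 2-dim weight vectors
x = np.array([0.2, 0.8])                 # input pattern, same length as each weight vector

bmu = best_matching_unit(som, x)
d_before = np.linalg.norm(som[bmu] - x)
som = update_bmu(som, x, bmu)
d_after = np.linalg.norm(som[bmu] - x)   # BMU is now closer to the pattern
```

Note that, as the text says, the lattice links themselves carry no signals; the lattice geometry only matters when deciding which neighbours of the BMU to update.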

The fuzzy logic inference system for identification of the Sugeno-type model can be implemented as a five-layer network (Ojala, 1994; Sfetsos, 2000) and is shown in Fig. 29.1. [Pg.399]

Consider a system with two inputs x1 and x2 and one output. Further assume that the rule base contains two rules, which are  [Pg.399]

Process Dynamics and Control: Modeling for Control and Prediction. Brian Roffel and Ben Betlem. © 2006 John Wiley & Sons Ltd. [Pg.399]

The operation of the neuro-fuzzy approach can be described by the following steps  [Pg.400]

Each neuron in this layer corresponds to a linguistic label. The crisp inputs x1 and x2 are fuzzified by using the membership functions of the linguistic variables. Usually, triangular, trapezoidal, or Gaussian membership curves are used. For example, the Gaussian membership function is defined by  [Pg.400]
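The excerpt cuts off before the equation; the standard textbook form of the Gaussian membership function is μ(x) = exp(−(x − c)² / (2σ²)), where c is the centre and σ the width of the fuzzy set. The symbols c and σ are this common parameterization, not necessarily the excerpt's own notation:

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership function: degree of membership of x in a fuzzy set
    centred at c with width sigma (membership is 1 at the centre)."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

print(gaussian_mf(0.0, c=0.0, sigma=1.0))  # → 1.0 (full membership at the centre)
```

Fuzzification of a crisp input then just means evaluating each linguistic label's membership function at that input value.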

Equation (10.57) is in the same form as the discrete-time solution of the state equation (8.76). [Pg.350]


Figure 9-16. Artificial neural network architecture with a two-layer design, comprising input units, a so-called hidden layer, and an output layer. The squares enclosing the ones depict the bias, which is an extra weight (see Ref. [10] for further details).
The objective of this study is to show how data sets of compounds for which different biological activities have been determined can be studied. It will be shown how the use of a counter-propagation neural network can lead to new insights [46]. The emphasis in this example is placed on the comparison of different network architectures and not on quantitative results. [Pg.508]

Neural networks can be broadly classified based on their network architecture as feed-forward and feed-back networks, as shown in Fig. 3. In brief, if a neuron's output is never dependent on the output of the subsequent neurons, the network is said to be feed-forward. Input signals go only one way, and the outputs are dependent only on the signals coming in from other neurons. Thus, there are no loops in the system. When dealing with the various types of ANNs, two primary aspects, namely the architecture and the types of computations to be performed... [Pg.4]
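The feed-forward property described above can be seen in code: because signals only go one way, a single sweep of matrix multiplications through the layers produces the output, with no loops. The weights and layer sizes below are arbitrary illustrative values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """One feed-forward pass: each layer's output feeds only the next layer."""
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

x = np.array([0.5, -0.5])
layers = [
    (np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.1])),  # hidden layer (2 -> 2)
    (np.array([[1.0, 1.0]]), np.array([-0.5])),                   # output layer (2 -> 1)
]
y = forward(x, layers)
```

A feed-back (recurrent) network, by contrast, would have to iterate this update repeatedly because some outputs are routed back in as inputs.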

Current feed-forward network architectures work better than current feed-back architectures for a number of reasons. First, the capacity of feed-back networks is unimpressive. Second, in the running mode, feed-forward models are faster, since they need only one pass through the system to find a solution. In contrast, feed-back networks must cycle repetitively until... [Pg.4]

The specific volumes of all nine siloxanes were predicted as a function of temperature and the number of monofunctional units, M, and difunctional units, D. A simple 3-4-1 neural network architecture with just one hidden layer was used. The three input nodes were for the number of M groups, the number of D groups, and the temperature. The hidden layer had four neurons. The predicted variable was the specific volume of the siloxanes. [Pg.11]
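The 3-4-1 architecture described above can be sketched shape-for-shape: three inputs (number of M groups, number of D groups, temperature), one hidden layer of four neurons, and one output for the specific volume. The weights below are random, i.e. an untrained stand-in; the study's trained weights and activation functions are not given in the excerpt.

```python
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden (3 -> 4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden -> output (4 -> 1)

def predict_specific_volume(m_groups, d_groups, temperature_K):
    """Forward pass through the 3-4-1 network (tanh hidden layer assumed)."""
    x = np.array([m_groups, d_groups, temperature_K], dtype=float)
    h = np.tanh(W1 @ x + b1)     # four hidden neurons
    return (W2 @ h + b2)[0]      # single output: specific volume

v = predict_specific_volume(2, 5, 323.0)
```

With trained weights, this small network is enough to capture the smooth dependence of specific volume on composition and temperature.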

Viscosities of the siloxanes were predicted over a temperature range of 298-348 K. The semi-log plot of viscosity as a function of temperature was linear for the ring compounds. However, for the chain compounds, the viscosity increased rapidly with an increase in the chain length of the molecule. A simple 2-4-1 neural network architecture was used for the viscosity predictions. The molecular configuration was not considered here because of the direct positive effect of addition of both M and D groups on viscosity. The two input variables, therefore, were the siloxane type and the temperature level. Only one hidden layer with four nodes was used. The predicted variable was the viscosity of the siloxane. [Pg.12]

A very simple 2-4-1 neural network architecture with two input nodes, one hidden layer with four nodes, and one output node was used in each case. The two input variables were the number of methylene groups and the temperature. Although neural networks have the ability to learn all the differences, differentials, and other calculated inputs directly from the raw data, the training time for the network can be reduced considerably if these values are provided as inputs. The predicted variable was the density of the ester. The neural network model was trained for discrete numbers of methylene groups over the entire temperature range of 300-500 K. The... [Pg.15]

A neural network consists of many neurons organized into a structure called the network architecture. Although there are many possible network architectures, one of the most popular and successful is the multilayer perceptron (MLP) network. This consists of identical neurons all interconnected and organized in layers, with those in one layer connected to those in the next layer so that the outputs in one layer become the inputs in the subsequent layer. [Pg.688]
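Because every neuron in one MLP layer feeds every neuron in the next, the number of connections between consecutive layers is simply the product of their sizes, plus one bias per receiving neuron. A small helper makes this concrete (layer sizes are illustrative):

```python
def mlp_parameter_count(layer_sizes):
    """Total weights and biases in a fully connected MLP with the given layer sizes."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out   # n_in*n_out weights + n_out biases
    return total

print(mlp_parameter_count([3, 4, 1]))  # → 21 parameters for a 3-4-1 network
```

This count grows quickly with layer width, which is one reason the small architectures in the excerpts above (3-4-1, 2-4-1) can be trained on modest data sets.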

Step 8. Spectra are classified using an artificial neural network pattern recognition program. (This program runs on a parallel-distributed network of several personal computers [PCs], which facilitates optimization of the neural network architecture.) [Pg.94]

Overall, the alkali metal alkoxide and aryloxide systems are excellent examples in demonstrating the effects of steric influences on both molecular aggregation and also on the nature of any extended network architecture adopted. The large database of O-M complexes that have now been identified has led to a good deal of predictability regarding the coordination chemistry of these species. [Pg.44]

The proposed neurogenesis-memory clearance hypothesis is attractive because the addition and removal of adult-born neurons in the local network architecture could gradually destabilize the stored memory traces. Also, adult-generated neurons within the dentate gyrus, the upstream location in the hippocampus, can potentially amplify the destabilization effects. Coincidentally, these newborn neurons are short-lived, typically with a life-span of three weeks in rodents [40], which seems to correlate well with the duration of hippocampal dependence of declarative memories. [Pg.872]

Document network architecture and identify systems that serve critical functions or contain sensitive information that requires additional levels of protection. [Pg.132]

The number of neurons in the hidden layer was therefore increased systematically. It was found that a network of one hidden layer consisting of twenty neurons, as shown in Figure 2.6, performed well for both the training and testing data set. More details about the performance of this network will be given later. The network architecture depicted in Figure 2.6 consists of an input layer, a hidden layer, and an output layer. Each neuron in the input layer corresponds to a particular feed property. The neurons... [Pg.37]









Artificial neural networks architecture

Coordination networks porous architecture

Feed-forward network architectures

Instrument network architecture

Neural network architecture

Systems Network Architecture

© 2024 chempedia.info