Big Chemical Encyclopedia


Neural architecture

So far, we have seen that given a neural architecture, the neural network may be trained to arrive at colors that are approximately constant. Evolution is also able to find a solution to the problem of color constancy. One of the remaining questions is how a computational color constancy algorithm actually maps onto the human visual system. Several researchers have looked at this problem and have proposed possible solutions. [Pg.204]

Neural Architecture based on Double Opponent Cells... [Pg.205]

Usui and Nakauchi (1997) proposed the neural architecture shown in Figure 8.14. They assume that reflectances and illuminants can be described by a finite set of basis functions; in their work, three basis functions are taken to be sufficient for both. The neural architecture tries to estimate the coefficients of the reflectance basis functions. [Pg.209]
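The finite-dimensional model behind this architecture can be sketched numerically. In the sketch below the three basis functions are simple polynomials in wavelength, chosen purely for illustration (they are not the measured basis of Usui and Nakauchi); recovering the three coefficients from a sampled reflectance curve is then a small least-squares problem, which is the quantity the network is trained to estimate.

```python
import numpy as np

# Wavelengths sampled across the visible range (nm).
wavelengths = np.linspace(400, 700, 31)

# Three hypothetical basis functions (placeholders, not the actual
# basis of Usui and Nakauchi): constant, linear, quadratic.
x = (wavelengths - 550.0) / 150.0
basis = np.stack([np.ones_like(x), x, x**2])  # shape (3, 31)

# A reflectance is fully described by just three coefficients.
coeffs = np.array([0.5, 0.2, -0.1])
reflectance = coeffs @ basis  # shape (31,)

# Recovering the coefficients from the sampled curve is a
# least-squares fit -- this is what the network estimates.
recovered, *_ = np.linalg.lstsq(basis.T, reflectance, rcond=None)
print(np.round(recovered, 3))  # -> [ 0.5  0.2 -0.1]
```

Because the reflectance was generated exactly from the basis, the fit recovers the coefficients to machine precision; real reflectances would only be approximated.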

Figure 8.14 Neural architecture of Usui and Nakauchi (1997). The architecture consists of four layers denoted by A, B, C, and D. The input image is fed into the architecture at the input layer. Layer D is the output layer. (Redrawn from Figure 8.4.1 (page 477) of Usui S and Nakauchi S 1997 A neurocomputational model for colour constancy. In Dickinson C, Murray I and Carden D (eds.), John Dalton's Colour Vision Legacy: Selected Proceedings of the International Conference, Taylor & Francis, London, pp. 475-482, by permission from Taylor & Francis Books, UK.)
It seems very likely that many other aspects of human reasoning, besides transitive inference, also depend on the integration of semantic knowledge and working-memory operations with representations derived from those that support visuospatial perception. We hope the model we have described here may provide an example of how the connections between perception and thought may be given explicit realization in a neural architecture. [Pg.304]

Curkovic P, Jerbic B, Stipancic T (2008) Hybridization of adaptive genetic algorithm and ART neural architecture for efficient path planning of a mobile robot. Trans FAMENA 32(2):11-21 [Pg.120]

M. A. Cohen and S. Grossberg, Appl. Opt., 26, 1866 (1987). Masking Fields: A Massively Parallel Neural Architecture for Learning, Recognizing, and Predicting Multiple Groupings of Patterned Data. [Pg.131]

Figure 9-16. Artificial neural network architecture with a two-layer design, comprising input units, a so-called hidden layer, and an output layer. The squares enclosing the ones depict the bias, which is an extra weight (see Ref. [10] for further details).
Breindl et al. published a model based on semi-empirical quantum mechanical descriptors and back-propagation neural networks [14]. The training data set consisted of 1085 compounds, and 36 descriptors were derived from AM1 and PM3 calculations describing electronic and spatial effects. The best results, with a standard deviation of 0.41, were obtained with the AM1-based descriptors and a 16-25-1 net architecture, corresponding to 451 adjustable parameters and a ratio of 2.17 relative to the number of input data. For a test data set a standard deviation of 0.53 was reported, which is quite close to the training model. [Pg.494]
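The figure of 451 adjustable parameters follows directly from the 16-25-1 topology of a fully connected network: one weight per connection plus one bias per hidden and output neuron. A quick check:

```python
# Parameter count of a fully connected 16-25-1 network:
# weights for each connection plus one bias per non-input neuron.
n_in, n_hidden, n_out = 16, 25, 1

params = (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)
print(params)  # -> 451
```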

The objective of this study is to show how data sets of compounds for which different biological activities have been determined can be studied. It will be shown how the use of a counter-propagation neural network can lead to new insights [46]. The emphasis in this example is placed on the comparison of different network architectures and not on quantitative results. [Pg.508]

Since biological systems can reasonably cope with some of these problems, the intuition behind neural nets is that computing systems based on the architecture of the brain can better emulate human cognitive behavior than systems based on symbol manipulation. Unfortunately, the processing characteristics of the brain are as yet incompletely understood. Consequently, computational systems based on brain architecture are highly simplified models of their biological analogues. To make this distinction clear, neural nets are often referred to as artificial neural networks. [Pg.539]

Neural net architectures come in many flavors, differing in the functions used in the nodes, the number of nodes and layers, their connectivity, and... [Pg.539]

Neural networks can be broadly classified by their network architecture as feed-forward and feedback networks, as shown in Fig. 3. In brief, if a neuron's output never depends on the outputs of subsequent neurons, the network is said to be feed-forward. Input signals travel only one way, and the outputs depend only on the signals coming in from other neurons; thus, there are no loops in the system. When dealing with the various types of ANNs, two primary aspects, namely the architecture and the types of computations to be per-... [Pg.4]
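The defining property of a feed-forward network, that the signal passes through each layer exactly once with no loops, means the whole computation is just a chain of matrix products and activations. A minimal sketch, with weights chosen randomly for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, weights, biases):
    """One pass through the network. No neuron's output ever feeds
    back to an earlier layer, so there are no loops in the system."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)  # each layer sees only the previous one
    return a

rng = np.random.default_rng(0)
# An arbitrary 3-4-1 topology: 3 inputs, 4 hidden neurons, 1 output.
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [rng.normal(size=4), rng.normal(size=1)]

y = feed_forward(np.array([0.2, 0.5, 0.8]), weights, biases)
print(y.shape)  # -> (1,)
```

A feedback (recurrent) network would instead route some outputs back into earlier neurons, so evaluation would require iterating the loop until the activities settle.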

The specific volumes of all the nine siloxanes were predicted as a function of temperature and the number of monofunctional units, M, and difunctional units, D. A simple 3-4-1 neural network architecture with just one hidden layer was used. The three input nodes were for the number of M groups, the number of D groups, and the temperature. The hidden layer had four neurons. The predicted variable was the specific volumes of the silox-... [Pg.11]

Viscosities of the siloxanes were predicted over a temperature range of 298-348 K. The semi-log plot of viscosity as a function of temperature was linear for the ring compounds. However, for the chain compounds, the viscosity increased rapidly with an increase in the chain length of the molecule. A simple 2-4-1 neural network architecture was used for the viscosity predictions. The molecular configuration was not considered here because of the direct positive effect of addition of both M and D groups on viscosity. The two input variables, therefore, were the siloxane type and the temperature level. Only one hidden layer with four nodes was used. The predicted variable was the viscosity of the siloxane. [Pg.12]

A very simple 2-4-1 neural network architecture with two input nodes, one hidden layer with four nodes, and one output node was used in each case. The two input variables were the number of methylene groups and the temperature. Although neural networks have the ability to learn all the differences, differentials, and other calculated inputs directly from the raw data, the training time for the network can be reduced considerably if these values are provided as inputs. The predicted variable was the density of the ester. The neural network model was trained for discrete numbers of methylene groups over the entire temperature range of 300-500 K. The... [Pg.15]
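A 2-4-1 network like those described above maps its two inputs (here, number of methylene groups and temperature) through four hidden neurons to a single predicted property. The sketch below uses arbitrary, untrained weights and a hypothetical input scaling; the published models were fitted to measured data, which is not reproduced here, so only the information flow is meaningful.

```python
import numpy as np

def predict_2_4_1(n_ch2, temperature_K, W1, b1, W2, b2):
    """Forward pass of a 2-4-1 network: two inputs, four hidden
    tanh neurons in one hidden layer, one linear output neuron."""
    x = np.array([n_ch2, temperature_K / 500.0])  # crude input scaling
    hidden = np.tanh(W1 @ x + b1)                 # shape (4,)
    return float(W2 @ hidden + b2)                # scalar property

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

# Untrained weights, so the value itself is meaningless -- the point
# is the 2-4-1 information flow, not the number.
prediction = predict_2_4_1(8, 400.0, W1, b1, W2, b2)
print(prediction)
```

Training would adjust W1, b1, W2, b2 (17 parameters in total) to minimize the error against the measured densities over the 300-500 K range.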

Hypercubes and other new computer architectures (e.g., systems based on simulations of neural networks) represent exciting new tools for chemical engineers. A wide variety of applications central to the concerns of chemical engineers (e.g., fluid dynamics and heat flow) have already been converted to run on these architectures. The new computer designs promise to move the field of chemical engineering substantially away from its dependence on simplified models toward computer simulations and calculations that more closely represent the incredible complexity of the real world. [Pg.154]

A neural network consists of many neurons organized into a structure called the network architecture. Although there are many possible network architectures, one of the most popular and successful is the multilayer perceptron (MLP) network. This consists of identical neurons all interconnected and organized in layers, with those in one layer connected to those in the next layer so that the outputs in one layer become the inputs in the subsequent... [Pg.688]
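The layer-to-layer wiring of an MLP can be written compactly: each layer's output vector becomes the next layer's input vector. A minimal sketch for arbitrary layer sizes, with randomly initialized (untrained) weights:

```python
import numpy as np

class MLP:
    """Multilayer perceptron: identical neurons organized in layers,
    with the outputs of one layer becoming the inputs of the next."""

    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix and bias vector per pair of adjacent layers.
        self.weights = [rng.normal(size=(n_out, n_in))
                        for n_in, n_out in zip(sizes[:-1], sizes[1:])]
        self.biases = [rng.normal(size=n) for n in sizes[1:]]

    def forward(self, x):
        for W, b in zip(self.weights, self.biases):
            x = np.tanh(W @ x + b)  # this output feeds the next layer
        return x

net = MLP([3, 5, 5, 1])  # two hidden layers of five neurons each
out = net.forward(np.array([0.1, 0.2, 0.3]))
print(out.shape)  # -> (1,)
```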


Artificial neural networks architecture

Neural Architecture Using Energy Minimization

Neural Architecture based on Double Opponent Cells

Neural network architecture
