Neural network perception

The Forward Dynamic Neural MIMO NARX model used in this paper is a combination of the Multi-Layer Perceptron Neural Network (MLPNN) structure and the Auto-Regressive with eXogenous input (ARX) model. Owing to this combination, the Forward MIMO NARX model possesses both the powerful universal-approximation capability of the MLPNN structure and the strong predictive capability of the nonlinear ARX model. [Pg.39]
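
A minimal sketch of the idea in NumPy (the layer sizes, lag orders, and random weights below are hypothetical placeholders, not the paper's actual model): the ARX-style regressor of lagged outputs and lagged exogenous inputs is passed through a one-hidden-layer MLP, which plays the role of the nonlinear map.

```python
import numpy as np

def narx_predict(y_hist, u_hist, W1, b1, W2, b2):
    """One-step NARX prediction: y(t) = f(y(t-1..t-ny), u(t-1..t-nu)).

    The regressor vector of past outputs and inputs is passed through a
    one-hidden-layer MLP (tanh hidden units, linear output layer).
    """
    x = np.concatenate([y_hist, u_hist])   # ARX-style regressor
    h = np.tanh(W1 @ x + b1)               # MLPNN hidden layer
    return W2 @ h + b2                     # predicted output vector

# Hypothetical MIMO case: 2 outputs and 1 input, 2 lags each, 8 hidden units
ny, nu, n_out, n_hid = 2 * 2, 1 * 2, 2, 8
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(n_hid, ny + nu)), np.zeros(n_hid)
W2, b2 = rng.normal(size=(n_out, n_hid)), np.zeros(n_out)
y_next = narx_predict(rng.normal(size=ny), rng.normal(size=nu), W1, b1, W2, b2)
```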

The local dynamics of the systems considered thus far has been either steady or oscillatory. However, we may consider reaction-diffusion media where the local reaction rates give rise to chaotic temporal behaviour of the sort discussed earlier. Diffusional coupling of such local chaotic elements can lead to new types of spatio-temporal periodic and chaotic states. It is possible to find phase-synchronized states in such systems, where the amplitude varies chaotically from site to site in the medium whilst a suitably defined phase is synchronized throughout the medium [51]. Such phase synchronization may play a role in layered neural networks and perceptive processes in mammals. Somewhat surprisingly, even when the local dynamics is chaotic, the system may support spiral waves... [Pg.3067]
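
As an illustration only (not from the source), diffusive coupling of chaotic local elements can be mimicked with a ring of logistic maps; sufficiently strong coupling pulls the chaotically varying sites toward coherence:

```python
import numpy as np

# Ring of N logistic maps in the chaotic regime (r = 4.0), with diffusive
# coupling of strength eps to the two nearest neighbours on the ring.
N, r, eps, steps = 64, 4.0, 0.3, 2000
x = np.random.default_rng(1).random(N)
for _ in range(steps):
    fx = r * x * (1.0 - x)                                    # local chaos
    x = (1 - eps) * fx + (eps / 2) * (np.roll(fx, 1) + np.roll(fx, -1))
print("site-to-site spread after coupling:", x.std())
```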

A Kohonen network is a neural network that uses an unsupervised learning strategy. It can be used for, e.g., similarity perception, clustering, or classification tasks. [Pg.481]
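
A minimal Kohonen (self-organizing map) sketch, with hypothetical grid size and learning schedule: each input pulls its best-matching unit, and that unit's grid neighbours, toward itself, so similar inputs end up mapped to nearby units.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0):
    """Train a 2-D Kohonen map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(0)
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    coords = np.stack([gy.ravel(), gx.ravel()], axis=1)    # unit positions
    W = rng.random((grid[0] * grid[1], data.shape[1]))     # codebook vectors
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                        # decaying rate
        sigma = sigma0 * (1 - t / epochs) + 0.5            # shrinking radius
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1) # grid distances
            h = np.exp(-d2 / (2 * sigma ** 2))             # neighbourhood
            W += lr * h[:, None] * (x - W)                 # pull toward input
    return W, coords

W, coords = train_som(np.random.default_rng(1).random((200, 3)))
```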

The main characteristics of the method developed in our group for reaction classification are: (1) the representation of a reaction by physicochemical values calculated for the bonds being broken and made during the reaction, and (2) the use of the unsupervised learning method of a self-organizing neural network for the perception of similarity of chemical reactions [3, 4]. [Pg.545]

Neural networks are essentially non-linear regression models based on a binary threshold unit (McCulloch and Pitts, 1943). The structure of neural networks, called a perceptron, consists of a set of nodes at different layers, where each node of a layer is linked with all the nodes of the next layer (Rosenblatt, 1962). The role of the input layer is to feed input patterns to intermediate layers (also called hidden layers) of units, which are followed by an output layer where the result of the computation is read off. Each of these units is a neuron that computes a weighted sum of its inputs from other neurons at the previous layer, and outputs a one or a zero according to whether the sum is above or below a threshold. [Pg.175]
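
A minimal sketch of such a threshold unit and a layered forward pass (sizes and random weights are illustrative placeholders):

```python
import numpy as np

def threshold_layer(x, W, b):
    """Each unit outputs 1 if its weighted input sum exceeds 0, else 0."""
    return (W @ x + b > 0).astype(float)

# Hypothetical sizes: 3 inputs -> 4 hidden units -> 1 output unit
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
x = np.array([1.0, 0.0, 1.0])                 # one input pattern
out = threshold_layer(threshold_layer(x, W1, b1), W2, b2)
```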

Another division of neural networks corresponds to the number of layers: a simple perceptron has only one layer (Minsky and Papert, 1969), whereas a multilayer perceptron has more than one layer (Hertz et al., 1991). This simple distinction means that network architecture is very important and each application requires its own design. To get good results one should store in the network as much knowledge as possible and use criteria for optimal network architecture such as the number of units, the number of connections, the learning time, cost, and so on. A genetic algorithm can be used to search the possible architectures (Whitley and Hanson, 1989). [Pg.176]
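
A toy sketch of such an architecture search (not the cited authors' method): each individual is a list of hidden-layer widths, and the fitness function here is a stand-in; in a real search one would train a network with those widths and score it on validation data.

```python
import random

def fitness(hidden_sizes):
    """Stand-in score: a real version would train the network and return
    validation accuracy minus a size penalty; this toy variant merely
    prefers moderate architectures so the search has something to find."""
    return -sum((h - 8) ** 2 for h in hidden_sizes) - 5 * len(hidden_sizes)

def mutate(arch):
    """Perturb one layer width; occasionally add or drop a layer."""
    arch = list(arch)
    i = random.randrange(len(arch))
    arch[i] = max(1, arch[i] + random.choice([-2, -1, 1, 2]))
    if random.random() < 0.2:
        if random.random() < 0.5 or len(arch) == 1:
            arch.append(random.randint(1, 16))
        else:
            arch.pop()
    return arch

random.seed(0)
population = [[random.randint(1, 16)] for _ in range(10)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]
print("best architecture:", max(population, key=fitness))
```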

Not all neural networks are the same: their connections, elemental functions, training methods and applications may differ in significant ways. The types of elements in a network and the connections between them are referred to as the network architecture. Commonly used elements in artificial neural networks will be presented in Chapter 2. The multilayer perceptron, one of the most commonly used architectures, is described in Chapter 3. Other architectures, such as radial basis function networks and self-organizing maps (SOM), or Kohonen architectures, will be described in Chapter 4. [Pg.17]

From the figure it is easy to see that an infinite number of lines can be drawn that separate the depressed points from the not-depressed points in the plane. This is a characteristic of regression and neural network applications: rarely is there one solution, but rather a whole family of solutions for a given problem. Nevertheless, a perceptron can easily be trained to classify patients based on the two hypothetical measures posed. One perceptron with its trained weights for this particular set of data is shown in Figure 3.2. [Pg.31]
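
The figure's actual data is not reproduced here, so in the sketch below two synthetic, linearly separable clusters stand in for the two hypothetical measures; the perceptron learning rule nudges the weights only when a point is misclassified.

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Perceptron learning rule: update weights only on misclassified points."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi        # zero update when pred == yi
            b += lr * (yi - pred)
    return w, b

# Synthetic stand-in data: two measures per patient, two separable clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1, 1], 0.3, (20, 2)),   # "depressed" cluster
               rng.normal([3, 3], 0.3, (20, 2))])  # "not depressed" cluster
y = np.array([1] * 20 + [0] * 20)
w, b = train_perceptron(X, y)
```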

One of the early problems with multilayer perceptrons was that it was not clear how to train them. The perceptron training rule doesn't apply directly to networks with hidden layers. Fortunately, Rumelhart and others (Rumelhart et al., 1986) devised an intuitive method that was quickly adopted and revolutionized the field of artificial neural networks. The method is called back-propagation because it computes the error term as described above and propagates the error backward through the network, so that weights to and from hidden units can be modified in a fashion similar to the delta rule for perceptrons. [Pg.55]
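
A minimal sketch of the back-propagation updates for a one-hidden-layer sigmoid network (network sizes and the XOR usage example are illustrative choices, not taken from the source):

```python
import numpy as np

def backprop_step(x, target, W1, b1, W2, b2, lr=0.5):
    """One back-propagation update for a 1-hidden-layer sigmoid network."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sig(W1 @ x + b1)                           # forward pass
    y = sig(W2 @ h + b2)
    delta_out = (y - target) * y * (1 - y)         # output-layer error term
    delta_hid = (W2.T @ delta_out) * h * (1 - h)   # error propagated backward
    W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
    return y

# Usage sketch: learn XOR, which a single-layer perceptron cannot represent
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
for _ in range(5000):
    for x, t in data:
        backprop_step(np.array(x, float), np.array([t], float), W1, b1, W2, b2)
```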

Protein secondary structure prediction is one of the earliest neural network applications in molecular biology, and has been extensively reviewed. Typified by the work of Qian and Sejnowski (1988) (Figure 10.1), early studies involved the use of perceptron or three-... [Pg.116]
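
In the spirit of that work, the input to such a network is a sliding window of residues around the position being predicted, one-hot encoded; the 13-residue window and 21-symbol alphabet (20 amino acids plus an off-chain spacer) in this sketch follow common descriptions of that setup and should be treated as illustrative.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"                  # 20 standard residues
IDX = {a: i for i, a in enumerate(AMINO)}

def window_encode(seq, center, half=6):
    """One-hot encode a (2*half+1)-residue window around `center`;
    positions falling off either chain end get the 21st 'spacer' bit."""
    vec = np.zeros((2 * half + 1, len(AMINO) + 1))
    for j, pos in enumerate(range(center - half, center + half + 1)):
        if 0 <= pos < len(seq) and seq[pos] in IDX:
            vec[j, IDX[seq[pos]]] = 1.0
        else:
            vec[j, -1] = 1.0                    # off-end spacer symbol
    return vec.ravel()                          # flat network input vector

x = window_encode("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", center=10)
```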

Bayesian techniques A method of training and evaluating neural networks that is based on a stochastic (probabilistic) approach. The basic idea is that weights have a distribution before training (a prior distribution) and another (posterior) distribution after training. Bayesian techniques have been applied successfully to multilayer perceptron networks. [Pg.163]
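
A toy illustration of the prior-to-posterior idea only, using a single weight of a one-parameter linear model evaluated on a grid; real Bayesian training of multilayer networks relies on approximations (e.g. Laplace, variational, or Monte Carlo methods) rather than grid enumeration.

```python
import numpy as np

# Toy model y = w * x + noise, with one weight w. Grid Bayes: prior -> posterior.
rng = np.random.default_rng(0)
w_true, sigma = 1.7, 0.5
x = rng.uniform(-2, 2, 30)
y = w_true * x + rng.normal(0, sigma, 30)

w_grid = np.linspace(-5, 5, 1001)
prior = np.exp(-0.5 * (w_grid / 2.0) ** 2)            # Gaussian prior, sd = 2
resid = y[None, :] - w_grid[:, None] * x[None, :]
log_lik = -0.5 * (resid ** 2).sum(axis=1) / sigma**2  # Gaussian likelihood
posterior = prior * np.exp(log_lik - log_lik.max())   # unnormalized posterior
posterior /= posterior.sum() * (w_grid[1] - w_grid[0])
print("posterior mean:", (w_grid * posterior).sum() * (w_grid[1] - w_grid[0]))
```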


