
Generalized perceptron

To make the system more powerful and extend its use to more complex learning applications, we need to make it more complex. For example, the perceptron scheme defined above may be used for classifying n linearly separable classes of vectors, with n > 2, or for classifying two linearly nonseparable classes. The structure obtained is a neural network with a layer of input nodes and a layer of output nodes. It is called the generalized perceptron of Rumelhart, and is represented in Figure... [Pg.256]
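
As a rough sketch of the structure just described (a layer of input nodes fully connected to a layer of output nodes, one per class), the following minimal single-layer classifier may help; the class name, the random initialisation and the argmax decision are illustrative assumptions, not details from the cited text.

import numpy as np

class SingleLayerPerceptron:
    # One layer of input nodes fully connected to one layer of output nodes.
    def __init__(self, n_inputs, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_classes, n_inputs))  # one weight vector per output node
        self.b = np.zeros(n_classes)

    def scores(self, x):
        # Each output node computes a weighted sum of the inputs plus a bias.
        return self.W @ x + self.b

    def predict(self, x):
        # The output node with the largest score gives the predicted class.
        return int(np.argmax(self.scores(x)))

# Usage: classify a 3-dimensional input vector into one of 4 classes.
net = SingleLayerPerceptron(n_inputs=3, n_classes=4)
print(net.predict(np.array([0.2, -1.0, 0.5])))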

In previous chapters, we have examined a variety of generalized CA models, including reversible CA, coupled-map lattices, reaction-diffusion models, random Boolean networks, structurally dynamic CA and lattice gases. This chapter covers an important field that overlaps with CA: neural networks. Beginning with a short historical survey, chapter 10 discusses associative memory and the Hopfield model, stochastic nets, Boltzmann machines, and multi-layered perceptrons. [Pg.507]

As we will discuss a bit later in this section, adding hidden layers while replacing the McCulloch-Pitts step-function threshold with a linear activation yields a multi-layer perceptron that is fundamentally no more powerful than a simple perceptron that has no hidden layers and uses the McCulloch-Pitts step-function threshold. [Pg.539]
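
The collapse mentioned above is easy to check numerically: with linear activations, two weight layers compose into a single matrix, so the hidden layer adds no representational power. The check below is illustrative, not taken from the cited text.

import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 3))   # input -> hidden weights
W2 = rng.normal(size=(2, 5))   # hidden -> output weights
x = rng.normal(size=3)

# A linear "activation" means the two layers are just matrix products,
# which compose into the single matrix W2 @ W1.
two_layer = W2 @ (W1 @ x)
one_layer = (W2 @ W1) @ x
print(np.allclose(two_layer, one_layer))   # True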

We are now ready to introduce the backpropagation learning rule (also called the generalized delta rule) for multi-layered perceptrons, credited to Rumelhart and McClelland [rumel86a]. Figure 10.12 shows a schematic of the multi-layered perceptron's structure. Notice that the design shown, and the only kind we will consider in this chapter, is strictly feed-forward. That is to say, information always flows from the input layer to each hidden layer, in turn, and out into the output layer. There are no feedback loops anywhere in the system. [Pg.540]
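
For readers who want to see the generalized delta rule spelled out, here is a minimal sketch of one backpropagation step for a strictly feed-forward network with a single hidden layer. The sigmoid activation, squared-error loss and learning rate are illustrative choices, not details taken from Rumelhart and McClelland.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W1, W2, lr=0.1):
    # Forward pass: information flows strictly input -> hidden -> output.
    h = sigmoid(W1 @ x)            # hidden-layer activations
    y = sigmoid(W2 @ h)            # output-layer activations

    # Backward pass: delta terms for the output layer and the hidden layer.
    delta_out = (y - target) * y * (1.0 - y)          # error times sigmoid derivative
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)    # error propagated back through W2

    # Generalized delta rule: adjust each weight along the negative gradient.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2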

Consider a simple perceptron with N continuous-valued inputs and one binary (+1/-1) output value. In section 10.5.2 we saw how, in general, an N-dimensional input space is separated by an (N-1)-dimensional hyperplane into two distinct regions. All of the points lying on one side of the hyperplane yield the output +1; all the points on the other side of the hyperplane yield -1. [Pg.550]
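
In code, the two regions are given simply by the sign of a weighted sum; the decision surface w.x + b = 0 is the (N-1)-dimensional hyperplane referred to above. The small two-dimensional example below is an illustrative assumption, not taken from the cited text.

import numpy as np

def perceptron_output(w, b, x):
    # Points with w.x + b >= 0 lie on one side of the hyperplane (+1),
    # all other points lie on the other side (-1).
    return 1 if np.dot(w, x) + b >= 0 else -1

# Example with N = 2: the line x1 + x2 = 1 splits the plane into two regions.
w, b = np.array([1.0, 1.0]), -1.0
print(perceptron_output(w, b, np.array([2.0, 2.0])))   # +1
print(perceptron_output(w, b, np.array([0.0, 0.0])))   # -1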

Neural networks have been introduced in QSAR for non-linear Hansch analyses. The perceptron, which is generally considered a forerunner of neural networks, has been developed by the Russian school of Rastrigin and coworkers [62] within the context of QSAR. The learning machine is another prototype of neural network, which was introduced in QSAR by Jurs et al. [63] for the discrimination between different types of compounds on the basis of their properties. [Pg.416]

The representation of nerve cells as symbolic devices such as perceptrons led to the development of the computer-based models termed artificial neural networks. Since proteins in general, and enzymes in particular, are capable... [Pg.127]

Among the most widespread neural networks are feedforward networks, namely the multilayer perceptron (MLP). This network type has been proven to be a universal function approximator [11]. Another important feature of the MLP is its ability to generalize. Therefore the MLP can be a powerful tool for the design of intrusion detection systems. [Pg.368]

The basic learning mechanism for multilayer networks of neurons is the generalized delta rule, commonly referred to as back propagation. This learning rule is more complex than that employed with the simple perceptron unit because of the greater information content associated with the continuous output variable compared with the binary output of the perceptron. [Pg.150]
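
To make the contrast concrete, the sketch below places the two single-unit update rules side by side: the simple perceptron rule works with the binary thresholded output only, while the delta rule (of which back propagation is the multilayer generalization) uses a continuous output and therefore a graded error signal. The names and the learning rate are illustrative assumptions.

import numpy as np

def perceptron_update(w, x, target, lr=0.1):
    # Simple perceptron rule: only the sign (+1/-1) of the output enters the error.
    y = 1 if np.dot(w, x) >= 0 else -1
    return w + lr * (target - y) * x

def delta_rule_update(w, x, target, lr=0.1):
    # Delta rule: the continuous output carries graded information
    # about how far off the unit is, not just on which side it falls.
    y = np.dot(w, x)
    return w + lr * (target - y) * x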

There are some other weight functions that are used to search for functional signals; for example, weights can be obtained by optimization procedures such as perceptrons or neural networks [29, 30]. Also, different position-specific probability distributions p can be considered. One typical generalization is to use position-specific probability distributions p of k-base oligonucleotides (instead of mononucleotides); another is to exploit Markov chain models, where the probability to generate a particular nucleotide x_t of the signal sequence depends on k >= 1 previous bases (i.e. [Pg.87]
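
As a rough illustration of the position-specific probability distributions mentioned above (the mononucleotide case), the sketch below scores a candidate signal sequence as a product of per-position probabilities; the matrix values are invented for illustration and do not come from the cited text.

import numpy as np

# Hypothetical position-specific probabilities p_i(x): rows are positions in a
# 4-base signal, columns are probabilities of A, C, G, T at that position.
pwm = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.10, 0.70, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])
base_index = {"A": 0, "C": 1, "G": 2, "T": 3}

def signal_score(seq):
    # Score = product over positions i of p_i(x_i); higher means more signal-like.
    return float(np.prod([pwm[i, base_index[b]] for i, b in enumerate(seq)]))

print(signal_score("AGCT"))   # strong match to the hypothetical matrix
print(signal_score("TTTT"))   # weak match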

For m input variables, the pseudo-dimension for prediction by a multilayer perceptron neural network requires that at least m + 1 independent samples are available per node for building a model (Sontag, 1998; Schmitt, 2001). It therefore appears that a larger set of data points is required to fit nonlinear models, such as neural networks, that generally have a large number of parameters (weights) to fit. [Pg.440]
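
One way to read this requirement is as a lower bound on data set size that grows with both the input dimension and the network size; the helper below is a hypothetical illustration of that reading, not a formula from the cited references.

def min_samples(m_inputs, n_nodes):
    # At least (m + 1) independent samples per node, as stated above.
    return (m_inputs + 1) * n_nodes

# e.g. 10 input variables and a network with 20 nodes:
print(min_samples(10, 20))   # 220 independent samples at minimum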

For further information about basic concepts pertaining to artificial neural networks in general, and to multilayer perceptrons in particular, the reader is referred to specialised monographs such as White (1992), Hagan et al. (1996), Mehrotra et al. (1996) and Haykin (1999). An overview of traditional kinds of ANN applications in chemistry can most easily be obtained from the books by Zupan and Gasteiger (1993, 1999), and from the survey papers by Melssen et al. (1994), Smits et al. (1994) and Henson (1998). [Pg.90]

