
Two-layer perceptrons

Schierle and Otto [63] used a two-layer perceptron with error back-propagation for quantitative analysis in ICP-AES. Also, Schierle et al. [64] used a simple neural network [the bidirectional associative memory (BAM)] for qualitative and semiquantitative analysis in ICP-AES. [Pg.272]

We therefore see that there is no point in designing perceptrons containing more than two hidden layers, since two layers are already enough to generate arbitrarily complex decision regions. [Pg.548]

In the previous chapter a simple two-layer artificial neural network was illustrated. Such two-layer, feed-forward networks have an interesting history and are commonly called perceptrons. Similar networks with more than two layers are called multilayer perceptrons, often abbreviated as MLPs. In this chapter the development of perceptrons is sketched, with a discussion of particular applications and limitations. Multilayer perceptron concepts are developed; applications, limitations, and extensions to other kinds of networks are discussed. [Pg.29]

The field of artificial neural networks is new and rapidly growing and, as such, is susceptible to problems with naming conventions. In this book, a perceptron is defined as a two-layer network of simple artificial neurons of the type described in Chapter 2. The term perceptron is sometimes used in the literature to refer to the artificial neurons themselves. Perceptrons have been around for decades (McCulloch & Pitts, 1943) and were the basis of much theoretical and practical work, especially in the 1960s. Rosenblatt coined the term perceptron (Rosenblatt, 1958). Unfortunately, little work was done with perceptrons for quite some time after it was realized that they could be used for only a restricted range of linearly separable problems (Minsky & Papert, 1969). [Pg.29]

Multilayer perceptrons (MLP) are perceptrons with more than two layers, i.e., an input layer, an output layer and at least one layer between them. The middle layers are called... [Pg.33]

Neural network architectures: 2L/FF = two-layer, feed-forward network (i.e., perceptron); 3L or 4L/FF = three- or four-layer, feed-forward network (i.e., multi-layer perceptron). [Pg.104]

If an MLP has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then it is easily shown with linear algebra that any number of layers can be reduced to the standard two-layer input-output model (see perceptron). What makes an MLP different is that some neurons use a nonlinear activation function, which was developed to model the frequency of action potentials, or firing, of biological neurons in the brain. This function is modeled in several ways. [Pg.425]
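The collapse argument can be checked numerically. The following is a minimal numpy sketch, with layer sizes and random weights chosen purely for illustration (none of them come from the source), showing that two stacked linear layers compute exactly the same mapping as a single composed linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 4 inputs -> 6 hidden -> 3 outputs
W1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)
W2, b2 = rng.normal(size=(3, 6)), rng.normal(size=3)
x = rng.normal(size=4)

# Forward pass with purely linear activations in every neuron
hidden = W1 @ x + b1
y_layered = W2 @ hidden + b2

# The same mapping collapsed into one linear layer: W = W2 W1, b = W2 b1 + b2
W = W2 @ W1
b = W2 @ b1 + b2
y_collapsed = W @ x + b

assert np.allclose(y_layered, y_collapsed)
```

The same composition applies to any number of stacked linear layers, which is why nonlinearity in at least the hidden units is essential.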

After the FFT operation there are only 5 data points in the power spectrum, which are then used in the multi-layer perceptron neural net computation. A two-layer neural net with 5 nodes in the input layer, 12 nodes in the hidden layer, and 3 nodes in the output layer is used. [Pg.603]
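A hedged sketch of such a pipeline, using scikit-learn and purely synthetic signals (the excerpt does not specify how the 5 power-spectrum values are obtained; here an 8-point real FFT happens to yield exactly 5 bins, and all data are placeholders):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical data: 100 signals of 8 samples each, 3 target classes
rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 8))
labels = rng.integers(0, 3, size=100)

# Power spectrum: an 8-sample real FFT gives 5 frequency bins
features = np.abs(np.fft.rfft(signals, axis=1)) ** 2   # shape (100, 5)

# 5 input nodes -> 12 hidden nodes -> 3 output classes
net = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0)
net.fit(features, labels)
print(net.predict(features[:3]))
```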

The failure of the perceptron to handle real-world scientific problems suggested at first that it was a wrong model for the brain, and the use of the perceptron was abandoned. Years later, someone realized that the brain can process several independent streams of information simultaneously; it is referred to as a parallel device. Therefore, more than one perceptron may be used in order to accomplish a similar effect. This can be done in two different ways: first, by giving the perceptrons neighbours to form a layer of units which share inputs from the environment; and secondly, by introducing further layers, each taking as its input the output from the previous layer. [Pg.728]

The way in which the neurons are interconnected is referred to as the network architecture or topology. A variety of network architectures has been developed for different applications, but one of the most common is the so-called multilayer perceptron (MLP) network, illustrated in Fig. 2. This is a feedforward network, the term feedforward meaning that information is passed in one direction through the network, from the inputs, through various hidden layers, to the outputs. The inputs are simply the input variables and, as stated above, in the case of formulation these correspond to ingredients, ingredient amounts, and processing conditions. The hidden layers are made up of perceptrons. Typically, one hidden layer is adequate to learn the relationships in most data sets; two hidden layers should enable all... [Pg.2400]
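A minimal sketch of the feedforward pass through such a network, with layer sizes and random weights chosen only for illustration (the formulation inputs and outputs here are hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, layers):
    """Pass an input vector forward through successive (weights, bias) layers."""
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)   # each hidden/output node is a perceptron-like unit
    return a

# Hypothetical formulation problem: 4 inputs (e.g., ingredient amounts and
# processing conditions), one hidden layer of 6 perceptrons, 2 outputs
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(6, 4)), np.zeros(6)),
          (rng.normal(size=(2, 6)), np.zeros(2))]
print(mlp_forward(rng.normal(size=4), layers))
```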

Correct identification and classification of sets of linearly inseparable items requires two major changes to the simple perceptron model. Firstly, more than one perceptron unit must be used. Secondly, we need to modify the nature of the threshold function. One arrangement which can correctly solve our four-sample problem is illustrated in Figure 13. Each neuron in the first layer receives its inputs from the original data, applies the weight vector, thresholds the weighted sum and outputs the appropriate value of zero or one. These outputs serve as inputs to the second, output layer. [Pg.148]
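For concreteness, and assuming the four-sample problem is the classic XOR pattern (the excerpt does not reproduce the data), the arrangement described above can be written out with hand-chosen weights and hard thresholds:

```python
import numpy as np

def step(z):
    # hard threshold used by the classical perceptron unit
    return (z >= 0).astype(int)

# Four XOR-like samples, which no single perceptron can separate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 1, 1, 0])

# First layer: two threshold units computing OR and NAND of the inputs
W1 = np.array([[ 1.0,  1.0],    # OR unit
               [-1.0, -1.0]])   # NAND unit
b1 = np.array([-0.5, 1.5])

# Output layer: AND of the two hidden outputs
W2 = np.array([[1.0, 1.0]])
b2 = np.array([-1.5])

hidden = step(X @ W1.T + b1)
output = step(hidden @ W2.T + b2).ravel()
print(output)                    # -> [0 1 1 0]
assert np.array_equal(output, targets)
```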

Hybrid networks combine the features of two or more types of ANN, the idea being to highlight the strengths and minimize the weaknesses of the different networks. Examples are the Hamming network, which has a perceptron-like and a Hopfield-like layer, and the counterpropagation network, which has a Kohonen layer and a Grossberg outstar layer. [Pg.87]

In order to make the system more powerful, and extend its use to more complex learning applications, we need to make it more complex. For example, the perceptron scheme defined above may be used for classifying n linearly separable classes of vectors, with n > 2, or for classifying two linearly nonseparable classes. The structure obtained is a neural network with a layer of input nodes and a layer of output nodes. It is called the generalized perceptron of Rumelhart, and is represented in Figure... [Pg.256]
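The excerpt does not give Rumelhart's exact formulation; the sketch below is a generic single-layer (input nodes to output nodes) perceptron extended to n > 2 classes, with one output node per class and the usual reward/punish weight update, run on synthetic placeholder data:

```python
import numpy as np

def train_multiclass_perceptron(X, labels, n_classes, epochs=50, lr=1.0):
    """Single-layer perceptron for n classes: one output node per class;
    the predicted class is the output node with the largest weighted sum."""
    W = np.zeros((n_classes, X.shape[1]))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        for x, y in zip(X, labels):
            pred = np.argmax(W @ x + b)
            if pred != y:                 # reinforce the true class, weaken the wrong one
                W[y] += lr * x;  b[y] += lr
                W[pred] -= lr * x;  b[pred] -= lr
    return W, b

# Three linearly separable point clouds in the plane (placeholder data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 2))
               for c in [(-2, 0), (2, 0), (0, 3)]])
labels = np.repeat([0, 1, 2], 30)
W, b = train_multiclass_perceptron(X, labels, n_classes=3)
print((np.argmax(X @ W.T + b, axis=1) == labels).mean())   # training accuracy
```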

The obtained MSE and MAE values are depicted and compared with the average values of the cross-validation MSE on test data in Figure 8.3 (for MLPs with one hidden layer) and Figure 8.4 (for MLPs with two hidden layers). Figure 8.3 and Figure 8.4 show that errors of the approximation computed by the trained multilayer perceptrons for... [Pg.145]
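A hedged sketch of this kind of comparison, using scikit-learn on synthetic data (the excerpt's data set and hidden-layer sizes are not given, so everything below is a placeholder):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

for hidden in [(10,), (10, 10)]:              # one vs. two hidden layers
    net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000, random_state=0)
    mse = -cross_val_score(net, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    mae = -cross_val_score(net, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(hidden, round(mse, 4), round(mae, 4))
```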

Multilayer Perceptrons and Radial Basis Function Networks are universal approximators. They are examples of non-linear layered feed forward networks. It is therefore not surprising to find that there always exists an RBF network capable of accurately mimicking a specified MLP, or vice versa. However, these two networks differ from each other in several important respects [4] ... [Pg.573]
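As an illustration of the first point, a minimal RBF-network sketch (Gaussian basis functions on fixed, arbitrarily chosen centres, with output weights fitted by linear least squares) can closely reproduce a typical MLP-style response such as tanh; the centres, width, and target function are illustrative assumptions, not taken from the excerpt:

```python
import numpy as np

def rbf_design(x, centres, width):
    # Gaussian basis functions evaluated at each input point
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * width ** 2))

x = np.linspace(-3, 3, 200)
y = np.tanh(x)                          # a typical MLP-style activation response

centres = np.linspace(-3, 3, 10)
Phi = rbf_design(x, centres, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

print(np.max(np.abs(Phi @ w - y)))      # maximum deviation of the RBF fit from tanh
```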

The level of air pollution was predicted by means of a multilayer perceptron (MLP) network. The network consists of an input layer, one hidden layer with a proper number of neurones, and an output layer [3]. The input to the network consisted of four subsequent components of the climatic vector after projecting onto the directions of the main components, and the level of pollution from the two preceding days. [Pg.741]
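A short sketch of how such an input vector could be assembled (all arrays below are synthetic placeholders; the actual climatic measurements, principal directions, and pollution levels are not given in the excerpt):

```python
import numpy as np

rng = np.random.default_rng(0)
climate_today = rng.normal(size=12)           # raw climatic measurements (placeholder)
principal_axes = rng.normal(size=(12, 4))     # first four principal directions (placeholder)
pollution_history = [42.0, 38.5]              # pollution levels from the two preceding days

projected = climate_today @ principal_axes    # four projected climatic components
x = np.concatenate([projected, pollution_history])
print(x.shape)                                # (6,) -> six inputs to the MLP
```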

Aires-de-Sousa and Gasteiger used four regression techniques [multiple linear regression, perceptron (an MLF ANN with no hidden layer), MLF ANN, and v-SVM regression] to obtain a quantitative structure-enantioselectivity relationship (QSER). The QSER models the enantiomeric excess in the addition of diethyl zinc to benzaldehyde in the presence of a racemic catalyst and an enantiopure chiral additive. A total of 65 reactions constituted the dataset. Using 11 chiral codes as model input and a three-fold cross-validation procedure, a neural network with two hidden neurons gave the best predictions: ANN with 2 hidden neurons, Rpred = 0.923; ANN with 1 hidden neuron, Rpred = 0.906; perceptron, Rpred = 0.845; MLR, Rpred = 0.776; and v-SVM regression with RBF kernel, Rpred = 0.748. [Pg.377]
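The comparison could be set up along the following lines with scikit-learn; the descriptor matrix and enantiomeric-excess values below are random placeholders (the 65-reaction data set is not reproduced here), so the printed scores only demonstrate the workflow:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.normal(size=(65, 11))                 # 11 chiral codes per reaction (placeholder)
y = rng.uniform(0, 100, size=65)              # enantiomeric excess in % (placeholder)

models = {
    "MLR": LinearRegression(),
    "ANN, 1 hidden neuron": MLPRegressor(hidden_layer_sizes=(1,), max_iter=5000, random_state=0),
    "ANN, 2 hidden neurons": MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0),
    "nu-SVM regression, RBF kernel": NuSVR(kernel="rbf"),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=3, scoring="r2").mean()   # three-fold CV
    print(f"{name}: {r2:.3f}")
```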

