
Three-layer perceptron

Figure 3.6 Common representation of a three-layer perceptron.
The three input parameters to the multi-layer perceptron are the atmospheric pressure and the mole fractions of the liquid (x₁) and vapor (y₁) phases. The output parameter is the boiling temperature. [Pg.252]
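
The snippet below is a minimal Python/NumPy sketch of such a three-layer perceptron: three input units (pressure, x₁, y₁), one hidden layer, and a single output unit for the boiling temperature. The hidden-layer size, the tanh transfer function, and the random weights are illustrative assumptions, not values from the cited source.

```python
import numpy as np

# Sketch of the three-layer perceptron described above:
# 3 inputs (pressure P, liquid mole fraction x1, vapor mole fraction y1),
# one hidden layer, one output (boiling temperature). Layer size and
# weights are illustrative assumptions, not values from the source.

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 5, 1
W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output weights
b2 = np.zeros(n_out)

def forward(x):
    """Propagate one input vector [P, x1, y1] to a predicted temperature."""
    h = np.tanh(W1 @ x + b1)   # hidden layer with sigmoidal activation
    return W2 @ h + b2         # linear output unit for regression

x = np.array([1.013, 0.4, 0.7])  # example: pressure (bar), x1, y1
print(forward(x))                # untrained output; training would fit W1, W2
```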

Neural network architectures: 2L/FF = two-layer, feed-forward network (i.e., perceptron); 3L or 4L/FF = three- or four-layer, feed-forward network (i.e., multi-layer perceptron). [Pg.104]

Fig. 5. Basic anatomy of (a) a three-layer feedforward perceptron and (b) a state recurrent neural network.
Last but not least, the main goal of the present work is feature search in spontaneous speech and in emotional response, aiming at pre-clinical evaluation in order to define tests for AD diagnosis. These features will define the control group (CR) and the three AD levels (ES, IS and AS). A secondary goal is the optimization of computational cost, with the aim of making these techniques useful for real-time applications in real environments; automatic classification is therefore modeled with this in mind. We have used a Multi-Layer Perceptron (MLP) with the number of neurons in the hidden layer (NNHL) set to max(number of attributes, number of classes) and a training step (TS) of NNHL × 10. [Pg.276]
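
A hedged sketch of that sizing heuristic follows, read as NNHL = max(number of attributes, number of classes) and TS = NNHL × 10. That reading of the formula, the mapping of TS onto scikit-learn's max_iter parameter, and the synthetic data are all assumptions for illustration, not details of the cited study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Assumed reading of the heuristic quoted above:
#   NNHL = max(number of attributes, number of classes)
#   TS   = NNHL * 10 (mapped here onto max_iter; an assumption)

n_attributes = 12                    # hypothetical feature count
n_classes = 4                        # CR plus the three AD levels (ES, IS, AS)

nnhl = max(n_attributes, n_classes)  # neurons in the hidden layer
ts = nnhl * 10                       # training step

rng = np.random.default_rng(0)
X = rng.normal(size=(200, n_attributes))  # placeholder features
y = rng.integers(0, n_classes, size=200)  # placeholder class labels

clf = MLPClassifier(hidden_layer_sizes=(nnhl,), max_iter=ts)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```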

Multilayer perceptron (MLP) network with three layers. [Pg.573]

The signal propagation in multi-layer feed-forward (MLF) networks is similar to that of the perceptron-like networks described in Section 44.4.1. For each object, each unit in the input layer is fed with one variable of the X matrix, and each unit in the output layer is intended to provide one variable of the Y table. The values of the input units are passed unchanged to each unit of the hidden layer. The propagation of the signal from there on can be summarized in three steps, as sketched below.
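
A minimal sketch of those three steps for a single hidden unit, assuming the usual MLF conventions (weighted sum, sigmoidal transfer function, forwarding of the output); the weights and input values are illustrative:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

x = np.array([0.2, 0.8, 0.5])   # values passed unchanged from the input units
w = np.array([0.4, -0.3, 0.9])  # weights of one hidden unit (illustrative)
bias = 0.1

net = w @ x + bias   # step 1: weighted sum of the incoming signals
out = sigmoid(net)   # step 2: apply the transfer function
print(out)           # step 3: 'out' is passed on to every unit of the next layer
```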

Aires-de-Sousa and Gasteiger used four regression techniques [multiple linear regression, perceptron (an MLF ANN with no hidden layer), MLF ANN, and ν-SVM regression] to obtain a quantitative structure-enantioselectivity relationship (QSER). The QSER models the enantiomeric excess in the addition of diethyl zinc to benzaldehyde in the presence of a racemic catalyst and an enantiopure chiral additive. A total of 65 reactions constituted the dataset. Using 11 chiral codes as model input and a three-fold cross-validation procedure, a neural network with two hidden neurons gave the best predictions: ANN with 2 hidden neurons, R²pred = 0.923; ANN with 1 hidden neuron, R²pred = 0.906; perceptron, R²pred = 0.845; MLR, R²pred = 0.776; and ν-SVM regression with RBF kernel, R²pred = 0.748. [Pg.377]
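
The sketch below illustrates that comparison protocol, assuming scikit-learn implementations (MLPRegressor for the MLF ANN, NuSVR for ν-SVM regression) and random placeholder data in place of the 65-reaction dataset, which is not reproduced here; the printed scores will not match the published values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import NuSVR

# Placeholder data standing in for the 65 reactions x 11 chiral codes;
# the target mimics a continuous enantiomeric-excess value.
rng = np.random.default_rng(0)
X = rng.normal(size=(65, 11))
y = rng.normal(size=65)

models = {
    "MLR": LinearRegression(),
    "ANN (2 hidden neurons)": MLPRegressor(hidden_layer_sizes=(2,),
                                           max_iter=2000),
    "nu-SVM (RBF kernel)": NuSVR(kernel="rbf"),
}

# Three-fold cross-validation, scored by R^2, as in the study's protocol.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=3, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```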

