Big Chemical Encyclopedia


Multilayer feed-forward network

The four experiments done previously (with R_exp = 0.5, 1, 3 and 4) were used to train the neural network, and the experiment with R_exp = 2 was used to validate the system. Dynamic models of the process-model mismatch for three state variables (x) of the system are considered here: the instant distillate composition (x_D), the accumulated distillate composition (x_a) and the amount of distillate (H_a). The inputs and outputs of the network are as in Figure 12.2. A multilayer feed-forward network, trained with the back-propagation method using a momentum term as well as an adaptive learning rate to speed up convergence, is used in this work. The error between the actual mismatch (obtained from simulation and experiments) and that predicted by the network is used as the error signal to train the network, as described earlier. [Pg.376]
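
A minimal sketch of that training scheme (back-propagation with a momentum term and a simple adaptive learning rate), assuming a one-hidden-layer network; the layer sizes, learning-rate rule and synthetic stand-in data are illustrative, not taken from the original work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the mismatch data: 4 inputs, 3 outputs
# (e.g. the mismatch in x_D, x_a and H_a).
X = rng.uniform(-1, 1, size=(200, 4))
Y = np.sin(X[:, :3]) + 0.1 * rng.standard_normal((200, 3))

n_in, n_hid, n_out = 4, 10, 3
W1 = rng.standard_normal((n_in, n_hid)) * 0.1; b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, n_out)) * 0.1; b2 = np.zeros(n_out)

lr, momentum = 0.05, 0.9
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
prev_mse = np.inf

for epoch in range(2000):
    # Forward pass (tanh hidden layer, linear output).
    H = np.tanh(X @ W1 + b1)
    Y_hat = H @ W2 + b2

    # Error signal: difference between actual and predicted mismatch.
    E = Y_hat - Y
    mse = np.mean(E ** 2)

    # Back-propagation of the MSE gradient.
    dY = 2 * E / len(X)
    gW2 = H.T @ dY; gb2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH; gb1 = dH.sum(axis=0)

    # Momentum update.
    vW1 = momentum * vW1 - lr * gW1; W1 += vW1
    vb1 = momentum * vb1 - lr * gb1; b1 += vb1
    vW2 = momentum * vW2 - lr * gW2; W2 += vW2
    vb2 = momentum * vb2 - lr * gb2; b2 += vb2

    # Crude adaptive learning rate: grow on improvement, shrink otherwise.
    lr = lr * 1.02 if mse < prev_mse else lr * 0.5
    prev_mse = mse

print(f"final training MSE: {prev_mse:.4f}")
```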

K. Hornik, M. Stinchcombe, and H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3: 551-560 (1990). [Pg.63]

Design of the ANN structure is based on a multilayer feed-forward network with a 4-H-5 structure: 4 nodes for the customer groups at the input layer, 5 nodes for the flavor compounds at the output layer, and H nodes in the hidden layer, selected for minimum MSE by varying the number of nodes from 1 to 20. [Pg.429]
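
A hedged sketch of that selection procedure in scikit-learn; the data is synthetic, and only the 4-input/5-output shape and the H = 1..20 sweep follow the description above:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 4))          # 4 input nodes (customer groups)
Y = np.column_stack([X.sum(axis=1) * k for k in np.linspace(0.5, 1.5, 5)])
Y += 0.05 * rng.standard_normal(Y.shape)      # 5 output nodes (flavor compounds)

X_tr, X_va, Y_tr, Y_va = train_test_split(X, Y, test_size=0.3, random_state=0)

best_h, best_mse = None, np.inf
for h in range(1, 21):                        # vary hidden nodes H from 1 to 20
    net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000, random_state=0)
    net.fit(X_tr, Y_tr)
    mse = mean_squared_error(Y_va, net.predict(X_va))
    if mse < best_mse:
        best_h, best_mse = h, mse

print(f"selected H = {best_h} (validation MSE = {best_mse:.4f})")
```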

Multilayer feed-forward neural networks (MLF) represent the type of ANNs most widely applied to electronic tongue data. Their scheme is shown in Fig. 2.17. [Pg.91]

FIGURE 2.17 Scheme of multilayer feed-forward neural networks. [Pg.92]

In the previous chapter a simple two-layer artificial neural network was illustrated. Such two-layer, feed-forward networks have an interesting history and are commonly called perceptrons. Similar networks with more than two layers are called multilayer perceptrons, often abbreviated as MLPs. In this chapter the development of perceptrons is sketched with a discussion of particular applications and limitations. Multilayer perceptron concepts are developed; applications, limitations and extensions to other kinds of networks are discussed. [Pg.29]

Networks based on radial basis functions have been developed to address some of the problems encountered with training multilayer perceptrons: radial basis function networks are guaranteed to converge, and training is much more rapid. Both are feed-forward networks with similar-looking diagrams, and their applications are similar; however, the principles of action of radial basis function networks and the way they are trained are quite different from multilayer perceptrons. [Pg.41]

A multilayer feed-forward neural network with input, hidden and output layers is chosen. The choice follows the recommendation of Hurtado and Alvarez (2001), who argue that radial basis function (RBF) networks are not suitable for bifurcation problems. [Pg.1312]

H. Yang and P. R. Griffiths, Anal. Chem., 71, 751 (1999). Application of Multilayer Feed-Forward Neural Networks to Automated Compound Identification in Low-Resolution Open-Path FT-IR Spectrometry. [Pg.132]

Neural networks have been widely used in fields such as function approximation, pattern recognition, image processing, artificial intelligence and optimization [26, 102]. The multilayer feed-forward artificial neural network is a major type of neural network, consisting of an input layer, one or more hidden layers and an output layer connected in a forward direction. Each layer is composed of many artificial neurons. The output of the neurons in one layer is the input of the next layer, as shown in Fig. 2.6. [Pg.28]
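
A minimal numpy illustration of that layer-to-layer flow (the layer sizes and the tanh activation are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(3)                     # input layer (3 neurons)

# Each weight matrix connects one layer to the next; the output of a layer
# becomes the input of the layer that follows.
layers = [(rng.standard_normal((3, 5)), np.zeros(5)),   # input -> hidden
          (rng.standard_normal((5, 2)), np.zeros(2))]   # hidden -> output

a = x
for W, b in layers:
    a = np.tanh(a @ W + b)                     # forward propagation, layer by layer
print(a)                                       # network output (2 neurons)
```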

After successful implementation of conventional model-reference adaptive controllers on smart structures, the next logical step was to investigate the possibility of using a neural network for adaptive control implementations. The linear and nonlinear mapping properties of neural networks have been extensively utilized in the design of multilayered feed-forward neural networks for the implementation of adaptive control algorithms [10]. [Pg.61]

To construct the multilayer feed-forward neural network models, three cutting parameters, namely work-piece speed, longitudinal feed and radial infeed, are used as the input neurons and the corresponding fractal dimension as the output neuron. Considering the full factorial design, a total of... [Pg.206]

Multilayer Perceptrons and Radial Basis Function Networks are universal approximators. They are examples of non-linear layered feed forward networks. It is therefore not surprising to find that there always exists an RBF network capable of accurately mimicking a specified MLP, or vice versa. However, these two networks differ from each other in several important respects [4] ... [Pg.573]

The feed-forward network can be trained offline in batch mode, using data or a look-up table, with any of the back-propagation training algorithms. The back-propagation algorithm for multilayer networks is a gradient descent optimization procedure in which minimization of a mean square... [Pg.570]

Figure 2 Architecture of a multilayer feed-forward neural network.
It is sometimes claimed that SVMs are better than artificial neural networks. This claim rests on the facts that SVMs have a unique solution, whereas artificial neural networks can become stuck in local minima, and that finding the optimum number of hidden neurons of an ANN requires time-consuming calculations. Indeed, it is true that multilayer feed-forward neural networks can offer models that represent local minima, but they also give consistently good (although suboptimal) solutions, which is not the case with SVM (see examples in this section). Undeniably, for a given kernel and set of parameters, the SVM solution is unique. But an infinite combination of kernels and SVM parameters exists, resulting in an infinite set of unique SVM models. The unique SVM solution therefore brings little comfort to the researcher, because the theory cannot foresee which kernel and set of parameters are optimal for a... [Pg.351]
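
To make the point concrete, here is a hedged scikit-learn sketch on synthetic data: each fixed (kernel, parameter) combination has a unique SVM solution, yet the combinations themselves must still be searched:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=6, random_state=0)

# Every (kernel, C, gamma/degree) combination yields its own unique SVM model;
# cross-validation is needed to pick among them.
grid = {"kernel": ["linear", "poly", "rbf"],
        "C": [0.1, 1, 10],
        "degree": [2, 3],          # used by the polynomial kernel only
        "gamma": ["scale", 0.1]}
search = GridSearchCV(SVC(), grid, cv=5)
search.fit(X, y)
print(search.best_params_, f"CV accuracy = {search.best_score_:.3f}")
```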

An electronic nose and an SVM classifier were evaluated by Distante, Ancona, and Siciliano for the recognition of pentanone, hexanal, water, acetone, and three mixtures of pentanone and hexanal in different concentrations. In a LOO test, the SVM classifier with a degree 2 polynomial kernel gave the best predictions: SVM 4.5% error, RBF neural network 15% error, and multilayer feed-forward ANN 40% error. [Pg.382]
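
A sketch of such a leave-one-out test with a degree-2 polynomial kernel, using synthetic data as a stand-in for the electronic-nose measurements (the 7 classes mirror the four pure odors plus three mixtures):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: 7 odor classes, 8 illustrative sensor features.
X, y = make_classification(n_samples=140, n_features=8, n_informative=6,
                           n_classes=7, random_state=0)

svm = SVC(kernel="poly", degree=2)
scores = cross_val_score(svm, X, y, cv=LeaveOneOut())
print(f"LOO error = {100 * (1 - scores.mean()):.1f}%")
```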

Multilayer perceptron feed-forward network predicting the decay of permeate flux in pulsating conditions (10 neurons in the first hidden layer, 6 neurons in the second hidden layer, 3 neurons in the third hidden layer, and 1 neuron in the output layer). [Pg.580]
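
A hedged sketch of a network with that 10-6-3-1 layout in scikit-learn; the flux data is simulated, and the single input (time under pulsation) is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = rng.uniform(0, 10, size=(500, 1))               # assumed input: time
flux = np.exp(-0.3 * t[:, 0]) + 0.02 * rng.standard_normal(500)  # decaying flux

# Three hidden layers with 10, 6 and 3 neurons; single output neuron.
net = MLPRegressor(hidden_layer_sizes=(10, 6, 3), max_iter=5000, random_state=0)
net.fit(t, flux)
print(net.predict([[2.0], [8.0]]))                  # predicted flux decay
```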

Wang Z., Di Massimo C., Tham M. T., Morris A. J., A procedure for determining the topology of multilayer feed-forward neural networks. Neural Networks, 1994, 7(2), 291-300. [Pg.596]

Let us start with a classic example. We had a dataset of 31 steroids. The spatial autocorrelation vector (more about autocorrelation vectors can be found in Chapter 8) served as the set of molecular descriptors. The task was to model the corticosteroid-binding globulin (CBG) affinity of the steroids. A feed-forward multilayer neural network trained with the back-propagation learning rule was employed as the learning method. The dataset itself was available in electronic form. More details can be found in Ref. [2]. [Pg.206]

There are literally dozens of kinds of neural network architectures in use. A simple taxonomy divides them into two types based on learning algorithms (supervised, unsupervised) and into subtypes based upon whether they are feed-forward or feedback type networks. In this chapter, two other commonly used architectures, radial basis functions and Kohonen self-organizing architectures, will be discussed. Additionally, variants of multilayer perceptrons that have enhanced statistical properties will be presented. [Pg.41]

The only difficult part is finding the values of μ and σ for each hidden unit, and the weights between the hidden and output layers, i.e., training the network. This will be discussed later, in Chapter 5. At this point, it is sufficient to say that training radial basis function networks is considerably faster than training multilayer perceptrons. On the other hand, once trained, the feed-forward process for multilayer perceptrons is faster than for radial basis function networks. [Pg.44]
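
A hedged sketch of that two-stage RBF training: k-means to place the centers μ, a simple width heuristic for σ, and linear least squares for the output weights. This is one common recipe, not necessarily the one developed in Chapter 5:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

# Stage 1: place the hidden-unit centers mu with k-means; pick a shared
# width sigma from the spread of the centers (a common heuristic).
k = 10
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = np.ptp(centers) / np.sqrt(2 * k)

# Gaussian hidden-layer activations.
dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
Phi = np.exp(-dists ** 2 / (2 * sigma ** 2))

# Stage 2: output weights by linear least squares -- no gradient descent,
# which is why RBF training is fast compared with a multilayer perceptron.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(f"training RMSE = {np.sqrt(np.mean((Phi @ w - y) ** 2)):.4f}")
```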

The way in which the neurons are interconnected is referred to as the network architecture or topology. A variety of network architectures has been developed for different applications, but one of the most common is the so-called multilayer perceptron (MLP) network, illustrated in Fig. 2. This is a feed-forward network, feed-forward meaning that information is passed in one direction through the network, from the inputs, through various hidden layers, to the outputs. The inputs are simply the input variables and, as stated above, in the case of formulation these correspond to ingredients, ingredient amounts, and processing conditions. The hidden layers are made up of perceptrons. Typically, one hidden layer is adequate to learn the relationships in most data sets; two hidden layers should enable all... [Pg.2400]

Larger architectures have emerged since then; among them, the feed-forward multilayer perceptron (MLP) network has become the most popular network architecture (Hertz et al. 1991). The disposition of neurons in such an ANN is quite different from that in the brain: they are arranged in layers, each with a different number of neurons. Layers are named according to their position in the architecture: an MLP network has an input layer, an output layer and one or more hidden layers between them. Interconnection between neurons is accomplished by weighted connections that represent the synaptic efficacy of a biological neuron. [Pg.144]

A neural network has the advantage that it is a universal approximator, and the inner PLS model is therefore not limited to some predefined functional form. In Qin and McAvoy (1992) the neural network PLS (NNPLS) algorithm is introduced by replacing the linear inner relationship in equation (4) with a feed-forward multilayer perceptron neural network, such that... [Pg.437]
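
A hedged sketch of the NNPLS idea: compute the outer PLS scores, then fit a small feed-forward network as the inner relation between each score pair. This follows the general scheme only; the exact Qin and McAvoy formulation (their equation (4)) is not reproduced here:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.standard_normal((150, 6))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(150)

# Outer PLS model: project X and y onto latent score pairs (t, u).
pls = PLSRegression(n_components=2).fit(X, y.reshape(-1, 1))
t_scores, u_scores = pls.transform(X, y.reshape(-1, 1))

# Inner relation: replace the linear u = b*t fit with a small MLP per component.
for a in range(2):
    inner = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
    inner.fit(t_scores[:, [a]], u_scores[:, a])
    r2 = inner.score(t_scores[:, [a]], u_scores[:, a])
    print(f"component {a + 1}: inner-MLP R^2 = {r2:.3f}")
```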

During cross-validation of the linear PLS model, model fitting was therefore repeated 18 times (once for each of the 18 rockets) for each latent dimension as the overall complexity increased. In the case of the feed-forward multilayer perceptron neural network, 18 training sessions were required each time a node was added to the hidden layer. [Pg.440]

A multilayer perceptron (MLP) is a feed-forward artificial neural network model that maps sets of input data onto a set of suitable outputs (Patterson 1998). An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. An MLP employs a supervised learning technique called backpropagation for training the network. The MLP is a modification of the standard linear perceptron and can differentiate data that are not linearly separable. [Pg.425]

The above formulation was adapted from Hornik (1991). However, other results concerning approximation by means of feed-forward neural networks (Kůrková, 1992; Hornik et al., 1994; Pinkus, 1998; Kůrková, 2002; Kainen et al., 2007) rely on essentially the same paradigm: the required number of hidden neurons h is unknown; the only guarantee is that some h always exists such that a multilayer perceptron with h hidden neurons can compute a function F with the desired properties. The actual value of h depends on the approximated dependence D and on the aspects discussed in the previous points (the function space considered and the required degree of closeness between F and D); however, the fact that such an h exists is independent of D and of those aspects. [Pg.94]
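
In symbols, the guarantee has roughly the following form (a hedged paraphrase; the cited papers differ in the norms and function classes they use), where σ denotes the hidden-unit activation function:

```latex
% For a target dependence D continuous on a compact set K and any tolerance
% eps > 0, SOME number of hidden neurons h suffices, but its value is not
% known in advance:
\forall \varepsilon > 0 \;\; \exists h \in \mathbb{N} \;\;
\exists \{v_i, w_i, b_i\}_{i=1}^{h} : \quad
\sup_{x \in K} \left| \sum_{i=1}^{h} v_i \,
\sigma\!\left(w_i^{\top} x + b_i\right) - D(x) \right| < \varepsilon
```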

Basic information about Artificial Neural Networks (ANNs) and their applications is introduced. Special attention is given to the description of dynamic processes by means of ANNs. The drying kinetics of agricultural products are presented in the paper. Multilayer Perceptron (MLP) and Radial Basis Function (RBF) network types are proposed for predicting changes in the moisture content and temperature of the material during drying in the vibrofluidized bed. The predictive capability of artificial neural networks is evaluated for feed-forward and recurrent structures. [Pg.569]

