Big Chemical Encyclopedia


Feed-forward network, artificial neural

Artificial neural networks (ANNs) are computing tools made up of simple, interconnected processing elements called neurons. The neurons are arranged in layers: a feed-forward network consists of an input layer, one or more hidden layers, and an output layer. ANNs are well suited to assimilating knowledge about complex processes when they are properly trained on input-output patterns from the process. [Pg.36]
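The layered structure described above can be sketched in a few lines. The following is a minimal forward pass, assuming NumPy; the layer sizes (3 inputs, 4 hidden neurons, 2 outputs) and the tanh activation are illustrative choices, not taken from the source:

```python
import numpy as np

def forward(x, weights, biases):
    """Propagate input x through a feed-forward network, layer by layer.

    weights/biases hold one (W, b) pair per layer, ordered from the
    input layer toward the output layer."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)  # each neuron applies a nonlinear activation
    return a

rng = np.random.default_rng(0)
# 3 inputs -> 4 hidden neurons -> 2 outputs (sizes are illustrative)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
y = forward(np.array([0.1, -0.2, 0.3]), weights, biases)
print(y.shape)  # (2,)
```

Information flows strictly from one layer to the next, which is what makes the network "feed-forward".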

J. R. M. Smits, W. J. Melssen, L. M. C. Buydens and G. Kateman, Using artificial neural networks for solving chemical problems. Part I. Multi-layer feed-forward networks, Chemom. Intell. Lab. Syst., 22(2), 1994, 165-189. [Pg.276]

In the previous chapter a simple two-layer artificial neural network was illustrated. Such two-layer, feed-forward networks have an interesting history and are commonly called perceptrons. Similar networks with more than two layers are called multilayer perceptrons, often abbreviated as MLPs. In this chapter the development of perceptrons is sketched, with a discussion of particular applications and limitations. Multilayer perceptron concepts are then developed; applications, limitations, and extensions to other kinds of networks are discussed. [Pg.29]

Artificial neural networks consist of different types of layers: an input layer, one or more hidden layers, and an output layer. Each of these layers can contain one or more neurons. In a feed-forward network, a neuron in a given layer is connected only to neurons in the next layer, so information flows strictly forward. In other network types the neurons may be connected differently; an example is a recurrent neural network, which also has links connecting neurons to neurons in a previous layer. A fully connected network is one in which every neuron in a layer is connected to every neuron in the next layer. [Pg.361]

Figure 2 illustrates a so-called controlled (supervised) learning paradigm in which the network is trained to operate in a one-step-ahead manner, with the model predicting the object. The feedback can come from two sources, depending on the position of the switch FFN-RecN. In the FFN position the artificial neural network takes a feed-forward structure, while in the RecN position the network structure corresponds to a recurrent network. [Pg.572]

Figure 3 Feed-back and feed-forward artificial neural networks.
An artificial neural network (ANN) model was developed to predict the structure of mesoporous materials from the composition of their synthesis mixtures. The predictive ability of the networks was tested by comparing the mesophase structures predicted by the model with those actually determined by XRD. Among the various ANN models available, three-layer feed-forward neural networks with one hidden layer are known to be universal approximators [11, 12]. The neural network retained in this work is described by the following set of equations, which correlate the network output S (here, the structure of the material) to the input variables U, which represent the normalized composition of the synthesis mixture ... [Pg.872]
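The equations themselves are elided in this excerpt, but a three-layer feed-forward network with one hidden layer generally takes the following form (a generic sketch, not necessarily the authors' exact parameterization):

```latex
S_k = f\!\left( b_k^{(2)} + \sum_{j} w_{kj}^{(2)} \, g\!\left( b_j^{(1)} + \sum_{i} w_{ji}^{(1)} U_i \right) \right)
```

where $U_i$ are the input variables, $S_k$ the network outputs, $w^{(1)}, b^{(1)}$ and $w^{(2)}, b^{(2)}$ the weights and biases of the hidden and output layers, and $g$, $f$ the corresponding activation functions.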

Artificial neural networks (ANNs) represent, as opposed to PLS and MLR, a nonlinear statistical analysis technique [86]. The most commonly used NN is of the feed-forward back-propagation type (Figure 14.2). As with both PLS and MLR, there are a few aspects of NNs to be considered when using this type of analysis technique ... [Pg.390]

Artificial neural networks (feed-forward neural networks, self-organizing neural networks, counterpropagation neural networks, Bayesian neural networks)... [Pg.217]

D. Cirovic, Trends Anal. Chem., 16, 148 (1997). Feed-Forward Artificial Neural Networks: Applications to Spectroscopy. [Pg.129]

R. Kocjancic and J. Zupan, J. Chem. Inf. Comput. Sci., 37, 985 (1997). Application of a Feed-Forward Artificial Neural Network as a Mapping Device. [Pg.130]

Schmidt et al. (1999) implemented two-layer feed-forward artificial neural networks for both... [Pg.361]

After successful lesion detection, morphological and dynamic features have to be assessed manually or automatically. These features may then be analyzed to arrive at a probability of malignancy. By 1997, Abdolmaleki et al. (1997) had already assessed six manually obtained features with a three-layer feed-forward neural network and showed that methods of artificial intelligence are capable of dif-... [Pg.366]

Neural networks have been widely used in function approximation, pattern recognition, image processing, artificial intelligence, optimization, and other fields [26, 102]. The multilayer feed-forward artificial neural network is a major type of neural network, in which an input layer, one or more hidden layers, and an output layer are connected in a forward direction. Each layer is composed of many artificial neurons, and the output of the neurons in one layer is the input to the next layer, as shown in Fig. 2.6. [Pg.28]

A multilayer perceptron (MLP) is a feed-forward artificial neural network model that maps sets of input data onto a set of suitable outputs (Patterson 1998). An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. An MLP employs a supervised learning technique called backpropagation for training the network. The MLP is a modification of the standard linear perceptron and can differentiate data that are not linearly separable. [Pg.425]
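Backpropagation training of an MLP on a not-linearly-separable problem can be sketched as follows. This is a minimal NumPy example on the classic XOR task; the hidden-layer size, learning rate, and epoch count are illustrative choices, not from the source:

```python
import numpy as np

# XOR: the classic problem a linear perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)  # 2 inputs -> 8 hidden
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)  # 8 hidden -> 1 output

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse():
    y = sig(sig(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((y - t) ** 2))

loss_before = mse()
lr = 1.0
for _ in range(5000):
    h = sig(X @ W1 + b1)              # forward pass: hidden activations
    y = sig(h @ W2 + b2)              # forward pass: network output
    dy = (y - t) * y * (1 - y)        # output-layer delta (squared-error loss)
    dh = (dy @ W2.T) * h * (1 - h)    # error backpropagated to the hidden layer
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)
loss_after = mse()
print(loss_before, loss_after)
```

The key step is propagating the output-layer error backward through the weights (`dy @ W2.T`) to obtain gradients for the hidden layer — the "backpropagation" the excerpt refers to.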

Using these experimental data, artificial neural network models for predicting fractal dimension have been developed using a multi-layer feed-forward network trained with the back-propagation algorithm. [Pg.220]

Apart from this variety of applications, NNs have another important property: it has been shown that artificial neural networks are universal approximators, i.e., NNs can be used to approximate unknown functions of many variables very accurately. Specifically, it has been proven that any continuous, real-valued function of n dimensions can be fitted to arbitrary accuracy by a feed-forward neural network with only one hidden layer. For this purpose a neural network can be regarded as a nested function of rather simple functional elements, which can adapt very accurately to a set of known reference points. No knowledge about the form of the underlying function is required. The function approximation is achieved by optimizing the values of a comparably large number of fitting parameters called weights. [Pg.341]
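The one-hidden-layer approximation property can be illustrated numerically. In this sketch the hidden weights are fixed at random and only the output weights are fitted, by least squares, to a known target function; the target function, the 50-unit hidden layer, and the tanh activation are illustrative assumptions, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
target = np.sin(3 * x).ravel()  # the "unknown" function to approximate

# One hidden layer of random tanh features; the output weights are the
# fitting parameters, found here by linear least squares.
n_hidden = 50
W = rng.normal(scale=2.0, size=(1, n_hidden))
b = rng.uniform(-np.pi, np.pi, n_hidden)
H = np.tanh(x @ W + b)                       # hidden-layer activations
w_out, *_ = np.linalg.lstsq(H, target, rcond=None)

err = np.max(np.abs(H @ w_out - target))     # worst-case fit error
print(err)
```

Even with randomly placed hidden units, a modest hidden layer reproduces an oscillatory target closely, and the fit improves as units are added — the practical face of the universal approximation result quoted above.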

Basic information about Artificial Neural Networks (ANNs) and their applications is introduced, with special attention given to the description of dynamic processes by means of ANNs. The drying kinetics of agricultural products are presented in the paper. Multilayer Perceptron (MLP) and Radial Basis Function (RBF) network types are proposed for predicting changes in the moisture content and temperature of the material during drying in a vibrofluidized bed. The predictive capability of the artificial neural networks is evaluated in feed-forward and recurrent structures. [Pg.569]

It is sometimes claimed that SVMs are better than artificial neural networks. This claim rests on the facts that SVMs have a unique solution, whereas artificial neural networks can become stuck in local minima, and that finding the optimum number of hidden neurons of an ANN requires time-consuming calculations. It is true that multilayer feed-forward neural networks can yield models that represent local minima, but they also give consistently good (although suboptimal) solutions, which is not the case with SVMs (see examples in this section). Undeniably, for a given kernel and set of parameters, the SVM solution is unique. But an infinite combination of kernels and SVM parameters exists, resulting in an infinite set of unique SVM models. The unique SVM solution therefore brings little comfort to the researcher, because the theory cannot foresee which kernel and set of parameters are optimal for a... [Pg.351]


See other pages where Feed-forward network, artificial neural is mentioned: [Pg.179]    [Pg.24]    [Pg.49]    [Pg.126]    [Pg.1318]    [Pg.115]    [Pg.268]    [Pg.180]    [Pg.180]    [Pg.259]    [Pg.325]    [Pg.34]    [Pg.372]    [Pg.977]    [Pg.421]    [Pg.339]    [Pg.227]    [Pg.361]    [Pg.364]    [Pg.491]    [Pg.343]    [Pg.381]    [Pg.79]    [Pg.157]   

