
RBF networks

Radial basis function (RBF) networks are a variant of three-layer feed-forward networks (see Fig. 44.18). They contain a pass-through input layer, a hidden layer and an output layer, but they model the data in a different way. The transfer function in the hidden layer of RBF networks is called the kernel or basis function; for a detailed description the reader is referred to references [62,63]. Each node in the hidden layer thus contains such a kernel function. The main difference between the transfer function in MLF and the kernel function in RBF is that the latter (usually a Gaussian function) defines an ellipsoid in the input space. Whereas the MLF network essentially divides the input space into regions via hyperplanes (see e.g. Figs. 44.12c and d), RBF networks divide the input space into hyperspheres by means of kernel functions with specified widths and centres. This can be compared with the density or potential methods in pattern recognition (see Section 33.2.5). [Pg.681]
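
As a minimal sketch (not taken from the cited references), the response of a single Gaussian hidden node to an input vector can be written as below. The centre, width and input values are illustrative placeholders.

```python
import numpy as np

def gaussian_node(x, centre, width):
    """Response of one RBF hidden node: a Gaussian kernel centred at
    `centre` with spread `width`, evaluated at the input vector `x`."""
    r = np.linalg.norm(x - centre)               # distance to the node's centre
    return np.exp(-(r ** 2) / (2.0 * width ** 2))

# Example: an input close to a node's centre gives a response near 1,
# while an input far from the centre gives a response near 0.
x = np.array([1.0, 0.5])
print(gaussian_node(x, centre=np.array([1.0, 0.4]), width=0.5))  # close to 1
print(gaussian_node(x, centre=np.array([5.0, 5.0]), width=0.5))  # close to 0
```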

In chemical practice, problems are far more complex than the example problems described in the previous section. To be able to apply RBF networks properly, it is useful to have a considerable amount of prior knowledge about the... [Pg.684]

The linear inner relation (Equation 4.65) is changed to a nonlinear inner relation, i.e., the y-scores no longer have a linear relation to the x-scores but a nonlinear one. Several approaches for modeling this nonlinearity have been introduced, such as the use of polynomial functions, splines, ANNs, or RBF networks (Wold 1992; Wold et al. 1989). [Pg.176]
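
The idea can be sketched as follows: instead of regressing the y-scores u on the x-scores t with a straight line, a small RBF expansion maps t to u. The scores, the three centres and the common width below are made up for illustration; this is not the algorithm of Wold et al.

```python
import numpy as np

# Made-up latent-variable scores from a PLS decomposition (illustrative only)
t = np.linspace(-2, 2, 40)                          # x-scores
u = np.tanh(1.5 * t) + 0.05 * np.random.randn(40)   # y-scores, nonlinear in t

# Inner relation modelled by a 1-D RBF expansion instead of a straight line
centres = np.array([-1.5, 0.0, 1.5])                # assumed kernel centres
width = 1.0
Phi = np.exp(-(t[:, None] - centres[None, :]) ** 2 / (2 * width ** 2))
coef, *_ = np.linalg.lstsq(Phi, u, rcond=None)      # least-squares output weights

u_hat = Phi @ coef                                  # nonlinear fit of u from t
print("residual sum of squares:", np.sum((u - u_hat) ** 2))
```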

Afantitis et al. investigated the use of radial basis function (RBF) neural networks for the prediction of Tg [140]. Radial basis functions are real-valued functions whose value depends only on the distance from an origin. Using the dataset and descriptors described in Cao's work [130] (see above), RBF networks were trained. The best-performing network models showed high correlations between predicted and experimental values. Unfortunately, the authors do not formally report an RMS error, but a cursory inspection of the data reported in the paper suggests approximate errors of around 10 K. [Pg.138]

To determine whether alternative ANN architectures can lead to improved resolution and successful agent detection, Radial Basis Function (RBF) networks [106] were considered for the same problem. RBFs are networks with one hidden layer associated with a specific, analytically known function. Each hidden-layer node corresponds to a numerical evaluation of the chosen function at a set of parameters; Gaussian waveforms are often the functions of choice in RBFs. The outputs of the nodes are multiplied by weights, summed, and added to a linear combination of the inputs, yielding the network outputs. The unknown parameters (multiplicative weights, means and spreads for the Gaussians, and coefficients for the linear combination of the inputs) are determined by training the RBF network to produce desired outputs for specific inputs. [Pg.361]
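
The forward pass described above can be sketched as follows. All parameter values (centres, spreads, output weights, linear coefficients, bias) are placeholders that would normally be determined by training.

```python
import numpy as np

def rbf_forward(x, centres, spreads, weights, lin_coefs, bias):
    """RBF network with an added linear term:
    output = sum_j w_j * exp(-||x - c_j||^2 / (2 s_j^2)) + a.x + b."""
    d2 = np.sum((centres - x) ** 2, axis=1)       # squared distance to each centre
    hidden = np.exp(-d2 / (2.0 * spreads ** 2))   # Gaussian node outputs
    return weights @ hidden + lin_coefs @ x + bias

# Illustrative parameters (in practice these are found by training)
centres   = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
spreads   = np.array([0.5, 0.7, 0.6])
weights   = np.array([1.2, -0.8, 0.4])
lin_coefs = np.array([0.1, -0.2])
bias      = 0.05

print(rbf_forward(np.array([0.5, 0.5]), centres, spreads, weights, lin_coefs, bias))
```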

One other network that has been used with supervised learning is the radial basis function (RBF) network. Radial functions are relatively simple in form and by definition must increase (or decrease) monotonically with the distance from a certain reference point. Gaussian functions are one example of radial functions. In an RBF network, the inputs are fed to a layer of RBFs, which in turn are weighted to produce an output from the network. If the RBFs are allowed to move or to change size, or if there is more than one hidden layer, then the RBF network is non-linear. An RBF network is shown schematically for the case of n inputs and m basis functions in Fig. 3. The generalized regression neural network, a special case of the RBF network, has been used infrequently, especially in understanding in vitro-in vivo correlations. [Pg.2401]
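
A minimal sketch of the generalized regression neural network mentioned above: it amounts to a Gaussian-kernel-weighted average of the training targets, with one kernel centred on every training point, so the spread is the only free parameter. The data and spread value below are illustrative.

```python
import numpy as np

def grnn_predict(x, X_train, y_train, spread=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of the training
    targets, with one kernel centred on each training point."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * spread ** 2))
    return np.sum(k * y_train) / np.sum(k)

# Illustrative 1-D regression data
X_train = np.linspace(0, 3, 20).reshape(-1, 1)
y_train = np.sin(X_train).ravel()
print(grnn_predict(np.array([1.5]), X_train, y_train))   # close to sin(1.5)
```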

Essentially, the neurofuzzy architecture is a neural network with two additional layers for fuzzification and defuzzification. The fuzzification and input weighting are illustrated graphically in Fig. 9, adapted from the thesis of Bossley. It can be seen that there are similarities with the RBF network, except that the radial functions are now replaced by multivariate membership functions. [Pg.2404]

However, Eq. (5.9) is usually very expensive to implement if the number of data points is large. Thus a generalized RBF network of the following form is usually adopted ... [Pg.139]
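
A sketch of the generalized form: rather than placing one basis function on every data point, a small number of centres is chosen (here by a few crude k-means iterations) and only the output weights are fitted by least squares. The toy data, the number of centres and the common width are assumptions for illustration, not Eq. (5.9) itself.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))                 # many data points
y = np.sin(X[:, 0]) * np.cos(X[:, 1])                 # target values

# Pick a small number of centres with a few k-means iterations
n_centres = 20
centres = X[rng.choice(len(X), n_centres, replace=False)]
for _ in range(10):
    labels = np.argmin(((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1), axis=1)
    for j in range(n_centres):
        if np.any(labels == j):
            centres[j] = X[labels == j].mean(axis=0)

width = 1.0                                           # assumed common spread
Phi = np.exp(-((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # fit output weights only

print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```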

Initially, networks were trained from data obtained from the experimental design conditions given in Figure 7.3. These were radial basis function (RBF) networks, multilayer perceptron (MLP) networks, probabilistic neural networks (PNNs), and generalized regression neural networks (GRNNs), as well... [Pg.174]

Figure 2 depicts the structure of RBF neural networks. RBF networks were introduced into the neural network literature by Broomhead and Lowe (1988). The RBF network model imitates the locally tuned response observed in biological neurons. Neurons with a locally tuned response characteristic can be found in several parts of the nervous system, for example, in cells of the visual cortex sensitive to bars oriented in a certain direction or to other visual features within a small region of the visual field. These locally tuned neurons show response characteristics bounded to a small range of the input space. The theoretical basis of the RBF approach lies in the field of interpolation of multivariate functions. The objective of... [Pg.424]
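
The interpolation viewpoint can be illustrated with a toy example: placing one Gaussian basis function on every data point and solving the resulting linear system reproduces the data exactly, which is also why this exact formulation becomes expensive for large data sets. The data and width below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))       # data points, one basis function each
y = X[:, 0] ** 2 - X[:, 1]                 # function values to interpolate

width = 0.5
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
Phi = np.exp(-D2 / (2 * width ** 2))                  # interpolation matrix

w = np.linalg.solve(Phi, y)                # exact interpolation weights
print(np.allclose(Phi @ w, y))             # True: the data are reproduced exactly
```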

Instead of storing all training facts, as memory-based methods do, it is desirable to form prototypes by remembering a number of facts clustered near each other as one memory object. Neural networks that use localized functions, such as the RBF networks [11], are capable of such a representation. However, their classification decisions are based only on the output values of the network, not on the distance between the data and the known prototype clusters. The theory behind RBF and similar networks [15] does not help to see the problem as that of a geometrical description of complex... [Pg.336]

Mixed-integer programming hyperboxes classification, Bayes Network, Naive Bayes, Liblinear, LibSVM, RBF network, SMO, Logistic, IBk, Bagging, Ensemble selection, Logit Boost, LMT, NBTree, Random Forest, DTNB... [Pg.325]

In 2008, Borah et al. [38] proposed that a neural-network-based E-nose, comprising an array of four tin-oxide gas sensors, can assist tea quality monitoring during quality grading. Principal component analysis (PCA) was used to visualise the different aroma profiles. In addition, K-means and Kohonen's self-organising map (SOM) cluster analyses were performed, and a multilayer perceptron (MLP) network, a radial basis function (RBF) network, and a constructive probabilistic neural network (CPNN) were used for aroma classification [38]. [Pg.106]

B. Tudu, A. Jana, A. Metla, D. Ghosh, N. Bhattacharyya, R. Bandyopadhyay, Electronic nose for black tea quality evaluation by an incremental RBF network. Sens. Actuators B Chem. 138(1), 90-95 (2009)... [Pg.116]

A multilayered feed-forward neural network with input, hidden and output layers is chosen. The choice follows the recommendation of Hurtado and Alvares (2001), who argue that radial basis function (RBF) networks are not suitable for bifurcation problems. [Pg.1312]

No single paradigm among the various ML-based modelling methods, such as ANNs, SVR and GP, is capable of consistently out-performing the others in every modelling task. It is therefore of utmost importance to apply and compare the performance of all the ML methods for a particular modelling task to arrive at the best possible model. Within a class of methods such as ANNs, there exist multiple architectures (e.g. MLP and RBF networks) for performing nonlinear function approximation and supervised classification tasks. Accordingly, all such alternatives within a class of ML methods also need to be tested. [Pg.191]

RBF networks have seen few chemical applications. They were better than multiple linear regression at predicting boiling points from structural parameters. (Reference 191 also contains a good description of the RBF method.) RBF networks resulted in better calibration than partial least squares in the determination of blood glucose by near-infrared spectroscopy. [Pg.100]

This paper compares three types of networks: the ME, BP and RBF networks. The yeast S. cerevisiae serves as an example to illustrate the application of the networks. The main objective of this study is to verify whether modular network architectures, which are supposed to be able to perform task decomposition, can discriminate between reaction pathways in complex biological reaction schemes. [Pg.840]

The fuzzy means algorithm, which is an efficient method for developing RBF network models for dynamic systems, has already been proposed in a recent publication (Sarimveis et al., 2002). Though this algorithm presents some remarkable advantages... [Pg.995]

