Training radial basis function networks

The same limitations that apply to multilayer perceptron networks, in general, hold for radial basis function networks. Training is faster with radial basis function networks, but application is slower, due to the complexity of the calculations. Radial basis function networks require supervised training and hence are limited to those applications for which training data and answers are available. Several books with excellent descriptions of radial basis function networks and their applications are listed in the reference section (Beale and Jackson, 1991; Fu, 1994; Wasserman, 1993). [Pg.46]

In Part II of this book we have encountered three network architectures that require supervised learning: perceptrons, multilayer perceptrons and radial basis function networks. Training for perceptrons and multilayer perceptrons is similar. The goal of... [Pg.51]

Not all neural networks are the same: their connections, elemental functions, training methods and applications may differ in significant ways. The types of elements in a network and the connections between them are referred to as the network architecture. Commonly used elements in artificial neural networks will be presented in Chapter 2. The multilayer perceptron, one of the most commonly used architectures, is described in Chapter 3. Other architectures, such as radial basis function networks and self-organizing maps (SOM), or Kohonen architectures, will be described in Chapter 4. [Pg.17]

Networks based on radial basis functions have been developed to address some of the problems encountered when training multilayer perceptrons: radial basis function networks are guaranteed to converge and training is much more rapid. Both are feed-forward networks with similar-looking diagrams and similar applications; however, the principles of operation of radial basis function networks and the way they are trained are quite different from multilayer perceptrons. [Pg.41]

The only difficult part is finding the values of μ and σ for each hidden unit, and the weights between the hidden and output layers, i.e., training the network. This will be discussed later, in Chapter 5. At this point, it is sufficient to say that training radial basis function networks is considerably faster than training multilayer perceptrons. On the other hand, once trained, the feed-forward process for multilayer perceptrons is faster than for radial basis function networks. [Pg.44]
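As an illustration, the sketch below shows the feed-forward pass of a simple radial basis function network with Gaussian hidden units, each defined by a centre μ and a spread σ, followed by a weighted sum at the output layer. The function names and the use of NumPy are assumptions for illustration only, not part of the original text.

```python
import numpy as np

def rbf_hidden_activations(x, centers, sigmas):
    """Gaussian activations of the hidden layer for a single input vector x.

    centers : (n_hidden, n_inputs) array of basis-function centres (mu)
    sigmas  : (n_hidden,) array of spreads (sigma)
    """
    d2 = np.sum((centers - x) ** 2, axis=1)        # squared distance to each centre
    return np.exp(-d2 / (2.0 * sigmas ** 2))

def rbf_forward(x, centers, sigmas, weights, bias=0.0):
    """Feed-forward pass: a weighted sum of the hidden-layer activations."""
    return weights @ rbf_hidden_activations(x, centers, sigmas) + bias
```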

Perceptrons, multilayer perceptrons and radial basis function networks require supervised training with data for which the answers are known. Some applications require the automatic clustering of data, data for which the clusters and clustering criteria are not known. One of the best known architectures for such problems is the Kohonen self-organizing map (SOM), named after its inventor, Teuvo Kohonen (Kohonen, 1997). In this section the rationale behind such networks is described. [Pg.46]

Radial basis function networks and multilayer perceptrons have similar functions, but their training algorithms are dramatically different. Training radial basis function networks proceeds in two steps. First the hidden layer parameters are determined as a function of the input data, and then the weights between the hidden and output layers are determined from the output of the hidden layer and the target data. [Pg.58]

This process assumes advance knowledge of how many basis functions (or data clusters) will be required to appropriately partition the data space. There are numerous heuristic methods of addressing this issue. Since training radial basis function networks is rapid, it is easy to start with a small number of centers and iteratively increase the number until no further benefit is noticed. Note that the dimension of the basis function means is the same as the dimension of the input data vectors. [Pg.59]
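The text does not prescribe a particular heuristic, but a common choice for this unsupervised step is k-means clustering of the input vectors. The sketch below (NumPy, with hypothetical function names) picks the centres and a simple per-cluster spread; the spread heuristic is an assumption, not taken from the source.

```python
import numpy as np

def choose_centers_kmeans(X, n_centers, n_iter=100, seed=0):
    """Unsupervised step: pick basis-function centres by k-means clustering.

    X : (n_samples, n_inputs) training inputs.  The centres have the same
        dimension as the input vectors, as noted in the text.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign every sample to its nearest centre
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # move each centre to the mean of the samples assigned to it
        for j in range(n_centers):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    # simple heuristic for the spreads: the standard deviation of each cluster
    sigmas = np.array([X[labels == j].std() if np.any(labels == j) else 1.0
                       for j in range(n_centers)])
    return centers, sigmas
```

Starting with a small number of centres and re-running this step with more centres until no further benefit is seen, as described above, is straightforward because each run is fast.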

The second part of training radial basis function networks assumes that the number of basis functions, i.e., the number of hidden units, and their center and variability parameters have been determined. Then all that remains is to find the linear combination of weights that produces the desired output (target) values for each input vector. Since this is a linear problem, convergence is guaranteed and computation proceeds rapidly. This task can be accomplished with an iterative technique based on the perceptron training rule, or with various other numerical techniques. Technically, the problem is a matrix inversion problem ... [Pg.59]
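One of the "other numerical techniques" mentioned above is to solve the linear system directly by least squares (a pseudoinverse). A minimal sketch of this supervised step, assuming Gaussian hidden units and NumPy; names are hypothetical:

```python
import numpy as np

def train_output_weights(X, T, centers, sigmas):
    """Supervised step: solve the linear problem for the hidden-to-output weights.

    X : (n_samples, n_inputs) training inputs
    T : (n_samples, n_outputs) target values
    """
    # hidden-layer activation matrix, one row per training sample
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * sigmas ** 2))
    H = np.hstack([H, np.ones((len(X), 1))])       # bias column
    # least-squares (pseudoinverse) solution; being linear, it cannot get stuck
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W                                       # shape (n_hidden + 1, n_outputs)
```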

It can thus be seen that training radial basis function networks uses both supervised and unsupervised learning: determining the basis function parameters is unsupervised, and solving the linear equations is supervised. [Pg.59]

The second step of training is the error backpropagation algorithm carried out only for the output layer. Since this is a supervised algorithm for one layer only, the training is very rapid, 100-1000 times faster than in the backpropagation multilayer network. This makes the radial basis function network very attractive. Also, this network can be easily modeled using computers; however, its hardware implementation would be difficult. [Pg.2053]

Radial basis function networks are as good at function approximation and classification as backpropagation networks, but require much less time to train and do not suffer as severely from local minima or connection-weight freezing (sometimes called network paralysis) problems. Radial basis function networks are also known to be universal approximators and provide a convenient measure of the reliability and confidence of their output (based on the density of training data). In addition, the functional equivalence of these networks with fuzzy inference systems has shown that the membership functions within a rule are equivalent to Gaussian functions with the same variance (σ²) and the number of receptive field nodes is equivalent to the number of fuzzy if-then rules. [Pg.29]

Afantitis et al. investigated the use of radial basis function (RBF) neural networks for the prediction of Tg [140]. Radial basis functions are real-valued functions whose value depends only on their distance from an origin. Using the dataset and descriptors described in Cao's work [130] (see above), RBF networks were trained. The best-performing network models showed high correlations between predicted and experimental values. Unfortunately the authors do not formally report an RMS error, but a cursory inspection of the data reported in the paper suggests approximate errors of around 10 K. [Pg.138]

To determine whether alternative ANN architectures can lead to improved resolution and successful agent detection, radial basis function (RBF) networks [106] were considered for the same problem. RBFs are networks with one hidden layer associated with a specific, analytically known function. Each hidden-layer node corresponds to a numerical evaluation of the chosen function at a set of parameters; Gaussian waveforms are often the functions of choice in RBFs. The outputs of the nodes are multiplied by weights, summed, and added to a linear combination of the inputs, yielding the network outputs. The unknown parameters (multiplicative weights, means and spreads for the Gaussians, and coefficients for the linear combination of the inputs) are determined by training the RBF network to produce the desired outputs for specific inputs. [Pg.361]
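A minimal sketch of the output computation described in this excerpt, i.e., weighted Gaussian node outputs plus a linear combination of the inputs; the parameter names are hypothetical, and the earlier assumptions about NumPy apply.

```python
import numpy as np

def rbf_with_linear_term(x, centers, sigmas, w_rbf, w_lin, bias=0.0):
    """Network output: weighted Gaussian node outputs plus a linear
    combination of the inputs, as described in the excerpt above."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * sigmas ** 2))        # Gaussian node outputs
    return w_rbf @ phi + w_lin @ x + bias
```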

Initially, networks were trained from data obtained from the experimental design conditions given in Figure 7.3. These were radial basis function (RBF) networks, multilayer perceptron (MLP) networks, probabilistic neural networks (PNNs), and generalized regression neural networks (GRNNs), as well... [Pg.174]

The ANNs were developed in an attempt to imitate, mathematically, the characteristics of biological neurons. They are composed of interconnected artificial neurons responsible for processing input-output relationships; these relationships are learned by training the ANN with a set of input-output patterns. ANNs can be used for different purposes; approximation of functions and classification are examples of such applications. The most common types of ANNs used for classification are feedforward neural networks (FNNs) and radial basis function (RBF) networks. Probabilistic neural networks (PNNs) are a kind of RBF network that uses a Bayesian decision strategy (Dehghani et al., 2006). [Pg.166]

