Big Chemical Encyclopedia


Radial basis function neural networks

Girosi, F., and Anzellotti, G., Rates of convergence for radial basis functions and neural networks. In Artificial Neural Networks for Speech and Vision (R. J. Mammone, ed.), p. 97. Chapman & Hall, London, 1993. [Pg.204]

J. Park and I.W. Sandberg, Universal approximation using radial basis function networks. Neural Computation, 3 (1991) 246-257. [Pg.698]

B. Carse and T.C. Fogarty, Fast evolutionary learning of minimal radial basis function neural networks using a genetic algorithm. Lecture Notes in Computer Science, 1143 (1996) 1-22. [Pg.698]

All of the studies above have used back-propagation multilayer perceptrons, but many other varieties of neural network exist that have been applied to PyMS data. These include minimal neural networks,117,119 radial basis functions,114,120 self-organizing feature maps,110,121 and autoassociative neural networks.122,123... [Pg.332]

Models of the form y = f(x) or y = f(x1, x2, ..., xm) can be linear or nonlinear; they can be formulated as a relatively simple equation or implemented as a less evident algorithmic structure, for instance in artificial neural networks (ANN), tree-based methods (CART), local estimation of y by radial basis functions (RBF), k-NN-like methods, or splines. This book focuses on linear models of the form... [Pg.118]
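As a minimal sketch of the linear case, a model y = b0 + b1*x1 + b2*x2 can be fitted by ordinary least squares. The synthetic data and NumPy usage below are illustrative assumptions, not taken from the book itself:

import numpy as np

# Synthetic data for y = 1 + 2*x1 - 0.5*x2 plus noise (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0.0, 0.05, size=50)

# Prepend a column of ones for the intercept and solve the least-squares problem
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # close to [1.0, 2.0, -0.5]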

Afantitis et al. investigated the use of radial basis function (RBF) neural networks for the prediction of Tg [140]. Radial basis functions are real-valued functions whose value depends only on the distance from an origin. Using the dataset and descriptors described in Cao's work [130] (see above), RBF networks were trained. The best-performing network models showed high correlations between predicted and experimental values. Unfortunately, the authors do not formally report an RMS error, but a cursory inspection of the data reported in the paper suggests approximate errors of around 10 K. [Pg.138]

We view the real or the simulated system as a black box that transforms inputs into outputs. Experiments with such a system are often analyzed through an approximating regression or analysis-of-variance model. Other types of approximating models include those for Kriging, neural nets, radial basis functions, and various types of splines. We call such approximating models metamodels; other names include auxiliary models, emulators, and response surfaces. The simulation itself is a model of some real-world system. The goal is to build a parsimonious metamodel that describes the input-output relationship in simple terms. [Pg.288]
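As a toy sketch of the idea: the "simulation" below is a stand-in black box, and the quadratic regression metamodel is one arbitrary parsimonious choice. Both are assumptions for illustration, not taken from the text:

import numpy as np

def simulate(x):
    # Stand-in black-box simulation: only inputs and outputs are observed
    rng = np.random.default_rng(1)
    return np.sin(3.0 * x) + 0.1 * rng.normal(size=x.shape)

# Run the simulation at a small experimental design
x = np.linspace(0.0, 1.0, 20)
y = simulate(x)

# Fit a parsimonious quadratic regression metamodel y ~ b0 + b1*x + b2*x^2
metamodel = np.poly1d(np.polyfit(x, y, deg=2))
print(metamodel(0.5))  # cheap surrogate prediction instead of a simulation run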

Not all neural networks are the same; their connections, elemental functions, training methods, and applications may differ in significant ways. The types of elements in a network and the connections between them are referred to as the network architecture. Commonly used elements in artificial neural networks will be presented in Chapter 2. The multilayer perceptron, one of the most commonly used architectures, is described in Chapter 3. Other architectures, such as radial basis function networks and self-organizing maps (SOM), or Kohonen architectures, will be described in Chapter 4. [Pg.17]

Other sigmoidal functions, such as the hyperbolic tangent function, are also commonly used. Finally, Radial Basis Function neural networks, to be described later, use a symmetric function, typically a Gaussian function. [Pg.25]
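For concreteness, the functions named above can be written down directly. A small sketch (NumPy assumed; the function names are mine, not from the text):

import numpy as np

def logistic(a):
    # Sigmoidal activation commonly used in multilayer perceptrons
    return 1.0 / (1.0 + np.exp(-a))

def hyperbolic_tangent(a):
    # Another commonly used sigmoidal activation
    return np.tanh(a)

def gaussian_rbf(x, center, width):
    # Symmetric Gaussian used in radial basis function networks:
    # its value depends only on the distance of x from the center
    r = np.linalg.norm(np.asarray(x) - np.asarray(center))
    return np.exp(-(r / width) ** 2)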

There are literally dozens of kinds of neural network architectures in use. A simple taxonomy divides them into two types based on learning algorithm (supervised, unsupervised) and into subtypes based on whether they are feed-forward or feedback networks. In this chapter, two other commonly used architectures, radial basis functions and Kohonen self-organizing architectures, will be discussed. Additionally, variants of multilayer perceptrons that have enhanced statistical properties will be presented. [Pg.41]

Despite the fact that the neural network literature increasingly contains examples of radial basis function network applications, their use in genome informatics has rarely been reported, not because the potential for applications is absent, but more likely because of a lag time between development of the technology and its application to a given field. Casadio et al. (1995) used a radial basis function network to optimally predict the free energy contributions due to hydrogen bonds, hydrophobic interactions, and the unfolded state, with simple input measures. [Pg.46]

One other network that has been used with supervised learning is the radial basis function (RBF) network. Radial functions are relatively simple in form and, by definition, must increase (or decrease) monotonically with the distance from a certain reference point. Gaussian functions are one example of radial functions. In an RBF network, the inputs are fed to a layer of RBFs, which in turn are weighted to produce an output from the network. If the RBFs are allowed to move or to change size, or if there is more than one hidden layer, then the RBF network is nonlinear. An RBF network is shown schematically for the case of n inputs and m basis functions in Fig. 3. The generalized regression neural network, a special case of the RBF network, has been used, though infrequently, especially in understanding in vitro-in vivo correlations. [Pg.2401]

Fig. 3 Schematic drawing of a radial basis function neural network.
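A minimal sketch of the architecture in Fig. 3: n inputs feed a layer of m Gaussian basis functions whose outputs are weighted to form the network output. Fixing the centers at a subset of the training points and fitting the output weights by least squares is one simple training choice; all names and implementation details here are assumptions, not taken from the cited source:

import numpy as np

class RBFNetwork:
    # Inputs -> layer of m Gaussian RBFs -> weighted sum output (cf. Fig. 3)
    def __init__(self, centers, width):
        self.centers = np.asarray(centers)  # m centers, each of dimension n
        self.width = width
        self.weights = None

    def _design(self, X):
        # Phi[i, j] = exp(-||x_i - c_j||^2 / width^2)
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / self.width ** 2)

    def fit(self, X, y):
        # The output layer is linear, so the weights solve a least-squares problem
        self.weights, *_ = np.linalg.lstsq(self._design(np.asarray(X)), y, rcond=None)
        return self

    def predict(self, X):
        return self._design(np.asarray(X)) @ self.weights

# Toy usage: approximate y = sin(x) with m = 10 basis functions on n = 1 input
X = np.linspace(0.0, 2.0 * np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel()
net = RBFNetwork(centers=X[::4], width=1.0).fit(X, y)
print(np.max(np.abs(net.predict(X) - y)))  # small training error

With the basis functions held fixed, training reduces to linear regression, which is one reason RBF networks are often fast to fit compared with back-propagation training of multilayer perceptrons.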
Some historically important artificial neural networks are Hopfield Networks, Perceptron Networks, and Adaline Networks, while the most well known are Backpropagation Artificial Neural Networks (BP-ANN), Kohonen Networks (K-ANN, or Self-Organizing Maps, SOM), Radial Basis Function Networks (RBFN), Probabilistic Neural Networks (PNN), Generalized Regression Neural Networks (GRNN), Learning Vector Quantization Networks (LVQ), and Adaptive Bidirectional Associative Memory (ABAM). [Pg.59]

Lohninger, H. (1993). Evaluation of Neural Networks Based on Radial Basis Functions and Their Application to the Prediction of Boiling Points from Structural Parameters. J. Chem. Inf. Comput. Sci., 33, 736-744. [Pg.609]

Tetteh, J., Suzuki, T., Metcalfe, E. and Howells, S. (1999). Quantitative Structure-Property Relationships for the Estimation of Boiling Point and Flash Point Using a Radial Basis Function Neural Network. J. Chem. Inf. Comput. Sci., 39, 491-507. [Pg.653]

Yao XJ, Panaye A, Doucet JP, Chen HF, Zhang RS, et al. Comparative classification study of toxicity mechanisms using support vector machines and radial basis function neural networks. Anal Chim Acta 2005;535:259-73. [Pg.198]

Panaye A, Fan BT, Doucet JP, Yao XJ, Zhang RS, et al. Quantitative structure-toxicity relationships (QSTRs): A comparative study of various nonlinear methods. General regression neural network, radial basis function neural network and support vector machine in predicting toxicity of nitro- and cyano-aromatics to Tetrahymena pyriformis. SAR QSAR Environ Res 2006;17:75-91. [Pg.198]

Various density-functional-theory-based descriptors were probed with success by Arulmozhiraja and Morita [155]. In 2005, Hirokawa and coworkers employed Hartree-Fock theory, which identifies polarization as a key parameter for QSAR on AhR binding [156]. Wang et al. extended their work to include polybrominated compounds and reached cross-validated r2 values of 0.580 and 0.680 using CoMFA and CoMSIA, respectively [157]. Zheng and coworkers [158] employed radial basis function neural networks and obtained... [Pg.333]

