Big Chemical Encyclopedia


Gaussian radial basis

L. Kiernan, J.D. Mason and K. Warwick, Robust initialisation of Gaussian radial basis function networks using partitioned k-means clustering. Electron. Lett., 32 (1996) 671-672. [Pg.698]

Figure 4.3 Output from a Gaussian radial basis function for a single input value x.
In nonlinearly separable cases, SVM maps the vectors into a higher dimensional feature space using a kernel function K(x_i, x_j). The Gaussian radial basis... [Pg.225]

For datasets that are not linearly separable, support vector machines map the data into a higher dimensional space, where the training set is separable, via some transformation Φ: x → Φ(x). A kernel function K(x_i, x_j) = ⟨Φ(x_i), Φ(x_j)⟩ computes inner products in the expanded feature space. Kernel functions such as the linear kernel K(x_i, x_j) = x_i · x_j and the Gaussian (radial-basis function) kernel K(x_i, x_j) = exp(−‖x_i − x_j‖²/2σ²) are widely used. [Pg.138]
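As an illustration of the two kernels named above, here is a minimal NumPy sketch (not from the cited source; the function names and test vectors are ours):

```python
import numpy as np

def linear_kernel(xi, xj):
    """Linear kernel: the plain inner product <xi, xj>."""
    return np.dot(xi, xj)

def gaussian_rbf_kernel(xi, xj, sigma=1.0):
    """Gaussian (radial-basis function) kernel:
    exp(-||xi - xj||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma ** 2))

x1 = np.array([1.0, 2.0])
x2 = np.array([2.0, 0.5])
print(linear_kernel(x1, x2))        # 3.0
print(gaussian_rbf_kernel(x1, x2))  # exp(-1.625) ~ 0.197
```

Both return the inner product of the two points in some feature space; for the Gaussian kernel that space is infinite-dimensional, which is why the kernel is evaluated through the closed form above rather than by constructing Φ(x) explicitly.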

Shin, M., Sargent, R., and Goel, A. Gaussian radial basis functions for simulation meta-modeling. In Proceedings of the Winter Simulation Conference, volume 1, pages 483-488, 2002. [Pg.224]

Traditional AAKR adopts as kernel function the Gaussian Radial Basis Function (RBF) with bandwidth parameter h, i.e. ... [Pg.918]
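The excerpt above is truncated, but the standard AAKR reconstruction with a Gaussian RBF kernel is a bandwidth-weighted average of the memory vectors. A minimal sketch under that assumption (the array names are illustrative):

```python
import numpy as np

def aakr_reconstruct(X_memory, x_query, h=1.0):
    """Auto-Associative Kernel Regression with a Gaussian RBF kernel.

    Each memory vector is weighted by exp(-d^2 / (2 h^2)), where d is its
    distance to the query and h is the bandwidth parameter; the
    reconstruction is the normalized weighted average.
    """
    d2 = np.sum((X_memory - x_query) ** 2, axis=1)  # squared distances
    w = np.exp(-d2 / (2.0 * h ** 2))                # Gaussian RBF weights
    return (w @ X_memory) / np.sum(w)

# Memory matrix of past observations (rows) and a new, noisy query vector
X_mem = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [3.0, 4.0]])
print(aakr_reconstruct(X_mem, np.array([1.05, 2.4]), h=0.5))
```

A small bandwidth h makes the reconstruction follow the nearest memory vectors closely; a large h averages over the whole memory.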

Kainen, P.C., Kurkova, V. and Sanguineti, M. (2007). Estimates of approximation rates by Gaussian radial-basis functions. In Beliczynski, B., Iwanowski, M. and Ribeiro, B. (eds.), Adaptive and Natural Computing Algorithms, Springer, Berlin, pp. 11-18. [Pg.111]

Figure 25 SVM classification models for the dataset from Table 3: (a) Gaussian radial basis function kernel, σ = 1, Eq. [66]; (b) B spline kernel, degree 1, Eq. [72].
Frequently the exclusive use of the RBF kernel is rationalized by claiming that it is the best possible kernel for SVM models. The simple tests presented in this chapter (datasets from Tables 1-6) suggest that other kernels might be more useful for particular problems. For a comparative evaluation, we review below several SVM classification models obtained with five important kernels (linear, polynomial, Gaussian radial basis function, neural, and anova) and show that the SVM prediction capability varies significantly with the kernel type and the parameter values used and that, in many cases, a simple linear model is more predictive than nonlinear kernels. [Pg.352]

In this section, we compared the prediction capabilities of five kernels, namely linear, polynomial, Gaussian radial basis function, neural, and anova. Several guidelines that might help the modeler obtain a predictive SVM model can be extracted from these results: (1) it is important to compare the predictions of a large number of kernels and combinations of parameters; (2) the linear kernel should be used as a reference to compare the results from nonlinear kernels; (3) some datasets can be separated with a linear hyperplane, and in such instances the use of a nonlinear kernel should be avoided; and (4) when the relationships between input data and class attribution are nonlinear, RBF kernels do not necessarily give the optimum SVM classifier. [Pg.362]
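Guidelines (1) and (2) are easy to follow mechanically. Here is a sketch using scikit-learn (our assumption; the chapter's own software is not specified here) that benchmarks the built-in linear, polynomial, RBF and sigmoid ("neural") kernels against each other; an anova kernel would have to be supplied as a custom callable:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy binary classification problem standing in for the chapter's datasets
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

kernels = {
    "linear": SVC(kernel="linear"),
    "poly (degree 2)": SVC(kernel="poly", degree=2),
    "rbf": SVC(kernel="rbf", gamma="scale"),
    "sigmoid": SVC(kernel="sigmoid"),
}

# Cross-validated accuracy gives a like-for-like comparison across kernels,
# with the linear kernel serving as the reference model
for name, clf in kernels.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:16s} accuracy = {scores.mean():.3f}")
```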

SvmFu, http://five-percent-nation.mit.edu/SvmFu/. SvmFu, by Rifkin, is a C++ package for SVM classification. Kernels available include linear, polynomial, and Gaussian radial basis function. [Pg.391]

Radial basis function networks (RBF) are a variant of three-layer feed-forward networks (see Fig. 44.18). They contain a pass-through input layer, a hidden layer and an output layer, but a different approach to modelling the data is used. The transfer function in the hidden layer of RBF networks is called the kernel or basis function; for a detailed description the reader is referred to references [62,63]. Each node in the hidden layer thus contains such a kernel function. The main difference between the transfer function in MLF and the kernel function in RBF is that the latter (usually a Gaussian function) defines an ellipsoid in the input space. Whereas the MLF network basically divides the input space into regions via hyperplanes (see e.g. Figs. 44.12c and d), RBF networks divide the input space into hyperspheres by means of the kernel function with specified widths and centres. This can be compared with the density or potential methods in pattern recognition (see Section 33.2.5). [Pg.681]
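A minimal sketch of the forward pass just described (pass-through inputs, Gaussian kernel nodes in the hidden layer, linear output; the parameter values are arbitrary):

```python
import numpy as np

def rbf_forward(x, centers, widths, W_out, b_out):
    """Forward pass of a three-layer RBF network.

    The input layer is a pass-through; hidden node m evaluates a Gaussian
    kernel of given width around its centre; the output layer is linear.
    """
    d2 = np.sum((centers - x) ** 2, axis=1)      # squared distance to each centre
    hidden = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian kernel activations
    return W_out @ hidden + b_out

centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])  # 3 hidden nodes
widths  = np.array([0.5, 0.5, 0.5])                       # specified widths
W_out   = np.array([[1.0, -1.0, 0.5]])                    # 1 output unit
b_out   = np.array([0.1])

print(rbf_forward(np.array([0.9, 1.1]), centers, widths, W_out, b_out))
```

Because each hidden activation depends only on the distance to a centre, the level sets of a node are hyperspheres in the input space, exactly the division described above.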

Of the several approaches that draw upon this general description, radial basis function networks (RBFNs) (Leonard and Kramer, 1991) are probably the best-known. RBFNs are similar in architecture to back propagation networks (BPNs) in that they consist of an input layer, a single hidden layer, and an output layer. The hidden layer makes use of Gaussian basis functions that result in inputs projected on a hypersphere instead of a hyperplane. RBFNs therefore generate spherical clusters in the input data space, as illustrated in Fig. 12. These clusters are generally referred to as receptive fields. [Pg.29]

Among these, the most widely used is the radial basis function network (RBFN). The key distinctions among these methods are summarized in Table I and discussed in detail in Bakshi and Utojo (1998). An RBFN is an example of a method that can be used for both input analysis and input-output analysis. As discussed earlier, the basis functions in RBFNs are of the form φ(‖x_j − t_m‖₂), where t_m denotes the center of the basis function. One of the most popular basis functions is the Gaussian,... [Pg.40]

When an online interpolator is used to estimate the uncertain term, the interpolation error g can be kept bounded, provided that a suitable interpolator structure is chosen [26, 28]. Among universal approximators, Radial Basis Function Interpolators (RBFIs) provide good performance with a relatively simple structure. Hence, Gaussian RBFs have been adopted, i.e.,... [Pg.103]

To determine whether alternative ANN architectures can lead to improved resolution and successful agent detection, Radial Basis Function (RBF) networks [106] were considered for the same problem. RBFs are networks with one hidden layer associated with a specific, analytically known function. Each hidden layer node corresponds to a numerical evaluation of the chosen function at a set of parameters; Gaussian waveforms are often the functions of choice in RBFs. The outputs of the nodes are multiplied by weights, summed, and added to a linear combination of the inputs, yielding the network outputs. The unknown parameters (multiplicative weights, means and spreads for the Gaussians, and coefficients for the linear combination of the inputs) are determined by training the RBF network to produce desired outputs for specific inputs. [Pg.361]
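A common simplification of the training scheme described above is to fix the Gaussian centres and spreads first and then fit everything that enters linearly (the multiplicative weights, the coefficients of the linear input term, and a bias) in one least-squares step. A sketch of that two-stage variant, on a toy 1-D regression problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression target
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)

# Stage 1: fix Gaussian centres (a random subset of the data) and a constant spread
centers = X[rng.choice(len(X), size=10, replace=False)]
sigma = 1.0
d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
H = np.exp(-d2 / (2 * sigma**2))      # hidden-node outputs, shape (80, 10)

# Stage 2: network output = H @ w + X @ c + b, which is linear in (w, c, b),
# so it can be fit in closed form by least squares
A = np.hstack([H, X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print("training RMSE:", np.sqrt(np.mean((y - A @ coef) ** 2)))
```

The full scheme in the passage also adapts the means and spreads during training (e.g. by gradient descent), which this sketch deliberately leaves out.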

Evaluation of the integrals that arise in the calculations was for some time a primary problem in the field. A most important development in this context was the introduction of Gaussian-type basis functions by Boys [32], who showed that all of the integrals in SCF theory could be evaluated analytically if the radial parts of the basis functions were of the form P(x, y, z)·exp(−r²). The first ten functions are listed by Hehre, Radom, Schleyer and Pople [33] and we repeat them here ... [Pg.216]
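The list itself is truncated in this excerpt; for orientation, a sketch of the ten functions in the standard Cartesian-Gaussian ordering (one s, three p, six d), with normalization constants and the orbital exponent omitted:

s:  exp(−r²)
p:  x·exp(−r²),  y·exp(−r²),  z·exp(−r²)
d:  x²·exp(−r²),  y²·exp(−r²),  z²·exp(−r²),  xy·exp(−r²),  xz·exp(−r²),  yz·exp(−r²)

This is the generic Cartesian set, not necessarily the exact presentation used by Hehre, Radom, Schleyer and Pople.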

Other sigmoidal functions, such as the hyperbolic tangent function, are also commonly used. Finally, Radial Basis Function neural networks, to be described later, use a symmetric function, typically a Gaussian function. [Pg.25]

The hidden units of a radial basis function network are not the same as those used for a multilayer perceptron, and the weights between input and hidden layer have different meanings. Transfer functions typically used include the Gaussian function, spline functions and various quadratic functions; they all are smooth functions which taper off as distance from a center point increases. In two dimensions, the Gaussian is the well-known bell-shaped curve; in three dimensions it forms a hill. [Pg.41]
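To make the "taper off with distance" behaviour concrete, here is a small sketch evaluating the Gaussian alongside an inverse multiquadric (one of the quadratic family) at increasing distances from the centre (the function names and constants are ours):

```python
import numpy as np

def gaussian(r, sigma=1.0):
    """Gaussian radial function: the bell curve in 2-D, a hill in 3-D."""
    return np.exp(-r**2 / (2.0 * sigma**2))

def inverse_multiquadric(r, c=1.0):
    """Inverse multiquadric: another smooth radial function that tapers with r."""
    return 1.0 / np.sqrt(r**2 + c**2)

r = np.linspace(0.0, 3.0, 7)        # distances from the centre point
print(np.round(gaussian(r), 3))     # [1.    0.882 0.607 0.325 0.135 0.044 0.011]
print(np.round(inverse_multiquadric(r), 3))
```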

Figure 4.4 illustrates a radial basis function solution to the hypothetical depression classification problem. The algorithm that implemented the radial basis function application determined that seven cluster sites were needed for this problem. A constant variability term, σ² = 0.1, was used for each hidden unit. Shown in the diagram are the two central parameters (because there are two input units) for each of the seven Gaussian functions. [Pg.45]

One other network that has been used with supervised learning is the radial basis function (RBF) network. Radial functions are relatively simple in form, and by definition must increase (or decrease) monotonically with the distance from a certain reference point. Gaussian functions are one example of radial functions. In a RBF network, the inputs are fed to a layer of RBFs, which in turn are weighted to produce an output from the network. If the RBFs are allowed to move or to change size, or if there is more than one hidden layer, then the RBF network is non-linear. An RBF network is shown schematically for the case of n inputs and m basis functions in Fig. 3. The generalized regression neural network, a special case of the RBF network, has been used, though infrequently, especially in understanding in vitro-in vivo correlations. [Pg.2401]
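Since the generalized regression neural network mentioned above reduces to a kernel-weighted average of the training targets (the Nadaraya-Watson form), it fits in a few lines; a minimal sketch with Gaussian RBFs (the names and data are illustrative):

```python
import numpy as np

def grnn_predict(X_train, y_train, x_query, sigma=0.5):
    """GRNN prediction: one Gaussian RBF node per training sample,
    output = kernel-weighted average of the training targets."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.dot(w, y_train) / np.sum(w)

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 0.8, 0.9, 0.1])
print(grnn_predict(X_train, y_train, np.array([1.5])))
```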

The Analyze software uses the Kernel PLS method [114] with two key parameters, the number of latent variables and sigma. In this study these values were fixed at 5 and 10, respectively. K-PLS uses kernels and can therefore be seen as a nonlinear extension of the PLS method. The commonly used radial basis function kernel or Gaussian kernel was applied, where the kernel is expressed as [142]... [Pg.407]
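Whatever K-PLS implementation is used, it starts from the same object: the Gaussian Gram matrix over the training samples, usually double-centred before latent variables are extracted. A sketch of that step, with sigma = 10 as in the study (the centring convention is our assumption; it is customary for kernel methods but not stated in the excerpt):

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=10.0):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def center_kernel(K):
    """Double-centre the Gram matrix (subtract row, column and grand means)."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

X = np.random.default_rng(0).standard_normal((6, 3))
K = center_kernel(gaussian_kernel_matrix(X, sigma=10.0))
print(np.round(K, 4))
```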

Provided the inverse of Φ exists, the solution w of the interpolation problem can be explicitly calculated, and has the form w = Φ⁻¹y. The most popular and widely used radial basis function is the Gaussian basis function... [Pg.425]
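The interpolation formula w = Φ⁻¹y translates directly into code; a minimal sketch with the Gaussian basis function (solving the linear system rather than forming the inverse explicitly):

```python
import numpy as np

def fit_rbf_interpolant(X, y, sigma=1.0):
    """Solve Phi w = y, where Phi[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    Phi = np.exp(-d2 / (2.0 * sigma**2))
    return np.linalg.solve(Phi, y)   # w = Phi^{-1} y, assuming Phi is invertible

def eval_rbf(X_centers, w, x_new, sigma=1.0):
    d2 = np.sum((X_centers - x_new) ** 2, axis=1)
    return np.dot(np.exp(-d2 / (2.0 * sigma**2)), w)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 3.0, 2.0])
w = fit_rbf_interpolant(X, y, sigma=0.8)
print(eval_rbf(X, w, np.array([1.0]), sigma=0.8))   # reproduces y[1] = 3.0
```

For the Gaussian basis function, Φ is symmetric positive definite whenever the data points are distinct, so the inverse in the formula above is guaranteed to exist.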

The examples shown in the table list the primitive Gaussians and the splitting schemes for the case of the lithium atom with added p character in the form of sp-hybrid and then dsp-hybrid character. Note the symbolism used in the labelling 6-31g, which identifies the core linear combination to be comprised of six primitive Gaussians, while the valence orbital representation, 6-31g, is a contraction to two linear combinations of three and one primitives. Then, the 6-31g* basis includes the extra polarization effect of one added d Gaussian. In basis set theory, to provide for the individual symmetry characters of the radial functions being modelled, it is customary to define six d functions: the normal set of five in atomic orbital theory and then an additional s-function as x² + y² + z². [Pg.54]

The transition metal ions, Cu⁺, Ag⁺ and Au⁺, all have a d¹⁰ (¹S) electronic state-configuration, with n = 3, 4 and 5, respectively. The RCEP used here were generated from Dirac-Fock (DF) all-electron (AE) relativistic atomic orbitals, and therefore implicitly include the indirect relativistic effects of the core electrons on the valence electrons, which in these metal ion systems are the major radial scaling effect. In these RCEP the ns and np subshells are included in the valence orbital space together with the nd, (n+1)s and (n+1)p atomic orbitals, and all must be adequately represented by basis functions. The need for such semi-core or semi-valence electrons to be treated explicitly together with the traditional valence orbitals for the heavier elements has been adequately documented. The Gaussian function basis set on each metal atom consists of the published 4sp3d distribution, which is double-zeta each in the nsp and (n+1)sp orbital space, and triple-zeta for the nd electrons. [Pg.4]

The Gaussian kernel is used in potential function classifiers, also known as radial basis function networks. A sigmoid kernel implements a multilayer perceptron (cf. Section 8.2) with a single hidden layer. [Pg.200]

