Big Chemical Encyclopedia


Function kernel

In Eq. (25), the integration kernel function K(x) is smooth everywhere except at the singularity point (x = 0). In a numerical analysis, the integration has to be evaluated in discrete form over a grid with mesh size h... [Pg.123]

The problem is, however, that the evaluation in Eq. (36) may introduce a considerable error due to the singularity of the kernel function. In such cases, a correction process has to be applied. Here a two-level grid system with H = 2h is used as an example to illustrate the correction approach. [Pg.123]

Radial basis function (RBF) networks are a variant of three-layer feed-forward networks (see Fig. 44.18). They contain a pass-through input layer, a hidden layer and an output layer, but use a different approach for modelling the data. The transfer function in the hidden layer of RBF networks is called the kernel or basis function; for a detailed description the reader is referred to references [62,63]. Each node in the hidden layer thus contains such a kernel function. The main difference between the transfer function in MLF networks and the kernel function in RBF networks is that the latter (usually a Gaussian function) defines an ellipsoid in the input space. Whereas the MLF network basically divides the input space into regions via hyperplanes (see e.g. Figs. 44.12c and d), RBF networks divide the input space into hyperspheres by means of kernel functions with specified widths and centres. This can be compared with the density or potential methods in pattern recognition (see Section 33.2.5). [Pg.681]

The output of a hidden unit, in the case of a Gaussian kernel function, is defined as ... [Pg.681]
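The defining equation is elided in this excerpt. As a hedged reconstruction, the standard Gaussian form for the output of hidden unit j, given an input vector x, centre c_j and width σ_j, would be

$$ o_j = \exp\!\left( -\frac{\lVert \mathbf{x} - \mathbf{c}_j \rVert^{2}}{2\sigma_j^{2}} \right) $$

so that the activation is 1 when x coincides with the centre and decays radially with distance.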

The output of these hidden nodes, o_i, is then forwarded to all output nodes through weighted connections. The output y_j of these nodes consists of a linear combination of the kernel functions ... [Pg.682]
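The linear combination itself is also elided; in the usual RBF formulation it would read

$$ y_j = \sum_i w_{ij}\, o_i $$

where the w_{ij} are the weights of the connections from hidden node i to output node j (a bias term may also be included; that detail is an assumption here).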

The centroids represent the positions of the Gaussian kernels and are, in this example, positioned at the same places as the objects in the input space. The width factors do not change during training; the weights of each kernel function are obtained by training. [Pg.683]
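To make this training recipe concrete, here is a minimal Python sketch of such an RBF network: Gaussian kernels centred on the training objects, fixed widths, and output weights obtained by linear least squares. The names (rbf_design, sigma, the toy data) are illustrative, not from the source.

```python
import numpy as np

def rbf_design(X, centres, sigma):
    """Hidden-layer outputs: one Gaussian kernel per centre."""
    # squared Euclidean distance between every input and every centre
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# toy data: learn y = sin(x) from 20 noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(20)

centres = X        # kernels centred on the training objects
sigma = 0.5        # fixed width; not adapted during training

H = rbf_design(X, centres, sigma)          # hidden-layer activations
w, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares

y_hat = H @ w
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```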

In order to use the notional particles to estimate f(x), we need a method to identify a finite sample of notional particles in the neighborhood of x on which to base our estimate. In transported PDF codes, this can be done by introducing a kernel function hW(s) centered at s = 0 with bandwidth W. For example, a so-called constant kernel function (Wand and Jones 1995) can be employed ... [Pg.320]
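The kernel itself is elided here; the constant (boxcar) kernel in the sense of Wand and Jones has the standard form

$$ h_W(s) = \frac{1}{2W}\,\mathbf{1}\{\lvert s\rvert \le W\} $$

which gives equal weight to every notional particle within distance W of the estimation point and integrates to one.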

For simplicity, we will let W = 1/(2M), i.e., the bandwidth is inversely proportional to the number of grid cells. Another alternative is to use the M grid cells to define the kernel function ... [Pg.320]
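This grid-cell kernel is not shown in the excerpt. Presumably it assigns equal weight to all particles lying in the same cell as the estimation point; on a uniform grid over [0, 1] with M cells of width 2W = 1/M, a hedged reconstruction is

$$ h_M(x, x') = \begin{cases} M & \text{if } x \text{ and } x' \text{ lie in the same grid cell} \\ 0 & \text{otherwise.} \end{cases} $$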

Note that since the cell size is equal to 2W, the bandwidth for the grid-cell kernel is the same as for hW. Other widely used kernel functions are described in Chapter 7. [Pg.320]

Using the kernel function, we can define the number of notional particles at point x by... [Pg.320]
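As a hedged illustration of how such a kernel is used in practice, the sketch below counts the notional particles near a point x and forms a kernel-weighted local estimate with the constant kernel; the helper names and toy data are hypothetical.

```python
import numpy as np

def constant_kernel(s, W):
    """Boxcar kernel h_W(s): weight 1/(2W) for |s| <= W, else 0."""
    return np.where(np.abs(s) <= W, 1.0 / (2.0 * W), 0.0)

# toy example: notional particles on [0, 1] carrying a scalar value
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, 1000)
val = np.sin(2 * np.pi * pos) + 0.1 * rng.standard_normal(1000)

M = 10               # number of grid cells
W = 1.0 / (2 * M)    # bandwidth, chosen as in the text

x = 0.25
w = constant_kernel(pos - x, W)
n_local = np.count_nonzero(w)             # particles inside the bandwidth
mean_local = (w * val).sum() / w.sum()    # kernel-weighted local estimate
print(n_local, mean_local)                # mean_local ~ sin(pi/2) = 1
```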

Using the spatial grid kernel function, (6.207), the estimated histogram is given by... [Pg.326]

One of the simplest estimation techniques is to use a kernel function hW(x) (see, for example, (6.206), p. 301). However, care must be taken in choosing the form of the kernel function in order to ensure that desirable physical constraints are not violated. For example, with unit-weight particles, the requirement that the mixing model in (7.28)... [Pg.367]

In general, very few kernel functions will satisfy this condition. For example, it is easily shown that the grid-cell kernel... [Pg.368]

The kernel K(x, x - y) must satisfy this constraint for any integrable function φ. This is just the definition of the Dirac delta function: K(x, x - y) = δ(x - y). Note that, in this limit, the kernel function is equivalent to the filter function used in LES. As is well known in LES, filtering a function twice leads to different results unless the integral condition given above is satisfied. [Pg.368]
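The constraint referred to is not shown in this excerpt; from the surrounding discussion it is presumably the reproducing condition

$$ \int K(x, x - y)\,\varphi(y)\,\mathrm{d}y = \varphi(x) \quad \text{for all integrable } \varphi $$

which can hold for every such φ only if K(x, x - y) = δ(x - y), and which is exactly the condition under which filtering twice gives the same result as filtering once.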

The kernel function could also be made simpler by using... [Pg.313]

The critical step of the inverse problem is the projection of the unknown function onto the basis formed by the kernel functions. This basis is not orthogonal and, because the number of kernel functions is finite, cannot describe the unknown function perfectly. For that reason, we write the unknown function as the sum of its projections, with components u_j along the jth kernel functions, plus an orthogonal complement belonging to the null-space of the kernel functions... [Pg.314]
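In symbols (a hedged reconstruction, since the notation in this excerpt is garbled), the decomposition would read

$$ f(x) = \sum_j u_j\,K_j(x) + f^{\perp}(x) $$

where the u_j are the components along the kernel functions K_j and f⊥ lies in their null-space, i.e. the part of the unknown function that the finite, non-orthogonal kernel basis cannot represent.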

Here, K(x, m_j, s_j) are the kernel functions with prototypes m_j and scale parameters s_j. For example, if the kernel function is the standard normal density function... [Pg.183]
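The density itself is elided here; for a standard normal density kernel with prototype m_j and scale s_j it would take the familiar form

$$ K(x, m_j, s_j) = \frac{1}{\sqrt{2\pi}\,s_j}\,\exp\!\left( -\frac{(x - m_j)^2}{2 s_j^{2}} \right) $$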

FIGURE 4.28 Visualization of kernel regression with two Gaussian kernels. The point sizes reflect the influence on the regression model. The point sizes in the left plot are for the solid kernel function, those in the right plot are for the dashed kernel function. [Pg.184]

There are various procedures to estimate the unknown regression parameters and the parameters of the kernel functions. One approach is to estimate the prototypes m_j and scale parameters s_j separately by clustering methods, and then to estimate the regression parameters; however, this approach does not incorporate information from the y-variable. Another approach is to use optimization techniques to minimize the RSS of the residuals y_i - f(x_i) obtained via Equation 4.95, for i = 1, ..., n. [Pg.184]
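A minimal Python sketch of the first (clustering-then-regression) approach follows; the data, the choice of KMeans as the clustering method, and the within-cluster standard deviation as the scale estimate are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

# step 1: prototypes m_j and scales s_j from clustering (ignores y)
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(x.reshape(-1, 1))
m = km.cluster_centers_.ravel()
s = np.array([x[km.labels_ == j].std() + 1e-6 for j in range(k)])

# step 2: regression weights by minimizing the RSS (linear least squares)
K = np.exp(-(x[:, None] - m[None, :]) ** 2 / (2 * s[None, :] ** 2))
b, *_ = np.linalg.lstsq(K, y, rcond=None)

y_hat = K @ b
print("RSS:", np.sum((y - y_hat) ** 2))
```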

An algorithm for computing the decision boundary thus requires the choice of the kernel function; frequently chosen are radial basis functions (RBFs). A further input parameter is the priority of the size constraint for the slack variables used in the optimization problem (Equation 5.38). This constraint is controlled by a parameter that is often denoted by γ. A large value of γ forces the slack variables to be small, which can lead to an overfit and to a wiggly... [Pg.241]

In the following example, the effect of these parameter choices will be demonstrated. We use the same example as in Section 5.5 with two overlapping groups. Figure 5.20 shows the resulting decision boundaries for different kernel functions... [Pg.241]

The most important parameter choices for SVMs (Section 5.6) are the specification of the kernel function and the parameter γ controlling the priority of the size constraint on the slack variables (see Section 5.6). We selected RBFs for the kernel because they are fast to compute. Figure 5.27 shows the misclassification errors for varying values of γ, using the evaluation scheme described above for k-NN classification. The choice of γ = 0.1 is optimal, and it leads to a test error of 0.34. [Pg.252]
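As a hedged illustration of such a γ scan (the data set and the grid of values are invented, and the book's slack-penalty parameter γ corresponds to the cost parameter C in scikit-learn's SVC, not to SVC's own gamma argument, which sets the RBF width):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# two overlapping groups (synthetic stand-in for the book's example)
X, y = make_blobs(n_samples=200, centers=2, cluster_std=3.0, random_state=0)

# scan the slack-penalty parameter (book: gamma; scikit-learn: C)
for g in [0.01, 0.1, 1.0, 10.0]:
    clf = SVC(kernel="rbf", C=g)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"gamma = {g:5.2f}   misclassification error = {1 - acc:.3f}")
```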

Froehlich, H., Wegner, J.K., Sieker, F. and Zell, A. (2006) Kernel functions for attributed molecular graphs - a new similarity-based approach to ADME prediction in classification and regression. QSAR & Combinatorial Science, 25, 317-326. [Pg.40]

