Linearly separable data

Fig. 3.5 (a) A schematic of possible hyperplanes for linearly separable data; (b) the optimum hyperplane located by SVM and the corresponding support vectors... [Pg.138]

Figure 3 (a) A linearly separable data set. (b) A nonlinearly separable data set. (c) Creation of a new category to make the data set linearly separable. [Pg.69]

Artificial neural networks (ANNs) are good at classifying non-linearly separable data. There are at least 30 different types of ANNs, including multilayer perceptron, radial basis functions, self-organizing maps, adaptive resonance theory networks and time-delay neural networks. Indeed, the majority of ATI applications discussed later employ ANNs - most commonly, MLP (multilayer perceptron), RBF (radial basis function) or SOM (self-organizing map). A detailed treatise of neural networks for ATI is beyond the scope of this chapter and the reader is referred to the excellent introduction to ANNs in Haykin (1994) and neural networks applied to pattern recognition in Looney (1997) and Bishop (2000). Classifiers for practical ATI systems are also described in other chapters of this volume. [Pg.90]
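The classic XOR problem gives a quick feel for why MLPs handle non-linearly separable data: no single hyperplane separates the four XOR points, yet a network with one small hidden layer does. The sketch below uses scikit-learn's MLPClassifier; the library and all parameter values are illustrative assumptions, not anything prescribed in the text.

```python
# Minimal sketch: a small MLP learns XOR, which is not linearly
# separable. scikit-learn and the parameter values are assumptions
# made for illustration; the chapter does not prescribe a toolkit.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR labels: no single hyperplane separates them

mlp = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0)
mlp.fit(X, y)
print(mlp.predict(X))  # typically reproduces [0 1 1 0]
```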

Furthermore, the pattern structures in a representation space formed from raw input data are not necessarily linearly separable. A central issue, then, is feature extraction to transform the representation of observable features into some new representation in which the pattern classes are linearly separable. Since many practical problems are not linearly separable (Minsky and Papert, 1969), use of linear discriminant methods is especially dependent on feature extraction. [Pg.51]
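A toy example makes the point: two concentric rings are not linearly separable in the raw (x1, x2) coordinates, but extracting the squared radius as a new feature makes a single threshold (a linear cut in the new representation) sufficient. The data generation and feature map below are illustrative assumptions.

```python
# Illustrative sketch: concentric rings are not linearly separable in
# (x1, x2), but the derived feature r^2 = x1^2 + x2^2 separates them
# with one threshold (a linear cut in the new representation).
import numpy as np

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.concatenate([rng.normal(1.0, 0.1, 100),   # class 0: inner ring
                        rng.normal(3.0, 0.1, 100)])  # class 1: outer ring
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = np.repeat([0, 1], 100)

r2 = (X ** 2).sum(axis=1)          # extracted feature: squared radius
pred = (r2 > 4.0).astype(int)      # threshold between the two rings
print((pred == y).mean())          # expect accuracy close to 1.0
```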

The KNN method has several advantages aside from its relative simplicity. It can be used in cases where few calibration data are available, and can even be used if only a single calibration sample is available for some classes. In addition, it does not assume that the classes are separated by linear partitions in the space. As a result, it can be rather effective at handling highly non-linear separation structures. [Pg.290]
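As a minimal sketch of that last point, a k-nearest-neighbor classifier handles the strongly curved boundary of the two-moons data without any linearity assumption; the data set, k = 5, and scikit-learn itself are illustrative choices, not taken from the text.

```python
# Sketch: KNN copes with a highly non-linear class boundary because it
# makes no assumption of linear partitions. All settings are assumptions.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_moons(n_samples=400, noise=0.15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_tr, y_tr)
print(knn.score(X_te, y_te))  # typically > 0.95 on this toy problem
```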

SVMs are an outgrowth of kernel methods. In such methods, the data are transformed with a kernel function (such as a radial basis function), and it is in this mathematical space that the model is built. Care is taken in the construction of the kernel that it has a sufficiently high dimensionality that the data become linearly separable within it. A critical subset of transformed data points, the 'support vectors', are then used to specify a hyperplane called a large-margin discriminator that effectively serves as a linear model within this non-linear space. An introductory exploration of SVMs is provided by Cristianini and Shawe-Taylor, and a thorough examination of their mathematical basis is presented by Schölkopf and Smola. ... [Pg.368]
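A minimal sketch of this idea, assuming scikit-learn's SVC: an RBF-kernel SVM separates two concentric classes that no straight line can, and only a subset of the training points end up as support vectors. The data and parameters are illustrative.

```python
# Sketch of the kernel idea: an RBF-kernel SVM separates classes that
# no single line can, and only some points become support vectors.
# Library, kernel parameters and data are illustrative assumptions.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.3, noise=0.08, random_state=0)

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
svm.fit(X, y)
print(svm.score(X, y))                 # near-perfect on this toy set
print(svm.support_vectors_.shape[0])   # only a subset of points are SVs
```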

When using a linear method, such as LDA, the underlying assumption is that the two classes are linearly separable. This, of course, is generally not true. If linear separability is not possible, then with enough samples, the more powerful quadratic discriminant analysis (QDA) works better, because it allows the hypersurface that separates the classes to be curved (quadratic). Unfortunately, the clinical reality of small-sized data sets denies us this choice. [Pg.105]
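A sketch of the contrast, under the assumption of scikit-learn and synthetic data: the two classes below share a mean but differ in covariance, so the optimal boundary is curved and QDA clearly outperforms LDA.

```python
# Sketch contrasting LDA (linear boundary) with QDA (quadratic
# boundary) on classes that differ in covariance rather than mean,
# so the separating surface is curved. All settings are illustrative.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], 300)
X1 = rng.multivariate_normal([0, 0], [[6.0, 0.0], [0.0, 0.2]], 300)
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 300)

for model in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    print(type(model).__name__, model.fit(X, y).score(X, y))
# QDA typically scores noticeably higher here, because the classes
# differ in covariance rather than in mean.
```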

Feature Selection. Once a data set has been established to be linearly separable, variance feature selection (50) can be used to discard the least useful descriptors and thereby focus on the more useful ones. Several routines are implemented... [Pg.119]
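scikit-learn's VarianceThreshold can stand in for the variance feature selection routines cited above (an illustrative substitution, not the routines from reference 50): descriptors whose variance falls below a chosen threshold are discarded.

```python
# Sketch of variance-based feature selection: low-variance descriptors
# are discarded. VarianceThreshold and the 0.1 cutoff are assumptions
# standing in for the routines cited in the text.
import numpy as np
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0, 1.0, 100),    # informative descriptor (high variance)
    rng.normal(0, 0.01, 100),   # near-constant descriptor (low variance)
    rng.normal(0, 2.0, 100),    # informative descriptor (high variance)
])

selector = VarianceThreshold(threshold=0.1)
X_reduced = selector.fit_transform(X)
print(selector.get_support())   # [ True False  True]
print(X_reduced.shape)          # (100, 2)
```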

Figure 5 Contour plots of two groups of bivariate data with each group having identical variance-covariance matrices. Such groups are linearly separable.
Figure 12 A simple two-group, bivariate data set that is not linearly separable by a single function. The lines shown are the linear classifiers from the two units in the first layer of the multilayer system shown in Figure 13.
As illustrated in Figure 5, SVMs work by constructing a hyper-plane in a higher-dimensional feature space of the input data [91], and use the hyper-plane (represented as H1 and H2) to enforce a linear separation of input samples, which belong to different classes (represented as Class O and Class X). The samples that lie on boundaries of different classes are referred to as support vectors. The underlying principle behind SVM-based classification is to maximize the margin between the support vectors using kernel functions. In the case of... [Pg.580]
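The margin-maximization principle mentioned above is usually written as the hard-margin optimization problem below; this is the standard textbook formulation, not a formula quoted from this excerpt.

```latex
% Hard-margin SVM: maximize the margin 2/||w|| by minimizing ||w||^2,
% subject to every sample lying on the correct side of the hyperplane.
\min_{\mathbf{w},\,b} \; \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2}
\quad \text{subject to} \quad
y_i \left( \mathbf{w}^{\top}\mathbf{x}_i + b \right) \ge 1,
\qquad i = 1, \dots, n
```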

FIGURE 23.5 Schematic diagram showing the difference between linearly separable activity classes (a) and an embedded (non-linear) structure (b). For example, the active compounds (yellow) in (a) tend to have higher values of PCI and lower values of PC2 than the inactives (blue). While in (b), activity only occurs within a limited range of values of both PCI and PC2 and compounds outside this region are inactive. Data in (a) could be classified by LDA, while the data in (b) could be analyzed using SIMCA for example. [Pg.499]

In both the dual solution and the decision function, only the inner product in the attribute space and the kernel function based on attributes appear, not the elements of the very high-dimensional feature space. The constraints in the dual solution imply that only the attributes closest to the hyperplane, the so-called SVs, are involved in the expressions for the weights w. Data points that are not SVs have no influence, and slight variations in them (for example, caused by noise) will not affect the solution; this provides quantitative leverage against noise in the data that may prevent linear separation in feature space [42]. Imposing the requirement that the kernel satisfies Mercer's conditions (the kernel matrix K(x_i, x_j) must be positive semi-definite)...
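For reference, the dual problem being paraphrased has the following standard soft-margin form (standard SVM theory, not quoted from the excerpt); note that only kernel evaluations K(x_i, x_j) appear, never feature-space coordinates, and points with α_i = 0 drop out entirely.

```latex
% Soft-margin SVM dual: only kernel values K(x_i, x_j) appear, and
% training points with alpha_i = 0 (the non-SVs) have no influence.
\max_{\boldsymbol{\alpha}} \; \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
      \alpha_i \alpha_j \, y_i y_j \, K(\mathbf{x}_i, \mathbf{x}_j)
\quad \text{subject to} \quad
0 \le \alpha_i \le C, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0
```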

Linear SVM Classifiers. When the data set is linearly separable, the decision function f(x) = y(x) to separate the classes is given by ... [Pg.315]
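The excerpt truncates the formula; in standard notation (supplied here for completeness, not quoted from the source) the linear decision function is:

```latex
% Standard linear SVM decision function: classify a sample by the sign
% of its signed distance to the separating hyperplane w^T x + b = 0.
f(\mathbf{x}) = \operatorname{sign}\!\left( \mathbf{w}^{\top}\mathbf{x} + b \right)
```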

Nonlinear SVM Classifiers. For nonlinear classification problems, the basic SVM idea is to project the samples of the data set, initially defined in a d-dimensional space, into another space of higher dimension e (d < e), where the samples can then be separated linearly (Fig. 13.16) (34). [Pg.316]
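In that higher-dimensional space, the decision function takes the familiar kernelized form below (standard SVM theory, stated for completeness); the projection itself never has to be computed, only the kernel, which is the essence of the kernel trick.

```latex
% Kernelized decision function: the sum runs over the support vectors
% only, and the high-dimensional projection enters solely through K.
f(\mathbf{x}) = \operatorname{sign}\!\left(
  \sum_{i \in \mathrm{SV}} \alpha_i \, y_i \, K(\mathbf{x}_i, \mathbf{x}) + b \right)
```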

Nonlinear mixed effects models are similar to linear mixed effects models, the difference being that the function under consideration f(x, θ) is nonlinear in the model parameters θ. Population pharmacokinetics (PopPK) is the study of pharmacokinetics in the population of interest; instead of modeling data from each individual separately, data from all individuals are modeled simultaneously. To account for the different levels of variability (between-subject, within-subject, interoccasion, residual, etc.), nonlinear mixed effects models are used. For the remainder of the chapter, the term PopPK will be used synonymously with nonlinear mixed effects models, even though the latter covers a richer class of models and data types. Along with PopPK is population pharmacodynamics (PopPD), which is the study of a drug's effect in the population of interest. Often PopPK and PopPD are combined into a single PopPK-PD analysis. [Pg.205]
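As a concrete sketch of the notation (a standard mixed-effects formulation, not quoted from the text): observation j on individual i is modeled with individual parameters that scatter around a population mean,

```latex
% Nonlinear mixed-effects model: f is nonlinear in the individual
% parameters theta_i, which vary around the population mean theta.
y_{ij} = f(x_{ij}, \theta_i) + \varepsilon_{ij}, \qquad
\theta_i = \theta + \eta_i, \qquad
\eta_i \sim \mathcal{N}(\mathbf{0}, \Omega), \quad
\varepsilon_{ij} \sim \mathcal{N}(0, \sigma^{2})
```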

