Big Chemical Encyclopedia


Linear classification

As an extension of perceptron-like networks, MLF networks can be used for non-linear classification tasks. However, they can also be used to model complex non-linear relationships between two related series of data: the descriptor or independent variables (X matrix) and their associated predictor or dependent variables (Y matrix). Used in this way, they are an alternative to other numerical non-linear methods. Each row of the X-data table corresponds to an input or descriptor pattern; the corresponding row in the Y matrix is the associated desired output or solution pattern. A detailed description can be found in Refs. [9,10,12-18]. [Pg.662]
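The mapping from an input (descriptor) pattern to an output (solution) pattern can be sketched as a forward pass through a small MLF network. This is a minimal illustration only: the layer sizes, weight values, and sigmoid activation below are assumptions for the example, not taken from the cited references.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def layer(x, weights, biases):
    """One fully connected layer with sigmoid activation."""
    return [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
            for ws, b in zip(weights, biases)]

def mlf_forward(x, layers):
    """Pass an input (descriptor) pattern through each layer in turn."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Illustrative architecture: 2 inputs -> 3 hidden units -> 1 output
hidden = ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1])
output = ([[1.0, -1.0, 0.5]], [0.2])
y = mlf_forward([0.7, 0.3], [hidden, output])
print(y)  # a single output value between 0 and 1
```

With a training procedure (e.g. backpropagation, not shown) the weights would be fitted so that each row of X maps to the corresponding row of Y.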

Two groups of objects can be separated by a decision surface (defined by a discriminant variable). Methods using a decision plane, and thus a linear discriminant variable (corresponding to a linear latent variable as described in Section 2.6), are LDA, PLS, and LR (Section 5.2.3). Nonlinear methods, such as classification trees (CART, Section 5.4), SVMs (Section 5.6), or ANNs (Section 5.5), should be applied only if the linear classification methods give insufficient prediction performance. [Pg.261]
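The idea of a linear discriminant variable can be sketched in a few lines: project each object onto a direction w and threshold the resulting latent variable t = w·x. The sketch below uses only the difference of class means as w, a deliberately simplified stand-in for full LDA (which would also use the pooled covariance matrix); the two point clouds are invented.

```python
# Simplified linear discriminant: project onto the direction between the
# class means and classify by thresholding the latent variable t = w.x.
# NOTE: full LDA also involves the pooled covariance; this is a sketch.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def discriminant(x, w, threshold):
    score = sum(wi * xi for wi, xi in zip(w, x))  # latent variable t = w.x
    return 1 if score > threshold else 2

class1 = [(1.0, 2.0), (1.5, 1.8), (1.2, 2.2)]  # invented group 1
class2 = [(3.0, 0.5), (3.5, 0.8), (2.8, 0.2)]  # invented group 2
m1, m2 = mean(class1), mean(class2)
w = [a - b for a, b in zip(m1, m2)]            # direction between class means
t = sum(wi * (a + b) / 2 for wi, a, b in zip(w, m1, m2))  # midpoint threshold
preds = [discriminant(x, w, t) for x in class1 + class2]
print(preds)  # [1, 1, 1, 2, 2, 2] for these well-separated groups
```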

Support vector machines (SVMs) generate either linear or nonlinear classifiers depending on the so-called kernel [149]. The kernel performs an implicit transformation of the data into an arbitrarily high-dimensional feature space, where linear classification corresponds to a nonlinear classifier in the original space in which the input data live. SVMs are a relatively recent machine learning method that has received much attention because of its superior performance on a number of hard problems [150]. [Pg.75]
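In practice the kernel enters training as a matrix of pairwise similarities (the Gram matrix), which is all an SVM solver needs from the data. The sketch below shows this for the Gaussian (RBF) kernel, one common choice; the kernel width gamma and the toy points are illustrative assumptions.

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """k(x, z) = exp(-gamma * ||x - z||^2): an inner product in an
    infinite-dimensional feature space that is never built explicitly."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

def gram_matrix(X, kernel):
    """K[i][j] = k(X[i], X[j]); a linear classifier trained on K acts as a
    nonlinear classifier in the original input space."""
    return [[kernel(xi, xj) for xj in X] for xi in X]

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # toy data
K = gram_matrix(X, rbf_kernel)
print(K[0][0])  # 1.0: every point has unit similarity to itself
```

The matrix is symmetric with ones on the diagonal, and off-diagonal entries decay with the distance between points.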

Because of their multiple receptor actions, which occur at different concentrations, different neuroleptics have different action profiles. There are many classifications of neuroleptic drugs, the least useful of which is probably the one based on chemical structure. Other classifications include linear ones based on the propensity to cause EPS, and multidimensional ones such as the Liege star, which combines information on three positive effects (anti-autistic, antiproductive, antipsychotic) and three negative ones (hypotensive, extrapyramidal, sedative). In general, the more sedative neuroleptics such as levomepromazine, used more to treat acute agitation states, cause more hypotension related to alpha blockade, whereas those that act best on delirium (productive states), such as haloperidol, tend to cause more EPS. [Pg.678]

Hammett σ, F, R (18), and molar refractivity. The sum over substituted positions for each of these parameters was used for multiply substituted compounds. A set of linear classification functions in the summation of R was found to be statistically significant at the 5% level, but the classification of these 22 compounds was only 73 percent correct, missing 3 of the active set. [Pg.183]

Using discriminant analysis, the following non-linear classification functions were generated in Sσ space, also allowing discrimination between the active and inactive sets. However, the parabolic functions in Sσ were no more statistically significant than the functions in SR, and classification was still only 73 percent correct, missing two of the active set (19). [Pg.184]

As an approximation to the Bayes rule, the linear discriminant function provides the basis for the most common of the statistical classification schemes, but there has been much work devoted to the development of simpler linear classification rules. One such method, which has featured extensively in spectroscopic pattern recognition studies, is the perceptron algorithm. [Pg.148]
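The perceptron algorithm mentioned above can be sketched in a few lines: it maintains a weight vector and bias, and nudges them toward any misclassified pattern until the two classes are separated. The training points, labels, and epoch limit below are illustrative, not from a spectroscopic data set.

```python
# Minimal perceptron, a sketch of the simple linear classification rule
# described in the text. Labels are +1 / -1; data are invented.

def train_perceptron(points, labels, epochs=100):
    """Learn w and b such that sign(w.x + b) matches the labels."""
    n = len(points[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(points, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                  # misclassified: nudge the boundary
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:                    # converged: classes separated
            break
    return w, b

# Two linearly separable point clouds
X = [(1.0, 1.0), (2.0, 1.5), (-1.0, -1.0), (-2.0, -0.5)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
print(preds)  # [1, 1, -1, -1]
```

Convergence is guaranteed only when the classes are linearly separable, which is exactly the limitation that motivates the nonlinear methods discussed elsewhere on this page.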

Fig. 1.2. Linear classification is easier in higher-dimensional spaces. In 2D (left) it is impossible to find a linear subspace (a straight line) dividing the grey and black dots. After a nonlinear projection into the higher-dimensional 3D space, it is easy to find such a linear subspace (a plane).
Linear/non-linear separation boundaries. Here our attention is focused on the mathematical form of the decision boundary. Typical non-linear classification techniques are based on ANNs and SVMs, which are specially suited to classification problems of a non-linear nature. Remarkably, the CAIMAN method seems not to suffer from non-linear class-separability problems. [Pg.31]

FIGURE 27. Piecewise-linear classification. The two-modal class (+) is represented by two prototypes w and W. ... [Pg.56]

Linear classification problem. Suppose that we are given a set of training data... [Pg.24]

FIGURE 2 Examples of (A) linear and (B) non-linear classification problems. [Pg.190]

The separation surface may be nonlinear in many classification problems, but support vector machines can be extended to handle nonlinear separation surfaces by using feature functions φ(x). The SVM extension to nonlinear datasets is based on mapping the input variables into a feature space of higher dimension (a Hilbert space of finite or infinite dimension) and then performing a linear classification in that higher-dimensional space. For example, consider the set of nonlinearly separable patterns in Figure 28, left. It is... [Pg.323]
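The effect of an explicit feature function φ(x) can be shown on the classic circular pattern: points inside and outside a circle are not linearly separable in 2D, but after the mapping φ(x) = (x1, x2, x1² + x2²) a plane in the third coordinate separates them. The specific map and toy points below are illustrative assumptions, not the figure's actual data.

```python
# Sketch of the feature-map idea: a nonlinear boundary (a circle) in 2D
# becomes a linear boundary (a plane) in the mapped 3D feature space.

def phi(x):
    """Map 2D input to 3D feature space: (x1, x2, x1^2 + x2^2)."""
    x1, x2 = x
    return (x1, x2, x1 * x1 + x2 * x2)

inner = [(0.5, 0.0), (0.0, 0.5), (-0.3, 0.3)]   # class A: inside unit circle
outer = [(2.0, 0.0), (0.0, -2.0), (1.5, 1.5)]   # class B: outside

def linear_rule(z):
    """A linear decision in feature space: threshold the third coordinate."""
    return "A" if z[2] < 1.0 else "B"

inner_preds = [linear_rule(phi(p)) for p in inner]
outer_preds = [linear_rule(phi(p)) for p in outer]
print(inner_preds)  # ['A', 'A', 'A']
print(outer_preds)  # ['B', 'B', 'B']
```

The kernel trick lets an SVM obtain the same effect without ever constructing φ(x) explicitly, which is what makes infinite-dimensional feature spaces usable.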







© 2024 chempedia.info