
Supervised learning

The underlying learning process can follow different concepts. The two major learning strategies are unsupervised learning and supervised learning. [Pg.441]
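To make the distinction concrete, the sketch below (not from the source; it uses scikit-learn purely for illustration) contrasts the two strategies: the supervised model is fitted with known class labels y, while the unsupervised method sees only the feature matrix X.

```python
# Minimal illustration: supervised learning uses known targets, unsupervised does not.
from sklearn.cluster import KMeans                    # unsupervised method
from sklearn.linear_model import LogisticRegression   # supervised method

X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]  # features (analytical data)
y = [0, 1, 0, 1]                                      # known class memberships

clf = LogisticRegression().fit(X, y)         # supervised: the labels y guide the fit
km = KMeans(n_clusters=2, n_init=10).fit(X)  # unsupervised: y is never consulted
```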

Kohonen networks, also known as self-organizing maps (SOMs), belong to the large group of methods called artificial neural networks. Artificial neural networks (ANNs) are techniques which process information in a way that is motivated by the functionality of biological nervous systems. For a more detailed description see Section 9.5. [Pg.441]

In Part II of this book we have encountered three network architectures that require supervised learning: perceptrons, multilayer perceptrons and radial basis function networks. Training for perceptrons and multilayer perceptrons is similar. The goal of [Pg.51]
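As a concrete illustration of the kind of training these architectures share, here is a minimal perceptron learning loop (an invented toy example, not taken from the book): the weights are adjusted whenever the thresholded output disagrees with the known target.

```python
import numpy as np

# Toy perceptron trained on the logical AND function (illustrative sketch).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1])           # known targets -- this is what makes it supervised
w = np.zeros(2); b = 0.0; lr = 0.1   # weights, bias, learning rate

for epoch in range(20):
    for x_i, t_i in zip(X, t):
        y_i = 1 if w @ x_i + b > 0 else 0   # thresholded output
        w += lr * (t_i - y_i) * x_i         # perceptron weight update
        b += lr * (t_i - y_i)
```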


Table 9-1 summarizes common methods for unsupervised and supervised learning. [Pg.442]

Figure 9-17. Outline of the procedure for supervised learning. The output of the network is compared with the target value or vector, which yields the error. The weights of the network are then adapted to reduce this error.
A counter-propagation network is a method for supervised learning which can be used for prediction. It has a two-layer architecture where each neuron in the upper layer, the Kohonen layer, has a corresponding neuron in the lower layer, the output layer (see Figure 9-21). A trained counter-propagation network can be used as a look-up table: a neuron in one layer is used as a pointer to the other layer. [Pg.459]

A counter-propagation neural network is a method for supervised learning which can be used for predictions. [Pg.481]
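The look-up-table behaviour described above can be sketched in a few lines (a hedged illustration with invented toy weights; kohonen_w and output_w stand in for the already-trained layers):

```python
import numpy as np

# Using a trained counter-propagation network as a look-up table (sketch).
def cpn_predict(x, kohonen_w, output_w):
    winner = np.argmin(np.linalg.norm(kohonen_w - x, axis=1))  # winning Kohonen neuron
    return output_w[winner]           # its corresponding neuron in the output layer

kohonen_w = np.array([[0.0, 0.0], [1.0, 1.0]])  # toy "trained" Kohonen weights (assumed)
output_w = np.array([[0.2], [0.9]])             # matching output-layer vectors (assumed)
print(cpn_predict(np.array([0.9, 0.8]), kohonen_w, output_w))  # -> [0.9]
```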

Multiple linear regression is strictly a parametric supervised learning technique. A parametric technique is one which assumes that the variables conform to some distribution (often the Gaussian distribution); the properties of the distribution are assumed in the underlying statistical method. A non-parametric technique does not rely upon the assumption of any particular distribution. A supervised learning method is one which uses information about the dependent variable to derive the model; an unsupervised learning method does not. Thus cluster analysis, principal components analysis and factor analysis are all examples of unsupervised learning techniques. [Pg.719]
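A minimal numerical sketch of multiple linear regression (invented data, fitted by ordinary least squares with NumPy) makes the supervised aspect visible: the known y values drive the estimation of the coefficients.

```python
import numpy as np

# Multiple linear regression by ordinary least squares (illustrative sketch).
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 2.0]])  # independent variables
y = np.array([3.1, 2.9, 6.2, 5.8])      # dependent variable -- known, hence supervised
Xb = np.hstack([np.ones((len(X), 1)), X])      # prepend an intercept column
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)  # coefficients b0, b1, b2
print(beta)
```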

Discriminant analysis is a supervised learning technique which uses classified dependent data. Here, the dependent data (y values) are not on a continuous scale but are divided into distinct classes. There are often just two classes (e.g. active/inactive, soluble/not soluble, yes/no), but more than two are also possible (e.g. high/medium/low, 1/2/3/4). The simplest situation involves two variables and two classes, and the aim is to find a straight line that best separates the data into its classes (Figure 12.37). With more than two variables, the line becomes a hyperplane in the multidimensional variable space. Discriminant analysis is characterised by a discriminant function, which in the particular case of linear discriminant analysis (the most popular variant) is written as a linear combination of the independent variables ... [Pg.719]
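The snippet breaks off before the equation itself; the standard textbook form of a linear discriminant function (supplied here for completeness, not quoted from the source) is

```latex
W = c_0 + c_1 x_1 + c_2 x_2 + \cdots + c_n x_n
```

where the $x_i$ are the independent variables and the $c_i$ are coefficients chosen during training so that the value of $W$ best separates the classes; a new sample is then assigned to a class according to which side of a decision threshold its $W$ value falls.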

Supervised Learning. Supervised learning refers to a collection of techniques in which a priori knowledge about the category membership of a set of samples is used to develop a classification rule. The purpose of the rule is usually to predict the category membership for new samples. Sometimes the objective is simply to test the classification hypothesis by evaluating the performance of the rule on the data set. [Pg.424]

The Back-Propagation Algorithm (BPA) is a supervised learning method for training ANNs, and is one of the most common training techniques. It uses a gradient-descent optimization method, also referred to as the delta rule when applied to feedforward networks. A feedforward network that has employed the delta rule for training is called a Multi-Layer Perceptron (MLP). [Pg.351]
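Below is a hedged sketch of the delta rule for a single sigmoid output unit, with invented data; full back-propagation repeats this error term layer by layer backwards through the hidden units.

```python
import numpy as np

# Delta rule (gradient descent on squared error) for one sigmoid unit -- sketch.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
lr = 0.5
data = [(np.array([1.0, 0.0, 1.0]), 1.0), (np.array([1.0, 1.0, 0.0]), 0.0)]
for epoch in range(200):
    for x, t in data:
        y = sigmoid(w @ x)
        delta = (t - y) * y * (1 - y)   # the "delta": error scaled by the sigmoid slope
        w += lr * delta * x             # gradient-descent weight update
```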

The knowledge required to implement Bayes' formula is daunting in that the a priori as well as the class-conditional probabilities must be known. Some reduction in requirements can be accomplished by using joint probability distributions in place of the a priori and class-conditional probabilities. Even with this simplification, few interpretation problems are so well posed that the information needed is available. It is possible to employ the Bayesian approach by estimating the unknown probabilities and probability density functions from exemplar patterns that are believed to be representative of the problem under investigation. This approach, however, implies supervised learning, where the correct class label for each exemplar is known. The ability to perform data interpretation is determined by the quality of the estimates of the underlying probability distributions. [Pg.57]
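For reference, the Bayes formula the passage refers to is the standard posterior-probability rule (textbook form, not quoted from the source):

```latex
P(\omega_j \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \omega_j)\, P(\omega_j)}
                                   {\sum_k p(\mathbf{x} \mid \omega_k)\, P(\omega_k)}
```

Here $P(\omega_j)$ is the a priori probability of class $\omega_j$ and $p(\mathbf{x} \mid \omega_j)$ is the class-conditional density; a pattern $\mathbf{x}$ is assigned to the class with the largest posterior. It is exactly these two quantities that must be known, or estimated from labelled exemplars, for the approach to work.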

Goodacre, R.; Trew, S.; Wrigley-Jones, C.; Saunders, G.; Neal, M. J.; Porter, N.; Kell, D. B. Rapid and quantitative analysis of metabolites in fermentor broths using pyrolysis mass spectrometry with supervised learning: Application to the screening of Penicillium chrysogenum fermentations for the overproduction of penicillins. Anal. Chim. Acta 1995, 313, 25-43. [Pg.340]

ANNs need supervised learning schemes and can thus be applied to both classification and calibration. Because ANNs are nonlinear, model-free approaches, they are of special interest in calibration. [Pg.193]

The basis of classification is supervised learning, in which a set of known objects that belong unambiguously to certain classes is analyzed. From their features (analytical data), classification rules are obtained by means of relevant properties of the data, such as dispersion and correlation. [Pg.260]

Indeed, if the problem is simple enough that the connection weights can be found by a few moments' work with pencil and paper, there are other computational tools that would be more appropriate than neural networks. It is in more complex problems, in which the relationships that exist between data points are unknown so that it is not possible to determine the connection weights by hand, that an ANN comes into its own. The ANN must then discover the connection weights for itself through a process of supervised learning. [Pg.21]
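For example, a two-input AND neuron can be written down by hand, with no learning at all (an invented illustration of the "pencil and paper" case):

```python
# Hand-derived weights for a two-input AND neuron: w1 = w2 = 1, bias = -1.5.
def and_neuron(x1, x2, w1=1.0, w2=1.0, bias=-1.5):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

assert [and_neuron(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
```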

Genetic programming, a specific form of evolutionary computing, has recently been used for predicting oral bioavailability [23]. The results show a slight improvement compared with the ORMUCS Yoshida-Topliss approach. This supervised learning method and the other methods described demonstrate that at least qualitative (binned) predictions of oral bioavailability seem tractable directly from the structure. [Pg.452]

Next, supervised-learning pattern recognition methods were applied to the data set. The 111 bonds from these 28 molecules were classified as either breakable (36) or non-breakable (75), and a stepwise discriminant analysis showed that three variables, out of the six mentioned above, were particularly significant: resonance effect, R; bond polarity, Qa; and bond dissociation energy, BDE. With these three variables, 97.3% of the non-breakable bonds and 86.1% of the breakable bonds could be correctly classified. This says that chemical reactivity, as given by the ease of heterolysis of a bond, is well defined in the space determined by just those three parameters. The same conclusion can be drawn from the results of a K-nearest-neighbor analysis: with k assuming any value between one and ten, 87 to 92% of the bonds could be correctly classified. [Pg.273]
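A K-nearest-neighbour classifier of the kind used here is easy to sketch (the three-descriptor data below are invented placeholders, not the actual bond data from the study):

```python
import numpy as np

# K-nearest-neighbour classification in a three-descriptor space (sketch).
def knn_predict(x, X_train, y_train, k=3):
    dist = np.linalg.norm(X_train - x, axis=1)     # distances to all training bonds
    nearest = np.argsort(dist)[:k]                 # indices of the k closest bonds
    return np.bincount(y_train[nearest]).argmax()  # majority vote among them

X_train = np.array([[0.2, 0.1, 85.0],   # resonance effect, bond polarity, BDE
                    [0.8, 0.6, 60.0],
                    [0.3, 0.2, 80.0]])
y_train = np.array([0, 1, 0])            # 0 = non-breakable, 1 = breakable
print(knn_predict(np.array([0.7, 0.5, 62.0]), X_train, y_train, k=1))  # -> 1
```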

