Big Chemical Encyclopedia


Learning unsupervised

In Section 8.5 it was shown that supervised learning methods can be used to build effective classification models, using calibration data in which the classes are known. However, if one has a set of data for which no class information is available, there are several methods that can be used to explore the data for natural clustering of samples. These methods can also be called unsupervised learning methods. A few of these methods are discussed below. [Pg.307]

Once the classification space and the distance measure are defined, one must also define a linkage rule to be used in the HCA algorithm. A linkage rule refers to the specific means by which the distance between different clusters is calculated. Some examples of these are provided below  [Pg.307]
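The common linkage rules can be sketched in a few lines. The following is a minimal NumPy illustration (not taken from the source text) of three frequently used rules — single (nearest-neighbour), complete (furthest-neighbour), and average linkage — using Euclidean distance between two small hypothetical clusters:

```python
import numpy as np

def pairwise_distances(a, b):
    """Euclidean distances between every sample in cluster a and every sample in cluster b."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)

def single_linkage(a, b):
    """Nearest-neighbour rule: distance between the closest pair of samples."""
    return pairwise_distances(a, b).min()

def complete_linkage(a, b):
    """Furthest-neighbour rule: distance between the most distant pair of samples."""
    return pairwise_distances(a, b).max()

def average_linkage(a, b):
    """Mean of all between-cluster sample distances."""
    return pairwise_distances(a, b).mean()

# Two hypothetical clusters of 2-D samples
cluster1 = np.array([[0.0, 0.0], [1.0, 0.0]])
cluster2 = np.array([[4.0, 0.0], [5.0, 0.0]])

print(single_linkage(cluster1, cluster2))    # 3.0
print(complete_linkage(cluster1, cluster2))  # 5.0
print(average_linkage(cluster1, cluster2))   # 4.0
```

The choice of rule affects the shape of the clusters the HCA algorithm tends to find: single linkage favours elongated, chained clusters, while complete linkage favours compact ones.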

Table (Pg. 309): Sample number | Known class | Groupings obtained from HCA (four clusters specified) [Pg.309]

1) Initialize the weights to small random numbers, usually between 0 and 1. The weight between input unit i and grid node j is w_ij. [Pg.62]

2) Present an input vector (pattern) from the training set to the network. [Pg.62]

3) For a given input vector, compute the distance to each grid node, where distance is defined, for example, as the Euclidean distance (Equation 4.1). [Pg.62]

4) Find the grid node k with the smallest distance measure. [Pg.62]

5) Strengthen the connection between the input layer and the grid node with the minimum distance (and all nodes in its neighborhood). [Pg.62]

6) Repeat steps 2 through 5 for all the patterns in the input set until some stopping criterion is met, e.g., a fixed number of iterations, or δ is less than some pre-specified small number for all input patterns. Note that the learning rate η and the neighborhood size decrease as the number of training iterations increases. [Pg.62]
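The steps above can be sketched in NumPy. This is an illustrative toy implementation (a 1-D grid of five nodes, made-up training patterns, and assumed decay schedules for the learning rate and neighborhood size — the source does not specify these details):

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, grid_size = 2, 5                  # 2-D inputs, 1-D grid of 5 nodes for simplicity
weights = rng.random((grid_size, n_inputs)) # step 1: small random weights in [0, 1]

# Hypothetical training patterns
patterns = np.array([[0.1, 0.1], [0.9, 0.9], [0.5, 0.5]])

eta, radius = 0.5, 2                        # learning rate and neighborhood size (assumed values)
for it in range(20):                        # step 6: repeat until stopping criterion
    for x in patterns:                      # step 2: present each input pattern
        d = np.linalg.norm(weights - x, axis=1)   # step 3: Euclidean distance to each node
        k = int(np.argmin(d))                     # step 4: winning node
        lo, hi = max(0, k - radius), min(grid_size, k + radius + 1)
        for j in range(lo, hi):                   # step 5: update winner and its neighborhood
            weights[j] += eta * (x - weights[j])
    eta *= 0.9                              # learning rate decreases with iterations
    if radius > 0 and it % 7 == 6:
        radius -= 1                         # neighborhood shrinks as training proceeds
```

After training, each input pattern should have at least one map node whose weight vector lies close to it, which is how the map comes to reflect the structure of the input data.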


The underlying learning process can follow different concepts. The two major learning strategies are unsupervised learning and supervised learning. [Pg.441]

The Kohonen network is a neural network which uses an unsupervised learning strategy. See Section 9.5.3 for a more detailed description. [Pg.455]

The Kohonen network adapts its values only with respect to the input values and thus reflects the input data. This approach is unsupervised learning, as the adaptation is done merely with respect to the data describing the individual objects. [Pg.458]

The main characteristics of the method developed in our group for reaction classification are: (1) the representation of a reaction by physicochemical values calculated for the bonds being broken and made during the reaction, and (2) the use of the unsupervised learning method of a self-organizing neural network for the perception of similarity of chemical reactions [3, 4]. [Pg.545]

Multiple linear regression is strictly a parametric supervised learning technique. A parametric technique is one which assumes that the variables conform to some distribution (often the Gaussian distribution); the properties of the distribution are assumed in the underlying statistical method. A non-parametric technique does not rely upon the assumption of any particular distribution. A supervised learning method is one which uses information about the dependent variable to derive the model. An unsupervised learning method does not. Thus cluster analysis, principal components analysis and factor analysis are all examples of unsupervised learning techniques. [Pg.719]
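The supervised character of multiple linear regression can be made concrete: the model coefficients are fitted against a known dependent variable y. Below is a minimal NumPy sketch with hypothetical, noise-free calibration data (here constructed so that y = x1 + 2·x2):

```python
import numpy as np

# Hypothetical calibration data: 4 samples, 2 descriptors, known responses y
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])   # y = 1*x1 + 2*x2, no noise

# Supervised step: least-squares fit of the coefficients against the known y
X1 = np.column_stack([np.ones(len(X)), X])    # prepend an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
# coef ≈ [0, 1, 2]: intercept ≈ 0, slopes ≈ 1 and 2
```

An unsupervised method applied to the same X (e.g., cluster analysis or PCA) would never consult y at all — that is the essential distinction drawn in the paragraph above.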

Fig. 3. Types of pattern recognition techniques: (a) preprocessing, (b) display, (c) unsupervised learning, and (d) supervised learning.
In unsupervised learning, the outcome is usually a hypothesis to then be tested, often using classification or prediction methods. If the unsupervised learning process suggests the presence of distinct clusters, the hypothesis can be tested by applying a classification method to the data. A low number of misclassified samples would tend to reinforce the hypothesis. [Pg.424]

Unsupervised learning—In this type the network is able to discover statistical regularities in its input space and automatically develops different modes of behavior to represent different types of inputs. [Pg.5]

As described in the Introduction to this volume (Chapter 28), neural networks can be used to carry out certain tasks of supervised or unsupervised learning. In particular, Kohonen mapping is related to clustering. It will be explained in more detail in Chapter 44. [Pg.82]

In the following sections we propose typical methods of unsupervised learning and pattern recognition, the aim of which is to detect patterns in chemical, physicochemical and biological data, rather than to make predictions of biological activity. These inductive methods are useful in generating hypotheses and models which are to be verified (or falsified) by statistical inference. Cluster analysis has... [Pg.397]

R. Goodacre, J. Pygall and D.B. Kell, Plant seed classification using pyrolysis mass spectrometry with unsupervised learning: the application of auto-associative and Kohonen artificial neural networks. Chemom. Intell. Lab. Syst., 33 (1996) 69-83. [Pg.698]

Inhomogeneities in data can be studied by cluster analysis. By means of cluster analysis, groupings of both objects and variables can be found without any prior information on the type and number of groupings (unsupervised learning, unsupervised pattern recognition). [Pg.256]

Principal component analysis (PCA) can be considered as the mother of all methods in multivariate data analysis. The aim of PCA is dimension reduction, and PCA is the most frequently applied method for computing linear latent variables (components). PCA can be seen as a method to compute a new coordinate system formed by the latent variables, which is orthogonal, and where only the most informative dimensions are used. Latent variables from PCA optimally represent the distances between the objects in the high-dimensional variable space — remember, the distance of objects is considered as an inverse similarity of the objects. PCA considers all variables and accommodates the total data structure; it is a method for exploratory data analysis (unsupervised learning) and can be applied to practically any X-matrix; no y-data (properties) are considered and therefore none are necessary. [Pg.73]
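The description above can be sketched with a singular value decomposition of the mean-centred data matrix, which is a standard way to compute PCA. The data below are synthetic (50 hypothetical objects measured on 3 variables, with one dominant latent direction), chosen only to illustrate that the first component captures most of the variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical X-matrix: 50 objects x 3 variables, dominated by one latent variable
X = rng.normal(size=(50, 1)) @ np.array([[1.0, 2.0, 0.5]]) \
    + 0.05 * rng.normal(size=(50, 3))      # small measurement noise

Xc = X - X.mean(axis=0)                    # mean-centre (usual PCA pretreatment)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                             # object coordinates in the new system
loadings = Vt                              # orthogonal latent-variable directions
explained = s**2 / np.sum(s**2)            # fraction of total variance per component
# explained[0] is close to 1: one component suffices for these data
```

Note that y-data never enter the computation — only X is decomposed — which is exactly why PCA counts as an unsupervised (exploratory) method.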

Two generally different scenarios can be found for applications of machine learning technology: so-called supervised and unsupervised learning. The difference is the presence or absence of observation of the desired output on a training data set. [Pg.74]

Methods for unsupervised learning invariably aim at compression or the extraction of information present in the data. Most prominent in this field are clustering methods [140], self-organizing networks [141], any type of dimension reduction (e.g., principal component analysis [142]), or the task of data compression itself. All of the above may be useful to interpret and potentially to visualize the data. [Pg.75]

A self-organizing Kohonen map of the total database of cleaved retrosynthetic fragments generated as the result of an unsupervised learning procedure (data not shown) indicates that the cleaved fragments occupy a wide area on the map, characterized... [Pg.298]

It can be shown that the unsupervised learning methodology based on the Kohonen self-organizing map algorithm can be effectively used for differentiation between various receptor-specific groups of GPCR ligands. The method is similar to that described in Section 12.2.6. [Pg.307]

The two pattern recognition techniques used in this work are among those usually used for unsupervised learning. The results will be examined for the clusters which arise from the analysis of the data. On the other hand, the number of classes and a rule for assigning compounds to each had already been determined by the requirements of the mixture analysis problem. One might suppose that a supervised approach would be more suitable. In our case, this is not so because our aim is not to develop a classifier. Instead, we wish to examine the data base of FTIR spectra and the metric to see if they are adequate to help solve a more difficult problem, that of analyzing complex mixtures by class. [Pg.161]



© 2024 chempedia.info