Big Chemical Encyclopedia


Binary Encoded Patterns

If all features are binary encoded (x_i = 0 or 1), some simplifications and special cases exist. One possible feature selection method determines those features which have maximum variance among the a posteriori probabilities as calculated by the Bayes rule [170, 171, 353]. [Pg.110]

The a posteriori probability p(m|x_i = 1) of a particular pattern belonging to class m under the condition that feature i has the value 1 is given by equation (101). [Pg.110]
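Equation (101) is not reproduced in this excerpt, but the posterior follows directly from Bayes' rule: p(m|x_i = 1) = p(x_i = 1|m) p(m) / Σ_k p(x_i = 1|k) p(k). A minimal sketch (function names are illustrative, not from the source) that also evaluates the variance of the posteriors across classes — the selection criterion mentioned above — might look like:

```python
def posterior_given_feature(p_feat_given_class, priors):
    """p(m | x_i = 1) for each class m, by Bayes' rule.

    p_feat_given_class[m] = p(x_i = 1 | class m)
    priors[m]             = a priori probability of class m
    """
    evidence = sum(p * q for p, q in zip(p_feat_given_class, priors))
    return [p * q / evidence for p, q in zip(p_feat_given_class, priors)]

def posterior_variance(p_feat_given_class, priors):
    """Variance of the posteriors across classes; large values mean
    feature i discriminates strongly between the classes."""
    post = posterior_given_feature(p_feat_given_class, priors)
    m = sum(post) / len(post)
    return sum((p - m) ** 2 for p in post) / len(post)

# Two classes: feature i fires in 80 % of class-1 and 20 % of class-2 patterns.
post = posterior_given_feature([0.8, 0.2], [0.5, 0.5])
# post == [0.8, 0.2]
```

A feature whose posteriors are nearly equal across classes (variance near zero) carries little class information and would be discarded first.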

In a multicategory classification problem with m classes, a set … [Pg.110]

Lowry and Isenhour [170, 171] expanded this method by introducing a cost factor for each feature. The cost factor of feature i is proportional to the number of times a 1 appears in feature i. The weight of feature i is calculated as the product of the corresponding cost factor … [Pg.110]

Another approach [244] to feature selection uses information theory. (A short introduction to information theory is given in Chapter 11.6.1.) The average amount of information a feature contributes to distinguishing patterns of class 1 from patterns of class 2 is the mutual information. [Pg.110]
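The mutual information formula itself is cut off in this excerpt; the standard definition is I(x_i; c) = Σ p(x_i, c) log2[p(x_i, c) / (p(x_i) p(c))], estimated here from counts. A self-contained sketch (the function name is illustrative):

```python
import math

def mutual_information(patterns, labels, i):
    """Mutual information (in bits) between binary feature i and the class
    label, estimated from the relative frequencies in the training set."""
    n = len(patterns)
    joint = {}                     # counts of (feature value, class) pairs
    for x, c in zip(patterns, labels):
        joint[(x[i], c)] = joint.get((x[i], c), 0) + 1
    px, pc = {}, {}                # marginal counts
    for (xv, c), cnt in joint.items():
        px[xv] = px.get(xv, 0) + cnt
        pc[c] = pc.get(c, 0) + cnt
    mi = 0.0
    for (xv, c), cnt in joint.items():
        # p(x,c) / (p(x) p(c)) = cnt * n / (px * pc)
        mi += (cnt / n) * math.log2(cnt * n / (px[xv] * pc[c]))
    return mi

# A feature that perfectly separates two classes carries exactly 1 bit:
mi = mutual_information([[1], [1], [0], [0]], [1, 1, 2, 2], 0)
# mi == 1.0
```

Features are then ranked by mutual information, and those carrying the least class information are discarded.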


The normalized distance is independent of the number of dimensions [175]; the Hamming and Tanimoto distances are suitable for binary encoded patterns [353, 356, 357]. [Pg.64]
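Both distances have simple closed forms for binary patterns: Hamming counts differing bit positions, and the Tanimoto distance is 1 − c/(a + b − c), where a and b are the numbers of 1-bits in each pattern and c the number of common 1-bits. A minimal sketch:

```python
def hamming(x, y):
    """Number of bit positions in which two binary patterns differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

def tanimoto(x, y):
    """Tanimoto distance: 1 - c / (a + b - c), with a, b = 1-bits in x, y
    and c = 1-bits common to both."""
    a, b = sum(x), sum(y)
    c = sum(xi & yi for xi, yi in zip(x, y))
    return 1.0 - c / (a + b - c)

x, y = [1, 1, 0, 1], [1, 0, 1, 1]
# hamming(x, y) == 2 ; tanimoto(x, y) == 0.5
```

Unlike Hamming distance, the Tanimoto distance ignores positions where both patterns are 0, which suits sparse binary spectra where shared absences carry little information.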

(x_1 … x_d, equivalent to a binary encoded pattern vector). … [Pg.72]

The patterns used with this method are binary encoded patterns x. Groups (n-tuples) of n (usually 3 or 4) pattern components are randomly chosen and associated with a "memory element". A memory element has 2^n addressable 1-bit storage locations. The configuration of the bit pattern of the n-tuple is used to address one of the 2^n locations. Example: for n = 4, a four-bit binary pattern is interpreted as one of the decimal numbers 0 to 15. In the training stage, a 1 is written into the location addressed by the n-tuple. [Pg.74]
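The scheme above can be sketched as follows (a minimal single-class network; class names, parameters, and the scoring rule are illustrative assumptions — one such network is trained per class, and an unknown pattern is assigned to the class whose network responds most strongly):

```python
import random

class NTupleClassifier:
    """Digital learning network: each memory element watches a random
    n-tuple of pattern bits and owns 2**n one-bit storage locations."""

    def __init__(self, dim, n=4, n_elements=8, seed=0):
        rng = random.Random(seed)
        # each memory element is tied to a fixed random n-tuple of positions
        self.tuples = [rng.sample(range(dim), n) for _ in range(n_elements)]
        self.memory = [set() for _ in range(n_elements)]  # addresses storing a 1

    def _address(self, pattern, tup):
        # interpret the n selected bits as a binary number in 0 .. 2**n - 1
        addr = 0
        for i in tup:
            addr = (addr << 1) | pattern[i]
        return addr

    def train(self, pattern):
        # write a 1 into the location addressed by each n-tuple
        for mem, tup in zip(self.memory, self.tuples):
            mem.add(self._address(pattern, tup))

    def score(self, pattern):
        # number of memory elements responding with a stored 1
        return sum(self._address(pattern, tup) in mem
                   for mem, tup in zip(self.memory, self.tuples))
```

Training is a single pass of bit writes, which is why the method is fast; no arithmetic on the pattern components is needed at all.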

The method of binary template matching is equivalent to a learning network with n = 1. A binary template of a class is the superposition (logical AND function) of all binary encoded patterns of that class. [Pg.75]
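The AND superposition keeps exactly those bits that are set in every training pattern of the class. A minimal sketch (function names are illustrative; the matching rule — every template bit must be present in the unknown pattern — is one plausible reading of the method):

```python
def class_template(patterns):
    """Binary template of a class: bitwise AND over all patterns,
    i.e. the bits set in every training pattern of that class."""
    template = list(patterns[0])
    for p in patterns[1:]:
        template = [t & b for t, b in zip(template, p)]
    return template

def matches(template, pattern):
    """True if every bit of the template is also set in the pattern."""
    return all(p >= t for t, p in zip(template, pattern))

tmpl = class_template([[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 0, 1]])
# tmpl == [1, 0, 0, 1]
```

Note the trade-off: each additional training pattern can only clear template bits, so a large, noisy class tends toward an almost-empty template — consistent with the remark below that larger data sets degrade this family of methods.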

The digital learning network is a simple and fast method for the classification of binary encoded patterns. The method suffers, however, from the fact that larger data sets give less satisfactory results. [Pg.77]

This classification method was first applied to chemical problems by Franzen and Hillig [86, 87, 108]. Although many simplifications have been introduced into this maximum likelihood method, a considerable computational effort is necessary for the training and application of such parametric classifiers. However, the effort is much smaller if binary encoded patterns are used (Chapter 5.4). [Pg.82]

Bayes- and Maximum Likelihood Classifiers for Binary Encoded Patterns... [Pg.83]

In a binary encoded pattern x, each component has the discrete value 0 or 1. The d-dimensional probability density of a class m is therefore defined by only d probabilities p(x_i|m) with i = 1, 2, …, d. [Pg.83]

The logarithm of (86) defines a Bayes classifier for binary encoded pattern x. The discriminant function B is linear and distinguishes between two classes (equation (87)). [Pg.84]
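Equations (86) and (87) are not reproduced in this excerpt. Under the stated assumption that the class density factorizes into the d per-feature probabilities, the log of the likelihood ratio gives the usual linear discriminant for independent binary features — a sketch of that standard form (names and the two-class decision rule B > 0 are illustrative):

```python
import math

def bayes_discriminant(x, p1, p2, prior1=0.5, prior2=0.5):
    """Linear Bayes discriminant B(x) for binary patterns with independent
    features; p1[i], p2[i] = p(x_i = 1 | class 1/2). B > 0 -> class 1."""
    b = math.log(prior1 / prior2)
    for xi, a, c in zip(x, p1, p2):
        # each feature contributes a term linear in x_i
        b += xi * math.log(a / c) + (1 - xi) * math.log((1 - a) / (1 - c))
    return b
```

Because B(x) is a sum of terms linear in the x_i, training reduces to estimating 2d probabilities plus the priors — the source of the small computational effort noted above for binary patterns.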

Difficulties arising in applications of feature selection methods to binary encoded patterns have been discussed by Miyashita et al. [213]. [Pg.111]

A feature selection method for binary encoded patterns - called the attribute inclusion algorithm [365] ("attribute" is equivalent to feature) - was applied to chemical problems by Schechter and Jurs [118, 260]. [Pg.111]

Crawford and Morrison [60] found that a sophisticated mass spectral interpretation program had a capability of the same order as that of an undergraduate student. A similar result has been reported [119] for the interpretation of binary encoded infrared spectra. Kowalski et al. [162] emphasized the superiority of pattern recognition methods in the interpretation of multidimensional data. [Pg.140]

Hierarchical clustering was applied to find those wavelengths in binary encoded infrared spectra that show the smallest correlations. The result was utilized for a library search [104]. Methods similar to pattern recognition have been used for the detection of atmospheric constituents by a CO2 laser [214, 215]. [Pg.161]
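The clustering procedure itself is not shown in this excerpt. As a hedged stand-in (a greedy selection, not the original hierarchical method; all names are illustrative), one can pick wavelength columns that are mutually least correlated:

```python
from statistics import mean

def correlation(a, b):
    """Pearson correlation of two feature columns (0.0 if a column is constant)."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da > 0 and db > 0 else 0.0

def least_correlated_features(X, k):
    """Greedily pick k feature columns, each chosen to minimize its largest
    absolute correlation with the columns already selected."""
    cols = list(zip(*X))
    selected = [0]
    while len(selected) < k:
        best, best_score = None, None
        for j in range(len(cols)):
            if j in selected:
                continue
            score = max(abs(correlation(cols[j], cols[s])) for s in selected)
            if best_score is None or score < best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Columns 0 and 1 are identical; column 2 is uncorrelated with them,
# so selecting 2 features keeps columns 0 and 2.
X = [[1, 1, 0], [1, 1, 1], [0, 0, 0], [0, 0, 1]]
# least_correlated_features(X, 2) == [0, 2]
```

Either way, the goal is the same as in the source: a small set of nearly independent wavelengths that compresses the spectrum for a fast library search.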

Encoding converts binary information into patterns of magnetic flux on a hard disk's surface; this is how the data is written to the surface. [Pg.191]



© 2024 chempedia.info