Big Chemical Encyclopedia


Learning vector quantization

Neural networks are nowadays predominantly applied to classification tasks. Here, three kinds of networks are tested. First, the backpropagation network is used, because it is the most robust and common network. The other two networks considered in this study have architectures specially adapted to classification tasks. The Learning Vector Quantization (LVQ) network consists of a neuronal structure that implements the LVQ learning strategy. The Fuzzy Adaptive Resonance Theory (Fuzzy-ART) network is a sophisticated network with a very complex structure but high performance on classification tasks. Overviews of this extensive subject are given in [2] and [6]. [Pg.463]

S.J. Dixon and R.G. Brereton, Comparison of performance of five common classifiers represented as boundary methods: Euclidean distance to centroids, linear discriminant analysis, quadratic discriminant analysis, learning vector quantization and support vector machines, as dependent on data structure, Chemom. Intell. Lab. Syst., 95, 1-17 (2009). [Pg.437]
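The simplest of the boundary methods compared above, Euclidean distance to centroids, is easy to sketch: each class is summarized by its mean vector, and a sample is assigned to the class whose centroid is nearest. The snippet below is a minimal illustration; the function names are our own, not from the cited paper.

```python
import numpy as np

def centroid_fit(X, y):
    """Compute one mean vector (centroid) per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def centroid_predict(classes, centroids, X):
    """Assign each sample to the class of its nearest centroid (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

Because the decision boundary between any two classes is the hyperplane midway between their centroids, this classifier is linear and serves as a natural baseline for the more flexible methods in the comparison.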

Three commonly used ANN methods for classification are the perceptron network, the probabilistic neural network, and the learning vector quantization (LVQ) network. Details on these methods can be found in several references.57,58 Only an overview of them will be presented here. In all cases, one can use all available X-variables, a selected subset of X-variables, or a set of compressed variables (e.g. PCs from PCA) as inputs to the network. As with quantitative neural networks, the network parameters are estimated by applying a learning rule to a series of samples of known class, the details of which will not be discussed here. [Pg.296]
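As a sketch of the variable-compression option mentioned above, the snippet below computes principal-component scores via an SVD of the mean-centred data; the scores, rather than the raw X-variables, would then be fed to the network as inputs. The function name and interface are illustrative, not from the cited text.

```python
import numpy as np

def pca_scores(X, n_components):
    """Compress X to its first n_components principal-component scores.

    Mean-centre the data, take the SVD, and project onto the leading
    right singular vectors (the PCA loadings).
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

Feeding a handful of scores instead of hundreds of correlated X-variables reduces the number of network weights to estimate and often stabilizes training.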

Some historically important artificial neural networks are Hopfield Networks, Perceptron Networks and Adaline Networks, while the most well-known are Backpropagation Artificial Neural Networks (BP-ANN), Kohonen Networks (K-ANN, or Self-Organizing Maps, SOM), Radial Basis Function Networks (RBFN), Probabilistic Neural Networks (PNN), Generalized Regression Neural Networks (GRNN), Learning Vector Quantization Networks (LVQ), and Adaptive Bidirectional Associative Memory (ABAM). [Pg.59]

Learning vector quantization (LVQ) is a supervised learning technique invented by Teuvo Kohonen (1988, 1990). The LVQ network is closely related to the self-organizing map NN. Both of them are based on the Kohonen layer, which is capable of sorting items into categories of similar objects with the aid of training samples, and are widely used for classification. [Pg.30]
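The core of the basic LVQ1 rule can be sketched in a few lines: for each training sample, the nearest prototype (codebook vector) is moved toward the sample if their class labels agree, and away from it otherwise. The code below is a minimal illustration under our own choice of initialization and defaults, not Kohonen's original implementation.

```python
import numpy as np

def train_lvq1(X, y, n_protos_per_class=1, lr=0.1, epochs=50, seed=0):
    """Minimal LVQ1 training sketch.

    Prototypes are initialised on random samples of each class; the
    winning (nearest) prototype is attracted to same-class samples and
    repelled from different-class samples.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, labels = [], []
    for c in classes:
        idx = rng.choice(np.flatnonzero(y == c), n_protos_per_class, replace=False)
        protos.append(X[idx])
        labels.extend([c] * n_protos_per_class)
    W = np.vstack(protos).astype(float)
    labels = np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(W - X[i], axis=1)
            w = np.argmin(d)                       # winning prototype
            sign = 1.0 if labels[w] == y[i] else -1.0
            W[w] += sign * lr * (X[i] - W[w])      # attract or repel
    return W, labels

def predict_lvq(W, labels, X):
    """Classify each sample by the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```

Because classification depends only on distances to a small set of prototypes, the trained model is compact and its decisions are easy to inspect, which is one reason LVQ remains popular for classification tasks.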

Kohonen, T., 1988. Learning vector quantization. Neural Networks, 1 (1), 303. [Pg.39]

Kohonen, T., 1990. Improved versions of learning vector quantization. Proceedings of the International Joint Conference on Neural Networks, Vols 1-3, pp. A545-A550. Washington, USA. [Pg.39]

Schneider, P., Biehl, M. and Hammer, B., 2009. Adaptive relevance matrices in learning vector quantization. Neural Computation, 21 (12), 3532-3561. [Pg.39]

Classification using the learning vector quantization network... [Pg.51]

Luo, X.C., Singh, C.A. and Patton, A.D., 2003. Power system reliability evaluation using learning vector quantization and Monte Carlo simulation. Electric Power Systems Research, 66 (2), 163-169. [Pg.53]

Finally, we briefly mention several other ANNs that have seen limited chemical applications. A connectionist hyperprism ANN has been used in the analysis of ion mobility spectra. This network shares characteristics of Kohonen and backpropagation networks. The DYSTAL network has been successfully used to classify orange juice as either adulterated or unadulterated.200 A learning vector quantizer (LVQ) network has been used to identify multiple analytes from optical sensor array data. A wavelet ANN has been applied to the inclusion of β-cyclodextrin with benzene derivatives, and a... [Pg.100]

Learning Vector Quantization Network for PAPR Reduction in Orthogonal Frequency Division Multiplexing Systems... [Pg.106]

Keywords Orthogonal Frequency Division Multiplexing, Peak to Average Power Ratio, Learning Vector Quantization, Neural Network. [Pg.106]

