
Learning vector quantizer network

Some historically important artificial neural networks are Hopfield Networks, Perceptron Networks and Adaline Networks, while the most well-known are Backpropagation Artificial Neural Networks (BP-ANN), Kohonen Networks (K-ANN, or Self-Organizing Maps, SOM), Radial Basis Function Networks (RBFN), Probabilistic Neural Networks (PNN), Generalized Regression Neural Networks (GRNN), Learning Vector Quantization Networks (LVQ), and Adaptive Bidirectional Associative Memory (ABAM). [Pg.59]

Classification using the learning vector quantization network... [Pg.51]

Learning Vector Quantization Network for PAPR Reduction in Orthogonal Frequency Division Multiplexing Systems... [Pg.106]

Neural networks are nowadays predominantly applied to classification tasks. Here, three kinds of networks are tested. First, the backpropagation network is used, since it is the most robust and common network. The other two networks considered in this study have architectures specially adapted to classification tasks. The Learning Vector Quantization (LVQ) network consists of a neuronal structure that implements the LVQ learning strategy. The Fuzzy Adaptive Resonance Theory (Fuzzy-ART) network is a sophisticated network with a very complex structure but high performance on classification tasks. Overviews of this extensive subject are given in [2] and [6]. [Pg.463]

Three commonly used ANN methods for classification are the perceptron network, the probabilistic neural network, and the learning vector quantization (LVQ) network. Details on these methods can be found in several references.57,58 Only an overview of them is presented here. In all cases, one can use all available X-variables, a selected subset of X-variables, or a set of compressed variables (e.g. PCs from PCA) as inputs to the network. As with quantitative neural networks, the network parameters are estimated by applying a learning rule to a series of samples of known class, the details of which will not be discussed here. [Pg.296]
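As a hedged illustration of the compressed-input option mentioned above, the short Python sketch below projects the X-block onto a few principal components before classification. The data shapes, the five-component choice, and the use of scikit-learn with a k-nearest-neighbour classifier standing in for the network are all assumptions for illustration, not details from the excerpt.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier  # stand-in for the classification network

# X: (n_samples, n_variables) calibration block; y: known class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))            # illustrative data only
y = rng.integers(0, 2, size=60)

pca = PCA(n_components=5).fit(X)          # compress the X-variables to 5 PCs
T = pca.transform(X)                      # PC scores used as network inputs
clf = KNeighborsClassifier(3).fit(T, y)   # learn from samples of known class
print(clf.predict(pca.transform(X[:3])))  # classify new (here: training) samples
```

Compressing to PCs decorrelates the inputs and shortens training, at the cost of discarding minor-variance directions that may still carry class information.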

Learning vector quantization (LVQ) is a supervised learning technique invented by Teuvo Kohonen (1988, 1990). The LVQ network is a closely related, supervised counterpart of the self-organizing map NN. Both of them are based on the Kohonen layer, which is capable of sorting items into categories of similar objects with the aid of training samples, and are widely used for classification. [Pg.30]
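To make the learning rule concrete, here is a minimal NumPy sketch of the basic LVQ1 update: the nearest prototype is pulled toward a training sample of the same class and pushed away otherwise. The prototype count, learning-rate schedule, and toy data are illustrative assumptions, not details from the excerpt.

```python
import numpy as np

def train_lvq1(X, y, n_protos_per_class=2, lr=0.1, epochs=50, seed=0):
    """Basic LVQ1: move the nearest prototype toward a sample of the
    same class, away from it otherwise."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        # Initialise prototypes from randomly chosen samples of each class.
        idx = rng.choice(np.flatnonzero(y == c), n_protos_per_class, replace=False)
        protos.append(X[idx])
        labels.extend([c] * n_protos_per_class)
    W, w_lab = np.vstack(protos).astype(float), np.array(labels)

    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)  # linearly decaying learning rate
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(W - X[i], axis=1))  # nearest prototype
            sign = 1.0 if w_lab[j] == y[i] else -1.0
            W[j] += sign * alpha * (X[i] - W[j])
    return W, w_lab

def predict_lvq(W, w_lab, X):
    """Classify each sample by the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return w_lab[np.argmin(d, axis=1)]

# Illustrative use on toy two-class data:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.repeat([0, 1], 30)
W, w_lab = train_lvq1(X, y)
print((predict_lvq(W, w_lab, X) == y).mean())  # training accuracy
```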

Kohonen, T., 1988. Learning vector quantization. Neural Networks, 1(1), 303. [Pg.39]

Kohonen, T., 1990. Improved versions of learning vector quantization. Proceedings of the International Joint Conference on Neural Networks, vol. 1, pp. 545-550. Washington, DC, USA. [Pg.39]

Finally, we briefly mention several other ANNs that have seen limited chemical applications. A connectionist hyperprism ANN has been used in the analysis of ion mobility spectra. This network shares characteristics of Kohonen and backpropagation networks. The DYSTAL network has been successfully used to classify orange juice as either adulterated or unadulterated.200 A learning vector quantizer (LVQ) network has been used to identify multiple analytes from optical sensor array data. A wavelet ANN has been applied to the inclusion of β-cyclodextrin with benzene derivatives, and a... [Pg.100]

Keywords: Orthogonal Frequency Division Multiplexing, Peak to Average Power Ratio, Learning Vector Quantization, Neural Network. [Pg.106]

The usefulness of any proposed scheme lies in reducing the PAPR without being computationally complex. In our work, we modified an existing, promising reduction technique, selected mapping (SLM), by using a Learning Vector Quantization (LVQ) network. In an SLM system, an OFDM symbol is mapped to a set of quasi-independent equivalent symbols, and the lowest-PAPR symbol is selected for transmission. The trade-off for PAPR reduction in SLM is computational complexity, as each mapping requires an additional inverse fast Fourier transform (IFFT) operation in the transmitter. With Learning Vector Quantization based SLM (LVQ-SLM), we eliminate the additional IFFT operations, yielding a very efficient PAPR reduction technique with reduced computational complexity. [Pg.107]
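To make the trade-off concrete, the sketch below implements conventional SLM with NumPy: each candidate phase rotation costs one IFFT, and the lowest-PAPR candidate is kept. The subcarrier count, QPSK mapping, and candidate number are illustrative assumptions; the LVQ-SLM scheme described above replaces the repeated IFFTs with a trained LVQ mapping and is not reproduced here.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain symbol, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def slm_select(X_freq, n_candidates=8, seed=0):
    """Conventional SLM: rotate the frequency-domain symbol by random
    +/-1 phase sequences, take one IFFT per candidate, keep the lowest PAPR."""
    rng = np.random.default_rng(seed)
    best_x, best_papr, best_phases = None, np.inf, None
    for _ in range(n_candidates):
        phases = rng.choice([1.0, -1.0], size=X_freq.shape)  # side information
        x = np.fft.ifft(X_freq * phases)                     # one IFFT per candidate
        p = papr_db(x)
        if p < best_papr:
            best_x, best_papr, best_phases = x, p, phases
    return best_x, best_papr, best_phases

# Illustrative QPSK OFDM symbol on 64 subcarriers.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(64, 2))
X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
x_plain = np.fft.ifft(X)
x_best, papr_best, _ = slm_select(X)
print(f"PAPR: original {papr_db(x_plain):.2f} dB, SLM-selected {papr_best:.2f} dB")
```

In a real transmitter, the index of the chosen phase sequence must be sent as side information so the receiver can undo the rotation before demodulation.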

Flotzinger, D., Kalcher, J., Pfurtscheller, G.: Suitability of Learning Vector Quantization for on-line learning: A case study of EEG classification. In: Proc. WCNN'93 World Congress on Neural Networks, vol. I, pp. 224-227. Lawrence Erlbaum, Hillsdale, NJ [Pg.114]

Ueda, N., Nakano, R.: A new competitive learning approach based on an equidistortion principle for designing optimal vector quantizers. Neural Networks 7(8), 1211-1227 (1994)

Young, F.W.: Multidimensional scaling. Encyclopedia of Statistical Sciences 5, 649-659 [Pg.44]

Keywords: Color · Vector quantization · Competitive learning · Neural networks · Saliency · Binarization [Pg.212]


See other pages where Learning vector quantizer network is mentioned: [Pg.540], [Pg.122], [Pg.27], [Pg.30], [Pg.43], [Pg.86], [Pg.464], [Pg.119], [Pg.92], [Pg.111], [Pg.31], [Pg.41], [Pg.42], [Pg.51], [Pg.52], [Pg.106], [Pg.108], [Pg.112], [Pg.113], [Pg.128], [Pg.104], [Pg.64], [Pg.91], [Pg.757], [Pg.180], [Pg.160]

Learning vector quantization

Quantization

Quantized

Vector quantization
