
Vector quantization

Neural networks are nowadays predominantly applied to classification tasks. Here, three kinds of network are tested. First, the backpropagation network is used, because it is the most robust and common network. The other two networks considered in this study have architectures specially adapted to classification tasks. The Learning Vector Quantization (LVQ) network consists of a neuronal structure that implements the LVQ learning strategy. The Fuzzy Adaptive Resonance Theory (Fuzzy-ART) network is a sophisticated network with a very complex structure but high performance on classification tasks. Overviews of this extensive subject are given in [2] and [6]. [Pg.463]
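To make the LVQ learning strategy concrete, the following is a minimal sketch of the classic LVQ1 update rule; this is a textbook formulation, not code from the cited study, and all names and parameter values are illustrative:

    import numpy as np

    def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
        # Classic LVQ1: the winning prototype moves toward a training sample
        # of its own class and away from a sample of a different class.
        P = prototypes.astype(float).copy()
        for _ in range(epochs):
            for x, label in zip(X, y):
                w = np.argmin(np.linalg.norm(P - x, axis=1))  # nearest prototype
                sign = 1.0 if proto_labels[w] == label else -1.0
                P[w] += sign * lr * (x - P[w])
        return P

A new sample is then classified with the label of its nearest prototype.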

Feehs, R. J., and Arce, G. R., Vector Quantization for Data Compression of Trend Recordings, Tech. Rep. 88-11-1, University of Delaware, Dept. Elect. Eng., Newark,... [Pg.268]

Gray, R. M., Vector quantization. IEEE ASSP Mag., April, pp. 4-29 (1984). [Pg.268]

S.J. Dixon and R.G. Brereton, Comparison of performance of five common classifiers represented as boundary methods: Euclidean distance to centroids, linear discriminant analysis, quadratic discriminant analysis, learning vector quantization and support vector machines, as dependent on data structure, Chemom. Intell. Lab. Syst., 95, 1-17 (2009). [Pg.437]

Gray, R. M., Karnin, E. D. (1982) Multiple local minima in vector quantizers. IEEE Trans. Inform. Theory 28, 256-261. [Pg.89]

Three commonly used ANN methods for classification are the perceptron network, the probabilistic neural network, and the learning vector quantization (LVQ) network. Details on these methods can be found in several references.57,58 Only an overview of them will be presented here. In all cases, one can use all available X-variables, a selected subset of X-variables, or a set of compressed variables (e.g. PCs from PCA) as inputs to the network. As with quantitative neural networks, the network parameters are estimated by applying a learning rule to a series of samples of known class, the details of which will not be discussed here. [Pg.296]
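The "compressed variables" option is easy to realize in practice. A hypothetical scikit-learn sketch (dummy data and arbitrary sizes) in which PCA scores, rather than the raw X-variables, feed a small neural classifier:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 200))    # 120 samples x 200 X-variables (dummy data)
    y = rng.integers(0, 3, size=120)   # known class labels

    # Compress the X-variables to a handful of PCs, then feed those
    # scores to the network instead of the raw variables.
    clf = make_pipeline(PCA(n_components=5),
                        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                      random_state=0))
    clf.fit(X, y)
    print(clf.predict(X[:5]))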

Vector quantization. In vector quantization, not the individual filter bank output samples are quantized, but n-tuples of values. This technique is used in most current speech and video coding techniques. Recently, vector quantization has been applied in a scheme called TWIN-VQ ([Iwakami et al., 1995]). This system has been proposed for MPEG-4 audio coding (see [MPEG, 1997b]). [Pg.333]
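The core operation is compact: group the samples into n-tuples, then replace each tuple by the index of its nearest codeword. A minimal sketch with an illustrative signal and a random codebook (generic VQ, not the TWIN-VQ scheme itself):

    import numpy as np

    def vq_encode(blocks, codebook):
        # Squared Euclidean distance from every n-tuple to every codeword;
        # only the winning indices need to be stored or transmitted.
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def vq_decode(indices, codebook):
        return codebook[indices]

    signal = np.sin(np.linspace(0.0, 6.28, 64))     # stand-in for filter-bank output
    blocks = signal.reshape(-1, 4)                  # group the samples into 4-tuples
    rng = np.random.default_rng(1)
    codebook = rng.uniform(-1.0, 1.0, size=(8, 4))  # 8 random codewords of length 4
    recon = vq_decode(vq_encode(blocks, codebook), codebook).ravel()

The compression comes from transmitting only the indices; the decoder looks them up in a shared codebook. In practice the codebook is trained (e.g. with the LBG algorithm) rather than drawn at random.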

Some historically important artificial neural networks are Hopfield Networks, Perceptron Networks and Adaline Networks, while the most well-known are Backpropagation Artificial Neural Networks (BP-ANN), Kohonen Networks (K-ANN, or Self-Organizing Maps, SOM), Radial Basis Function Networks (RBFN), Probabilistic Neural Networks (PNN), Generalized Regression Neural Networks (GRNN), Learning Vector Quantization Networks (LVQ), and Adaptive Bidirectional Associative Memory (ABAM). [Pg.59]

Besides the classical Discriminant Analysis (DA) and the k-Nearest Neighbor (k-NN), other classification methods widely used in QSAR/QSPR studies are SIMCA, Learning Vector Quantization (LVQ), Partial Least Squares-Discriminant Analysis (PLS-DA), Classification and Regression Trees (CART), and Cluster Significance Analysis (CSA), the last of which was specifically proposed for asymmetric classification in QSAR. [Pg.1253]

Fig. 2.1. Outline of the hybrid algorithm. The unstructured array of sensors is clustered using multi-dimensional scaling (MDS) with a mutual information (MI) based distance measure. Then Vector Quantization (VQ) is used to partition the sensors into correlated groups. Each such group provides input to one module of an associative memory layer. VQ is used again to provide each module unit with a specific receptive field, i.e. to become a feature detector. Finally, classification is done by means of BCPNN.
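The first two stages of this outline might look as follows; everything in this sketch is an assumption for illustration (random stand-in data, a histogram-based MI estimate, the distance d = 1 - MI/MI_max, and k-means standing in for the VQ step):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.manifold import MDS
    from sklearn.metrics import mutual_info_score

    def mi_distance(S, bins=8):
        # Pairwise sensor distance derived from the mutual information
        # between discretized sensor traces.
        coded = [np.digitize(s, np.histogram_bin_edges(s, bins)) for s in S]
        n = len(coded)
        mi = np.array([[mutual_info_score(coded[i], coded[j]) for j in range(n)]
                       for i in range(n)])
        return 1.0 - mi / mi.max()

    rng = np.random.default_rng(2)
    S = rng.normal(size=(60, 500))   # 60 sensors x 500 time samples
    coords = MDS(n_components=2,
                 dissimilarity="precomputed").fit_transform(mi_distance(S))
    groups = KMeans(n_clusters=6, n_init=10).fit_predict(coords)  # the VQ step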
The robustness to sensor drift of the method under study was evaluated using a simple synthetic drift model. The gain of each of the 60 sensors was initialized to 1, after which the gain factor was subjected to 100 random-walk steps drawn from a Gaussian distribution with σ = 0.01. In the on-line learning condition while testing drift robustness, the last unsupervised vector quantization step was run continuously. [Pg.39]
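The drift model itself is only a few lines. A minimal sketch (the per-frame readings are hypothetical; only the 60 sensors and the σ = 0.01 Gaussian random walk come from the text above):

    import numpy as np

    rng = np.random.default_rng(3)
    n_sensors, n_steps = 60, 100
    # Every gain starts at 1 and then random-walks with Gaussian steps.
    steps = rng.normal(0.0, 0.01, size=(n_steps, n_sensors))
    gains = 1.0 + np.cumsum(steps, axis=0)            # gain trajectory per sensor
    readings = rng.uniform(0.0, 1.0, size=n_sensors)  # one hypothetical frame
    drifted = gains[-1] * readings                    # frame seen after the walk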

We again perform vector quantization on each subset of sensors and form Q code vectors for each group. This gives us a total of P × Q units in the intermediate layer between the input and associative layer. Each code vector corresponds to a receptive field, an example of which is seen in Figure 2.3b, where we have backtracked the connections between a single code vector and the input sensors in a setting where Q = 10. [Pg.40]
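A hedged sketch of this per-group quantization step, with k-means standing in for the VQ algorithm and all names invented:

    import numpy as np
    from sklearn.cluster import KMeans

    def group_codebooks(S, groups, Q=10):
        # One VQ codebook per sensor group: with P groups this yields
        # P x Q intermediate units.  A unit's receptive field is simply
        # the set of sensors in its group, recorded here for backtracking.
        units = []
        for g in np.unique(groups):
            members = np.where(groups == g)[0]   # sensors feeding this group
            km = KMeans(n_clusters=Q, n_init=10).fit(S[members].T)
            units += [(g, members, c) for c in km.cluster_centers_]
        return units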

Ueda, N., Nakano, R.: A new competitive learning approach based on an equidistortion principle for designing optimal vector quantizers. Neural Networks 7(8), 1211-1227 (1994)
Young, F.W.: Multidimensional scaling. Encyclopedia of Statistical Sciences 5, 649-659... [Pg.44]

Keywords: Color · Vector quantization · Competitive learning · Neural networks · Saliency · Binarization [Pg.212]

This paper has shown the capabilities of the MSCL algorithm for color quantization. MSCL is a neural competitive-learning algorithm that includes a magnitude function as a modulation factor of the distance used for the unit competition. Like other competitive methods, MSCL performs a vector quantization of the data. However, unlike most competitive methods, which are oriented to represent in more detail only those zones with higher data density, the magnitude function in MSCL can direct the competitive process to represent any region. [Pg.230]
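In outline, the magnitude-modulated competition could look like the sketch below; the precise form of the modulation used by MSCL may differ, and the magnitude function here is a hypothetical user-supplied callable:

    import numpy as np

    def mscl_step(x, units, magnitude, lr=0.1):
        # The competition distance is scaled by a per-unit magnitude term,
        # so high-magnitude regions attract code vectors regardless of
        # plain data density.
        d = np.linalg.norm(units - x, axis=1) * magnitude(units)
        w = int(np.argmin(d))             # winner under the modulated distance
        units[w] += lr * (x - units[w])   # move only the winner toward x
        return w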

As future work we intend to use MSCL for vector quantization in multichannel satellite images. In this application, the number of colors is replaced by the number of channels, so the dimensionality of the data increases considerably. By means of a magnitude map obtained from labelled image zones, MSCL will make it possible to orient the vector quantization toward regions of interest, such as specific crop areas. [Pg.230]

Ahalt, S., Krishnamurthy, A., Chen, P., Melton, D. Competitive learning algorithms for vector quantization. Neural Netw. 3(3), 277-290 (1990)... [Pg.230]

Martinetz, T., Berkovich, S., Schulten, K. Neural-gas network for vector quantization and its application to time-series prediction. IEEE Trans. Neural Netw. 4(4), 558-569 (1993)... [Pg.231]

Mohler, G., and Conkie, A. Parametric modeling of intonation using vector quantization. In Proceedings of the Third ESCA/IEEE Workshop on Speech Synthesis (1998), pp. 311-314. [Pg.590]


