Big Chemical Encyclopedia


Speech recognition

The telephone is also used in the development of interactive voice response (IVR) systems that support touch-tone or speech-recognition responses. IVR systems have been developed for subject randomization, drug assignment, and survey data collection. [Pg.601]

H. Sato, T. Takeuchi, and K. Sakai. Temporal cortex activation during speech recognition: an optical topography study. Cognition, 40:548-560, 1999. [Pg.370]

The performance of a head-mounted two-microphone adaptive noise-cancellation system was investigated by Weiss [Weiss, 1987] and by Schwander and Levitt [Schwander and Levitt, 1987]. In this system, an omnidirectional microphone was used for the speech signal, and a rear-facing hypercardioid microphone mounted directly above the speech microphone was used for the noise reference. In a room with a reverberation time of 0.4 sec, this system improved the speech recognition score for normal-hearing listeners from 34 percent correct in the unprocessed condition to 74 percent given... [Pg.150]

Schwander and Levitt, 1987] Schwander, T. and Levitt, H. (1987). Effect of two-microphone noise reduction on speech recognition by normal-hearing listeners. J. Rehab. Res. and Devel., 24:87-92. [Pg.277]

Ghitza, 1994] Ghitza, O. (1994). Auditory models and human performance in tasks related to speech coding and speech recognition. IEEE Trans. on Speech and Audio Processing, 2:115-132. [Pg.544]

Nocerino et al., 1985] Nocerino, N., Soong, F. K., Rabiner, L. R., and Klatt, D. H. (1985). Comparative study of several distortion measures for speech recognition. Speech Communication, 4:317-331. [Pg.557]

Peripherals for user input/output: card readers (obsolete), card punches (obsolete), line printers (almost obsolete), laser printers, terminals (also called CRTs, cathode-ray tubes), speech recognition devices, speech synthesizers, optical scanners, modems (modulators-demodulators, which piggyback digital data onto an acoustic carrier for telephone transmission), IR laser ports, and so on. [Pg.552]

Rabiner LR. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proc IEEE 1989;77(2):257-86. [Pg.29]

A small example will illuminate this setting. Users who participate in a collaborative session in an immersive VE are usually tracked, and they interact with the system through mouse-like input devices that provide six degrees of freedom plus additional buttons for specific commands. As stated above, what makes this setting collaborative is that more than one user interacts in the same shared virtual world. This means that all users can access, or at least have the possibility to interact with, the presented objects at the same time, usually with a mouse-like device, gestures, or speech recognition. For example, the system detects the push of a button on the 3D mouse over a virtual object and emits an event to the application, which can then interpret the button push over the object as an attempt to select that object for further manipulation, e.g., dragging it around. In a different implementation, the user application does not see the button push as such, but is instead presented with an event indicating the selection of an object within the world, and can react to that. [Pg.292]
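The two implementations above differ in where device events are interpreted: either the application receives the raw button press and derives the user's intent itself, or the system resolves intent first and delivers a higher-level selection event. A minimal sketch of this distinction, with hypothetical event types and handler names of my own invention, not from the cited system:

```python
from dataclasses import dataclass

@dataclass
class ButtonPressEvent:
    """Low-level event: the raw device action plus the object it occurred over."""
    device: str
    target_object: str

@dataclass
class SelectionEvent:
    """High-level event: the system has already resolved the press into a selection."""
    selected_object: str

def interpret(event):
    """Application-side handler for the first implementation style:
    a raw button press over an object is interpreted as selecting that object.
    In the second style, the application would receive SelectionEvent directly."""
    if isinstance(event, ButtonPressEvent):
        return SelectionEvent(selected_object=event.target_object)
    return event
```

The trade-off is the usual one for input abstraction: raw events give the application full flexibility, while pre-interpreted events keep interaction semantics consistent across devices (3D mouse, gesture, speech).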

XD Huang, Y Ariki, and MA Jack. Hidden Markov Models for Speech Recognition. Edinburgh University Press, Edinburgh, UK, 1990. [Pg.285]

L Rabiner and BH Juang. Fundamentals of Speech Recognition. Prentice-Hall, Englewood Cliffs, NJ, 1993. [Pg.295]

LR Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77:257-286, 1989. [Pg.295]

Rabiner L., Juang B. (1993) Fundamentals of speech recognition. Prentice Hall, New Jersey. [Pg.128]

A different approach has been applied to the classification of flame images; it provides information on the probability of coincidence with each of the previously known combustion states. This procedure is inspired by the cepstral analysis techniques commonly used for speech recognition [37]. Although sound recordings are a different kind of data from images, both can equally be transformed into the covariance matrices that this kind of method requires. [Pg.347]
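For concreteness, the core cepstral transform used in speech analysis is the inverse Fourier transform of the log-magnitude spectrum. The sketch below shows a generic real cepstrum; it is an illustration of the standard technique, not the flame-classification procedure from the cited work (the function name and epsilon are my own choices).

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of a signal frame: IFFT of the log-magnitude spectrum."""
    spectrum = np.fft.fft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small epsilon avoids log(0)
    return np.real(np.fft.ifft(log_mag))
```

In speech processing, the low-order cepstral coefficients capture the smooth spectral envelope, which is why they serve as compact features for recognition and, by analogy, for classifying other signal types.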

The patient must have adequate hand and finger control, or voice control, to manipulate the system controls. Future systems may circumvent the need for finger control via speech recognition, allowing patients lacking hand/finger control to use the system. [Pg.490]

Automatic speech-recognition technology may soon be capable of accurately translating ordinary spoken discourse into visually displayed text, at least in quiet environments; this may eventually be a major boon for the profoundly hearing impaired. At present, such systems must be carefully trained on individual speakers and/or must have a limited vocabulary [Ramesh et al., 1992]. Because there is much commercial interest in speech command of computers and vehicle subsystems, this field is advancing rapidly. [Pg.1177]





