Audio signals

The audio signal resulting from rf amplification and detection is amplified and detected in a phase-sensitive manner, using the original modulation phase as a reference. Both mixer vacuum tubes and mechanical choppers are used. The resulting DC voltage is fed into a recorder. For sufficiently small audio modulation amplitudes, the narrow-banding technique yields the first derivative of the resonance absorption or dispersion. [Pg.47]
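As a rough illustration of this narrow-banding idea (not taken from the source; the line shape, modulation frequency, and amplitudes below are arbitrary assumptions), the following sketch modulates a Lorentzian absorption line at an audio frequency and demodulates it against the modulation reference; for a small modulation amplitude the averaged output tracks the first derivative of the line:

import numpy as np

# Lorentzian absorption line centred at h0 (all quantities in arbitrary units).
def absorption(h, h0=0.0, width=1.0):
    return 1.0 / (1.0 + ((h - h0) / width) ** 2)

f_mod = 400.0                    # assumed audio modulation frequency, Hz
h_mod = 0.05                     # small field-modulation amplitude
t = np.linspace(0.0, 0.1, 4000)  # one detection interval per field point
reference = np.cos(2.0 * np.pi * f_mod * t)   # original modulation phase

fields = np.linspace(-5.0, 5.0, 201)          # slow sweep of the static field
lock_in_output = []
for h in fields:
    detected = absorption(h + h_mod * reference)          # modulated signal
    lock_in_output.append(np.mean(detected * reference))  # phase-sensitive DC value

# For small h_mod, lock_in_output is approximately proportional to the first
# derivative of the absorption line -- the trace fed to the recorder.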

Fig. 11. A powder pattern of Al NMR in α-Al2O3 at an NMR frequency of 7.20 Mc./second (H0 = 6490 gauss); the magnetic field increases from left to right, with a total scan of about 1500 gauss. This is the dispersion-mode audio signal at H1 = 0.5 gauss and is the absorption envelope except at certain points, as described in Section II,C,2 (109).
Figure 1.1 Overview of the basic philosophy used in the development of perceptual audio quality measurement techniques. A computer model of the subject is used to compare the output of the device under test (e.g. a speech codec or a music codec) with the ideal, using any audio signal. If the device under test must be transparent then the ideal is equal to the input.
Until now only the time-frequency smearing of the audio signal by the ear, which leads to an excitation representation, has been described. This excitation representation is generally measured in dB SPL (Sound Pressure Level) as a function of time and frequency. For the frequency scale one uses, in most cases, not the linear Hz scale but the non-linear Bark scale. This Bark scale is a pitch scale representing the... [Pg.21]
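For illustration, a commonly quoted Hz-to-Bark approximation (a Zwicker-style formula; the excerpt does not say which mapping is actually used) can be written as:

import numpy as np

def hz_to_bark(f_hz):
    # Zwicker-style approximation: z = 13*arctan(0.00076 f) + 3.5*arctan((f/7500)^2)
    f = np.asarray(f_hz, dtype=float)
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

# Example: analysis bin centre frequencies mapped onto the Bark (pitch) scale.
print(hz_to_bark([100.0, 1000.0, 4000.0, 16000.0]))   # roughly 1, 8.5, 17, 24 Bark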

After applying the time-frequency smearing operation, one gets an excitation pattern representation of the audio signal in (dB exc, seconds, Bark). This representation is then transformed to an internal representation using a non-linear compression function. The form of this compression function can be derived from loudness experiments. [Pg.23]
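The excerpt does not give the compression function explicitly; as a hedged stand-in, a power law with a small exponent of the kind that appears in loudness models can serve as a sketch:

import numpy as np

def compress_excitation(excitation_db, exponent=0.23):
    # Convert the excitation level (dB) to a power-like quantity and compress it.
    # The exponent value is an assumption; loudness models use values in this range.
    intensity = 10.0 ** (np.asarray(excitation_db, dtype=float) / 10.0)
    return intensity ** exponent

# Example: a small (time, Bark) excitation pattern in dB mapped to the
# compressed internal representation.
internal = compress_excitation(np.array([[20.0, 40.0], [60.0, 80.0]]))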

The internal representation of any audio signal can now be calculated using the transformations given in the previous section. The quality of an audio device can thus be measured with test signals (sinusoids, sweeps, noise, etc.) as well as real-life signals (speech, music), so the method is universally applicable. In general, audio devices are tested for transparency (i.e. the output must resemble the input as closely as possible), in which case the input and output are both mapped onto their internal representations and the quality of the audio device is determined by the difference between these input (the reference) and output internal representations. [Pg.26]
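A minimal sketch of this transparency test, with internal_representation() used here only as a placeholder for the smearing and compression stages described above:

import numpy as np

def internal_representation(signal):
    # Placeholder for the real chain: time-frequency smearing to an excitation
    # pattern in (time, Bark), followed by the compressive non-linearity.
    spectrum = np.abs(np.fft.rfft(signal))
    return np.log10(spectrum + 1e-12)

def quality_difference(reference_signal, output_signal):
    # Map input (reference) and output onto internal representations and score
    # the device by the size of the difference (smaller = closer to transparent).
    ref = internal_representation(reference_signal)
    out = internal_representation(output_signal)
    return float(np.mean(np.abs(ref - out)))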

Spectro-temporal weighting. Some spectro-temporal regions in the audio signal carry more information than others and may therefore be more important. For instance, one expects that silent intervals in speech carry no information and are therefore less important. [Pg.28]
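A simple weighting consistent with this remark (the threshold and weight values below are arbitrary assumptions) might down-weight near-silent frames:

import numpy as np

def frame_weights(frames, silence_threshold_db=-60.0):
    # frames: array of shape (n_frames, frame_length).
    energy_db = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    # Near-silent frames get a small weight; all others full weight.
    return np.where(energy_db < silence_threshold_db, 0.1, 1.0)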

The term Perceptual Entropy (PE, see [Johnston, 1988]) is used to define the lowest data rate needed to encode an audio signal without any perceptible difference from the original. An estimate of the PE (there is not yet enough theory to calculate a true PE) can be used to determine how easy or difficult it is to encode a given music item using a perceptual coder. [Pg.41]
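A much-simplified sketch of such an estimate, assuming a masking threshold per spectral line is already available from a separate psychoacoustic model (this is not Johnston's exact formula, only an illustration of the idea):

import numpy as np

def perceptual_entropy_estimate(frame, masking_threshold):
    # frame: one block of audio samples; masking_threshold: energy threshold per
    # rfft bin, assumed to come from a psychoacoustic masking model.
    spectrum_energy = np.abs(np.fft.rfft(frame)) ** 2
    margin = spectrum_energy / (masking_threshold + 1e-12)
    # Bits needed per spectral line, counting only components above the threshold.
    bits_per_line = np.maximum(0.0, 0.5 * np.log2(np.maximum(margin, 1e-12)))
    return np.sum(bits_per_line) / len(frame)   # rough bits per audio sample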

Any procedure designed to remove localized defects in audio signals must take account of the typical characteristics of these artifacts. Some important features common to many click-degraded audio media include ... [Pg.86]

Figure 4.13 Restored audio signal for figure 4.11 (different scale)...
In practice, it is important to emphasize that the various procedures proposed for updating the estimate of the noise characteristics in the context of speech enhancement [Boll, 1979, McAulay and Malpass, 1980, Sondhi et al., 1981, Erwood and Xydeas, 1990] are usually not applicable to audio signals: they rely on the presence of signal pauses, which are frequent in natural speech but not necessarily in musical recordings. The development of noise tracking procedures suited to audio signals thus requires a more precise knowledge of the noise characteristics in cases where the noise cannot be considered stationary. [Pg.104]
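A hedged sketch of the pause-based noise tracking the text alludes to: the noise spectrum estimate is refreshed only in frames classified as pauses, which is exactly the step that fails when, as in much music, such pauses never occur. The threshold and smoothing values are arbitrary assumptions:

import numpy as np

def update_noise_estimate(noise_psd, frame, pause_threshold_db=-50.0, alpha=0.9):
    # Refresh the noise power spectrum only when the frame looks like a pause.
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    energy_db = 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)
    if energy_db < pause_threshold_db:
        noise_psd = alpha * noise_psd + (1.0 - alpha) * spectrum
    return noise_psd   # unchanged if no pause was detected -- the problem for music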

The general Volterra and NARMA models suffer from two problems from the point of view of distortion correction: they are unnecessarily complex, and even after the parameters of the model have been identified it is still necessary to recover the undistorted signal by some means. In section 4.2 it was noted that audio signals are well represented by the autoregressive (AR) model defined by equation 4.1 ... [Pg.109]
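As an illustration, AR coefficients for an audio block can be estimated by least squares (the exact form of equation 4.1 is not reproduced in this excerpt; the usual AR model predicts each sample as a weighted sum of the previous P samples plus an excitation term):

import numpy as np

def fit_ar(x, order):
    # Predict each sample from the previous `order` samples; solve for the AR
    # coefficients by least squares and return the prediction error (excitation).
    x = np.asarray(x, dtype=float)
    rows = [x[i:len(x) - order + i] for i in range(order)]
    X = np.column_stack(rows[::-1])    # columns: x[n-1], x[n-2], ..., x[n-order]
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coeffs
    return coeffs, residual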

In this section, the model in Equation (9.18) is used to develop an analysis/synthesis system which will serve to test the accuracy of the sine-wave representation for audio signals. In the analysis stage, the amplitudes, frequencies, and phases of the model are estimated, while in the synthesis stage these parameter estimates are first matched and then interpolated to allow for continuous evolution of the parameters on successive frames. This sine-wave analysis/synthesis system forms the basis for the remainder of the chapter. [Pg.192]
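A minimal sketch of the synthesis side of such a sine-wave model, omitting the matching and interpolation stages described in the text; the frame length, sample rate, and partials in the example are arbitrary:

import numpy as np

def synthesize_frame(amps, freqs_hz, phases, frame_len, fs):
    # Rebuild one frame as a sum of sinusoids from the (amplitude, frequency,
    # phase) triples estimated in the analysis stage.
    n = np.arange(frame_len)
    frame = np.zeros(frame_len)
    for a, f, p in zip(amps, freqs_hz, phases):
        frame += a * np.cos(2.0 * np.pi * f * n / fs + p)
    return frame

# Example: a 20 ms frame at 44.1 kHz built from three partials.
frame = synthesize_frame([1.0, 0.5, 0.25], [440.0, 880.0, 1320.0],
                         [0.0, 0.3, 0.7], frame_len=882, fs=44100)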

[Godsill, 1997a] Godsill, S. J. (1997a). Bayesian enhancement of speech and audio signals which can be modelled as ARMA processes. International Statistical Review, 65(1):1-21. [Pg.260]

[Godsill and Rayner, 1992] Godsill, S. J. and Rayner, P. J. W. (1992). A Bayesian approach to the detection and correction of bursts of errors in audio signals. In Proc. IEEE Int. Conf. Acoust., Speech and Signal Proc., volume 2, pages 261-264, San Francisco, CA. [Pg.260]

[Johnston, 1989b] Johnston, J. D. (1989b). Transform coding of audio signals using perceptual noise criteria. IEEE Journal on Selected Areas in Communications, 6:314-323. [Pg.264]

[Krasner, 1979] Krasner, M. A. (1979). Digital encoding of speech and audio signals based on the perceptual requirements of the auditory system. Technical Report 535, Massachusetts Institute of Technology, Lincoln Laboratory, Lexington. [Pg.266]

