The auditory system is a sound analyzer, and our models of this analysis are based on physical systems. Frequency is the primary aspect of sound that the auditory system analyzes. Consider listening to several notes played together on a piano. Often, it is relatively easy to determine the different pitches. However, the sound that arrives at the auditory system is one complex waveform that is the sum of all the frequencies making up this composite sound (recall the role of Fourier's theorem). The primary frequencies of this complex sound are those corresponding to the basic vibration of each piano string struck when the piano keys are pressed. These primary frequencies produce the perceived pitches of the various notes. The fact that we perceive these pitches means that the auditory system has determined the various frequency components of the complex waveform that arrived at the auditory system as a single sound.
When the waveform is described in terms of the pressure or displacement waveform (Fig. 1), it is being described in the time domain. If it is described in terms of the frequency components that constitute the waveform, it is being described in the frequency domain. Figure 2a shows a complex waveform in the time domain. This waveform is the sum of three sinusoids. The amplitudes and phases of these three sinusoids are shown in the amplitude and phase spectra as a function of frequency. These spectra form the frequency-domain description of the complex wave.
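The two descriptions can be sketched numerically: summing a few sinusoids gives the time-domain waveform, and a Fourier transform recovers the frequency-domain amplitude spectrum. The particular frequencies, amplitudes, and phases below are arbitrary illustrative choices, not the values plotted in Fig. 2a.

```python
import numpy as np

fs = 1000                 # sampling rate, Hz
t = np.arange(fs) / fs    # one second of time samples

freqs = [100, 200, 300]          # component frequencies, Hz (illustrative)
amps = [1.0, 0.5, 0.25]          # component amplitudes
phases = [0.0, np.pi / 2, np.pi]  # starting phases, radians

# Time-domain description: the sum of the three sinusoids.
wave = sum(a * np.sin(2 * np.pi * f * t + p)
           for f, a, p in zip(freqs, amps, phases))

# Frequency-domain description: the amplitude spectrum via the FFT,
# normalized so each peak equals the sinusoid's amplitude.
spectrum = np.abs(np.fft.rfft(wave)) / (len(wave) / 2)
peak_freqs = np.fft.rfftfreq(len(wave), 1 / fs)[spectrum > 0.1]
print(peak_freqs)  # → [100. 200. 300.]
```

The three component frequencies re-emerge from the single summed waveform, which is the operation the auditory system is modeled as performing.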
The most general model for frequency analysis by the auditory system is filtering. Although there are no actual physiological filters in the auditory system, the results of the biomechanical and neural actions of the auditory periphery behave as if there were. The important filter for the purposes of auditory analysis is the bandpass filter, in which all frequency components of a complex sound that lie within the passband of the filter are passed unaltered, while components with frequencies above and below the passband have their levels attenuated and their starting phases altered (in fact, the components outside the passband are delayed, resulting in a phase shift). The levels decrease at a constant roll-off rate, in dB per octave change in frequency away from the cutoff frequencies of the passband (an octave is a doubling of frequency).
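The constant dB-per-octave roll-off can be sketched as a simple calculation. The attenuation of a component outside the passband depends only on how many octaves it lies from the cutoff; the 12 dB/octave rate used here is an assumed example value, not a property of any particular auditory filter.

```python
import math

def rolloff_attenuation_db(freq, cutoff, rolloff_db_per_octave=12.0):
    """Attenuation (dB) applied to a component at `freq` by an idealized
    filter skirt with cutoff frequency `cutoff`, falling off at a constant
    rate per octave. The 12 dB/octave rate is an illustrative choice."""
    if freq <= 0 or cutoff <= 0:
        raise ValueError("frequencies must be positive")
    octaves = abs(math.log2(freq / cutoff))  # octaves away from the cutoff
    return rolloff_db_per_octave * octaves

# One octave above a 1000-Hz upper cutoff (2000 Hz) is attenuated 12 dB;
# two octaves above (4000 Hz, another doubling) is attenuated 24 dB.
print(rolloff_attenuation_db(2000, 1000))  # → 12.0
print(rolloff_attenuation_db(4000, 1000))  # → 24.0
```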
Bandpass filters can be used to estimate the amplitude spectrum of a complex waveform. If the waveform contains frequency components that are in the
in the input. For instance, if the input consists of the sum of two frequency components, f1 and f2, the nonlinear output contains frequencies equal to mf1 ± nf2, where m, n = 1, 2, 3, 4, etc. The frequency components that are integer multiples of f1 and f2 (mf1 and nf2) are harmonics, and if they are audible they are called aural harmonics. The terms mf1 + nf2 are summation tones, and the terms mf1 - nf2 are difference tones. The cubic difference tone, 2f1 - f2 (m = 2, n = 1), is a significant nonlinear auditory component. A nonlinear processor distorts its input by adding these nonlinear frequency components to the original input components.
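The combination frequencies a nonlinearity adds for a two-tone input can be enumerated directly from the mf1 ± nf2 rule above. The function name and the particular input frequencies below are illustrative choices.

```python
def combination_tones(f1, f2, max_order=2):
    """Return the sorted positive combination frequencies m*f1 +/- n*f2
    for m, n = 1..max_order (pure harmonics mf1 and nf2 not included)."""
    tones = set()
    for m in range(1, max_order + 1):
        for n in range(1, max_order + 1):
            tones.add(m * f1 + n * f2)       # summation tones
            if m * f1 - n * f2 > 0:
                tones.add(m * f1 - n * f2)   # difference tones
    return sorted(tones)

# For f1 = 1000 Hz and f2 = 1200 Hz, the cubic difference tone
# 2*f1 - f2 = 800 Hz appears among the distortion products.
print(combination_tones(1000, 1200))  # → [800, 2200, 3200, 3400, 4400]
```

Listeners presented with two tones at 1000 and 1200 Hz can often hear the 800-Hz cubic difference tone even though no 800-Hz component is present in the input, which is the behavioral evidence for this nonlinearity.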