Temporal Modulation

Sounds from most sources vary in amplitude (amplitude modulation) and/or frequency (frequency modulation) over time, and the modulation of one sound source often differs from that of another. Thus, in some cases the ability to process sound from a source must be resilient to modulation, while in other cases differences in modulation may help segregate one sound source from another. Frequency modulation per se does not appear to be a useful cue for sound source segregation, but amplitude modulation may be.
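
To make the distinction concrete, the sketch below (in Python with NumPy) generates a sinusoidally amplitude-modulated tone and a sinusoidally frequency-modulated tone; the sample rate, carrier frequency, modulation rate, depth, and excursion are illustrative assumptions, not values from the text.

```python
import numpy as np

fs = 44100                          # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)       # 1 s of time samples

fc = 1000.0                         # carrier frequency (Hz), illustrative
fm = 8.0                            # modulation rate (Hz), illustrative
m = 0.8                             # AM depth (0 = none, 1 = full)

# Amplitude modulation: a slowly varying envelope scales the carrier.
am_tone = (1 + m * np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Frequency modulation: the instantaneous frequency swings around fc
# by +/- df at the rate fm, while the amplitude stays constant.
df = 100.0                          # frequency excursion (Hz), illustrative
fm_tone = np.sin(2 * np.pi * fc * t + (df / fm) * np.sin(2 * np.pi * fm * t))
```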

Much of the work on auditory scene analysis has been done with a paradigm referred to as auditory stream fusion or auditory stream segregation. A typical experiment may involve two tones of different frequencies (f1 and f2) that are turned on and off (turning a sound on and off is a form of amplitude modulation). Suppose that the tone of frequency f1 is turned on and off out of phase with the tone of frequency f2, so that the sound alternates back and forth between f1 and f2. Under some conditions, listeners report that they perceive a single sound source whose pitch is alternating. In other conditions, they report perceiving two separate sound sources, each with its own pulsating pitch, as if there were two auditory streams, one for each source, running side by side. The conditions that lead to stream segregation (the perception of two sources) as opposed to stream fusion (one source) provide valuable information about the cues the auditory system may use to sort the various sound sources in a complex auditory scene. Spectral differences are potent cues for stream segregation, but differences in spatial parameters (e.g., interaural time differences), modulation patterns, and timbre can also support auditory stream segregation.
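
A minimal sketch of such an alternating-tone stimulus follows; the tone duration and the two frequencies are illustrative assumptions rather than values from the text.

```python
import numpy as np

fs = 44100                       # sample rate (Hz), assumed
tone_dur = 0.1                   # each tone burst lasts 100 ms, illustrative
f1, f2 = 500.0, 2000.0           # a large frequency separation favours segregation

def tone_burst(freq):
    """One gated tone burst; gating a tone on and off is amplitude modulation."""
    t = np.arange(0, tone_dur, 1 / fs)
    return np.sin(2 * np.pi * freq * t)

# f1 and f2 are turned on and off out of phase, so the sequence alternates
# f1, f2, f1, f2, ...  Small frequency separations tend to be heard as one
# alternating stream; large separations as two pulsating streams.
cycle = np.concatenate([tone_burst(f1), tone_burst(f2)])
sequence = np.tile(cycle, 20)
```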

Sounds that share a common pattern of amplitude modulation (AM) are more likely to have been produced by a single sound source than by different sources. Thus, common AM is a potential cue for sound source determination. When a complex masker contains two spectrally separated masking stimuli (e.g., two narrow bands of noise in different regions of the spectrum), the detection of a signal whose frequency is at the spectral center of one of the maskers depends on the similarity of the patterns of AM imposed on the two masking stimuli. If both masking stimuli have the same AM pattern (the maskers are comodulated), then the detection threshold is lower than if the AM patterns differ (the maskers are not comodulated). The reduction in detection threshold due to comodulation is called comodulation masking release (CMR), as shown in Fig. 14. Models of CMR often assume that comodulation helps the auditory system determine the quiet periods in the masker during which the signal would be easier to detect.
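
The construction of comodulated versus uncomodulated maskers can be sketched as follows. This is a simplified illustration, not the exact stimuli of the CMR literature: slowly fluctuating envelopes imposed on tonal carriers stand in for narrowband noise, and all frequencies, rates, and levels are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(0, 0.5, 1 / fs)             # 500-ms maskers, illustrative

def slow_envelope():
    """A positive, slowly fluctuating envelope (a stand-in for the envelope
    of a narrowband noise) built from a few low-rate random-phase sinusoids."""
    env = sum(np.cos(2 * np.pi * r * t + rng.uniform(0, 2 * np.pi))
              for r in (4.0, 7.0, 11.0))
    return 1 + 0.9 * env / np.max(np.abs(env))

f_target, f_cue = 1000.0, 3000.0           # spectrally separated masker centres
env = slow_envelope()

target_band = env * np.sin(2 * np.pi * f_target * t)
cue_comod = env * np.sin(2 * np.pi * f_cue * t)                # same envelope: comodulated
cue_uncomod = slow_envelope() * np.sin(2 * np.pi * f_cue * t)  # independent envelope

# The signal to be detected sits at the spectral centre of the target band.
signal = 0.05 * np.sin(2 * np.pi * f_target * t)

comodulated_masker_plus_signal = target_band + cue_comod + signal
uncomodulated_masker_plus_signal = target_band + cue_uncomod + signal
```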

Another example of the role of common amplitude modulation concerns the detection of a change in the depth of AM (not unlike the experiments used to measure the TMTF). The ability to detect a change in the depth of AM of a tone with a particular carrier frequency (the probe tone) is not changed if a second, unmodulated tone of a different carrier frequency is added to the probe tone. Since tones of different frequencies do not interfere with each other in masking experiments, this is not a surprising outcome. However, if the two tones are modulated with the same pattern, detecting the change in the depth of AM imposed on the probe tone becomes more difficult, as shown in Fig. 15. The elevation of the threshold for detecting a change in AM depth due to comodulation is referred to as modulation detection interference (MDI). It is as if the common modulation fuses the two tones into a single perceived source, making it more difficult to process the AM of either component of that source. If so, making the modulation patterns different should no longer allow the two tones to be fused as if produced by a common source, and the amount of MDI should decrease; this is what happens, as shown on the right of Fig. 15.
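
A sketch of how such MDI stimuli might be built follows; the carrier frequencies, modulation rates, and depths are illustrative assumptions. The probe tone carries a small change in AM depth that the listener must detect, while an interfering tone at a remote carrier frequency carries either the same or a different modulation pattern.

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.5, 1 / fs)

def am_tone(fc, depth, rate):
    """A sinusoidally amplitude-modulated tone with carrier fc (Hz),
    modulation depth 'depth' (0-1), and modulation rate 'rate' (Hz)."""
    return (1 + depth * np.sin(2 * np.pi * rate * t)) * np.sin(2 * np.pi * fc * t)

# Probe tone: the task is to detect the increase in modulation depth.
probe_standard = am_tone(1000.0, 0.5, 10.0)
probe_deeper = am_tone(1000.0, 0.6, 10.0)

# Interfering tone at a remote carrier frequency.
interferer_same = am_tone(4000.0, 0.5, 10.0)   # same 10-Hz pattern -> large MDI
interferer_diff = am_tone(4000.0, 0.5, 3.0)    # different pattern -> less MDI
```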


Figure 14 Both the basic CMR task and results are shown. (Bottom) The time-domain waveforms for the narrowband maskers (target and cue bands) and the amplitude spectra for the maskers and the signal are shown in schematic form. The dotted line above each time-domain waveform depicts the amplitude envelope of the narrowband noise. The listener is asked to detect a signal (S), which is always added to the target band. In the target-band-alone condition, the signal is difficult to detect. When a cue band is added to the target band such that it is located in a different frequency region than the target band and has an amplitude envelope that differs from (is not comodulated with) that of the target band, there is little change in threshold from the target-band-alone condition. However, when the target and cue bands are comodulated, the threshold is lowered by approximately 12 dB, indicating that the comodulated condition makes it easier for the listener to detect the signal. The waveforms are not drawn to scale (from Yost, 2000).

Experiments on CMR and MDI demonstrate the ability of the auditory system to integrate information across a sound's spectrum, and experiments on stream segregation demonstrate integration of information over time. Both spectral and temporal integration are required for analyzing the complex auditory scenes of the real world.
