in three-dimensional space (azimuth or the left-right plane, vertical or the up-down plane, and distance or the near-far plane). It appears that the auditory system uses different acoustic cues to compute the source's location in each plane.

A sound arriving at a listener's ears from a source lying in the azimuthal plane reaches one ear before the other, producing an interaural time difference that increases as the source moves farther from the midline toward one ear. The sound arriving at the far ear is also less intense than that arriving at the near ear because, over a wide range of frequencies, the head casts a sound shadow that lowers the level at the far ear. As with the interaural time difference, the interaural level difference increases as the sound moves in azimuth toward one ear. Interaural time and level differences are thus the cues used to locate sound sources in the azimuthal plane. At the midline, sound sources separated by as little as 1° of angle can be discriminated; interaural time differences as small as 10 microseconds and interaural level differences as small as 0.5 dB can be discriminated. Because of the way sound interacts with the dimensions of the head, the interaural differences are frequency dependent: sources producing low frequencies or slow amplitude modulations are located on the basis of interaural time differences, and sources producing high frequencies are located on the basis of interaural level differences (the duplex theory of sound localization).
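The time cue described above can be illustrated with Woodworth's classic spherical-head approximation of the interaural time difference. This is a sketch, not a measurement: the head radius and speed of sound below are assumed round numbers, and the formula is only a geometric idealization of a real head.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a source at a
    given azimuth, using Woodworth's spherical-head model:
        ITD = (r / c) * (sin(theta) + theta)
    where theta is azimuth in radians (0 = straight ahead, 90 = opposite
    one ear). Head radius and sound speed are assumed typical values."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# The ITD is zero at the midline and grows as the source moves toward
# one ear, reaching roughly 650-700 microseconds at the side.
print(f"midline: {woodworth_itd(0) * 1e6:.0f} microseconds")
print(f"at the side: {woodworth_itd(90) * 1e6:.0f} microseconds")
```

The model predicts the largest ITD a human head can produce, which is why 10-microsecond discrimination thresholds correspond to azimuthal changes of only about a degree near the midline.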

If the head does not move, a sound directly in front of a listener produces no interaural time or level differences, as does any sound lying on the plane that runs from directly in front to directly overhead, directly behind, and directly below. This vertical, midsagittal plane is one "cone of confusion": a surface on which every location produces the same interaural differences. If interaural differences were the only cues used for sound localization, all sound sources located on a cone of confusion would be perceived at the same location. Although differentiating sound sources located on cones of confusion is more difficult when the head is kept stationary than when it is free to move, sources at different points on a cone of confusion are nonetheless perceived at different spatial locations. Thus, cues in addition to interaural differences appear necessary for determining vertical location.

The HRTF described previously arises because the spectrum of a sound traveling from a source to the tympanic membrane is altered by the body structures (especially the pinna) that the sound must pass over. This spectral alteration, and hence the spectral characteristics of the HRTF, depends on the location of the sound source relative to the body and head. The HRTF therefore contains potentially useful information about sound source location. The spectral alterations that produce the HRTF, especially in high-frequency regions, provide cues for vertical sound localization. In particular, the HRTF contains spectral peaks and valleys above about 3000 Hz that vary systematically as a function of vertical location; these high-frequency peaks and valleys are the likely cues for vertical sound localization.

Sound localization acuity is best in the azimuthal plane, poorer in the vertical direction, and poorer still for judgments of distance. The distance of a source can be inferred from loudness differences, provided the listener compensates for the other variables that cause a sound's level to change. Distance judgments also depend on the ratio of reflected sound (in reverberant environments) to sound arriving directly from the source: the greater the proportion of reflected sound, the farther away the source is likely to be.
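The loudness cue for distance follows, in a free (anechoic) field, the inverse-square law: sound level falls by about 6 dB for each doubling of distance. A minimal sketch of that relationship:

```python
import math

def level_drop_db(ref_distance_m, distance_m):
    """Drop in sound pressure level (dB) at distance_m relative to
    ref_distance_m, assuming free-field inverse-square spreading
    (no reflections or absorption)."""
    return 20.0 * math.log10(distance_m / ref_distance_m)

# Each doubling of distance lowers the level by ~6 dB.
print(f"{level_drop_db(1.0, 2.0):.2f} dB")
print(f"{level_drop_db(1.0, 4.0):.2f} dB")
```

This is why level alone is an ambiguous distance cue: a quiet source nearby and a loud source far away can produce the same level at the ear, so the listener must compensate for the source's intrinsic intensity.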

Most animals locate sounds in reverberant spaces very well, suggesting that the actual source is not often confused with the locations from which reflections (echoes) arrive. Sound from the source reaches the listener before sound from any reflection because the reflected path is longer. Thus, our ability to locate sound sources accurately in reverberant environments is most likely due to the earlier-arriving direct sound taking precedence over later-arriving reflected sound. Indeed, studies of the precedence effect (or law of the first wavefront) suggest that later-arriving reflected sound is suppressed relative to earlier-arriving direct sound.

Sounds presented binaurally over headphones are often lateralized inside the head rather than localized in the external environment. However, if the stimuli are carefully altered to reflect the spectral complexity of the HRTF before being presented over headphones, listeners can perceive virtual sounds in the external environment at locations appropriate for the HRTF-altered stimuli.
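Such virtual (binaural) rendering amounts to filtering a mono signal with the left- and right-ear HRTFs, typically by convolving it with their time-domain impulse responses. A minimal sketch, with toy impulse responses standing in for measured HRTFs:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right head-related impulse
    responses (time-domain HRTFs) to produce a binaural signal pair.
    Real HRIRs come from measurements; those used below are toy
    stand-ins, not actual data."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

# Toy HRIRs: the right ear gets a delayed, attenuated copy, crudely
# mimicking the interaural time and level differences for a source
# on the listener's left.
hrir_l = np.array([1.0, 0.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.6, 0.0])  # 2-sample delay, lower level

mono = np.random.default_rng(0).standard_normal(1000)
left, right = render_binaural(mono, hrir_l, hrir_r)
```

Measured HRIRs additionally carry the high-frequency spectral peaks and valleys described earlier, which is what lets headphone-presented sounds escape the head and appear at external locations.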

Segregating different sound sources might be aided by our sound-localization abilities. The ability to determine sound sources based on spatial separation has been referred to as the "cocktail party effect," after our ability to attend to one voice at a cocktail party amid many voices and other competing sound sources. The threshold for detecting a masked signal presented with one set of interaural time and level differences is lower if the masker has a different set of interaural differences than if the signal and masker share the same interaural differences. Since interaural differences determine a sound's azimuthal location, these stimulus conditions can be restated: the signal is at one azimuthal position and the masker at another, and the signal is easier to detect when it is at a different location than the masker. The difference in detection thresholds due to interaural time and level differences between signal and masker is referred to as the binaural masking-level difference (BMLD or MLD). MLDs suggest that spatial separation can aid sound source determination. However, spatial separation by itself is not a necessary and sufficient condition for sound source determination: the sound of an orchestra recorded with a single microphone and played over a single headphone provides no interaural differences, yet this condition interferes little with one's ability to identify the musical instruments (sound sources) in the orchestra.
