Okay, this
node is about
auditory localization in the sense of "
how do we tell where a sound is coming from?", not in the sense of "
where in the brain is auditory processing localized?" That being cleared up, let's begin...
Okay, for those of you who haven't noticed, we live in
three-dimensional space. This means that in order to locate a
sound in space, we have to be able to locate it on two
planes. It doesn't really matter which two, so long as they are
perpendicular, but we use the
horizontal and
vertical planes, with respect to our head, probably because that's the simplest way to do it if you have
bilateral symmetry.
First,
horizontal localization. Okay, I'm not going to spend much time on this, because
exquisitor has already covered it pretty well at
why we have two ears. Go vote it up. Now. ... Alright, now that you're back, a couple of brief points. Not only is the
time delay between when a sound reaches one ear versus the other used in localization, but also the differences in
volume due to head and
torso shadowing effects play a role.
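If you like numbers, here's a rough back-of-the-envelope sketch in Python of how big those time delays actually are, using the classic Woodworth spherical-head approximation (the 8.75 cm head radius and 343 m/s speed of sound are just typical textbook values, not anything measured in this writeup):

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time delay (seconds) for a source at a given
    azimuth, using the Woodworth spherical-head formula.
    0 degrees = straight ahead, 90 degrees = directly off to one side."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + np.sin(theta))

for az in (0, 15, 30, 60, 90):
    print(f"{az:3d} deg -> ITD ~ {itd_woodworth(az) * 1e6:.0f} microseconds")
```

Even a sound coming from directly off to one side only buys you roughly two thirds of a millisecond of delay, which gives you an idea of how sensitive the timing circuitry described further down has to be.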
In
why we have two ears,
Truffle makes the (incorrect) point that people can't localize sound vertically. It is true that we can't do it by the above
mechanism. The above mechanism only tells us where a sound is coming from in the
horizontal plane. That leaves a full
circle around our body in the
vertical plane which remains unknown. Which could be
evolutionarily disadvantageous. Imagine a
lion is about to jump out of a tree onto your head and
gobble you up. (I don't give a fuck whether or not lions can climb trees, so hush, you.) Now, if you hear him, but can only localize horizontally, odds are you won't pinpoint him in time to run the hell away. You'll probably end up getting
et. To avoid the lion-tree problem, we use a pretty slick method for vertical sound localization. To explain this, I will first need to describe some ear
anatomy.
There are two parts of your
ear that play a role in vertical sound localization. First is the
pinna. Second, the
tragus. Useful names, huh? Okay, I'll help you find them. Look in a mirror. See those two big
fleshy lobes coming off of your head? Those are your ears. Look a bit closer at one of them. Okay, now, the big oval-y part is the
pinna. It makes up most of your ear. However, if you look closely, you'll notice that in front of the opening to your ear, there's a little triangular bump-flap type thing. Poke around til you find it. That's your
tragus. Get a good look at the two, and how they are situated with respect to each other.
Now, when sound enters your ear, there are a couple of routes it can take. It can just go straight down through the
ear canal to the
eardrum, or it can reflect off the pinna, then off the tragus, and into the ear. Now would be a good time for a bad
ascii diagram....
reflected
/
/ /
|/ /direct
|\ /
p| \ /
i| \ /
n| \/
n| /\ |
a| / \|tragus
/ / /|
/ / / |
/ / / |
/ / / /
/ / / /
Hope you can sort that out.
Okay, now, because of the way your pinna is shaped, that extra bit of distance that the
reflected sound wave travels changes, depending on where a sound is located in the vertical plane. The distance is shorter at lower
elevations, and longer at higher
elevations. But all you really need to remember is that it varies. Now, when (
natural) sounds enter your ears, they are generally composed of a range of
frequencies. When the reflected sound enters the ear with the direct sound, frequencies that have a
wavelength equal to twice the extra distance travelled by the reflected sound (that is, frequencies for which the extra path is half a wavelength) undergo
destructive interference. Sounds a bit complicated, I know. All this means is that there is a
dip in the perceived loudness of sounds at certain frequencies. If you move your head around just right, and pay close attention, you can hear the dip shift in frequency. It's most noticeable if you have a constant droning type of sound to listen to while you do it. Where the dip is located in the
frequency spectrum depends on the vertical location of the sound. The brain can recognize that dip and process it as a vertical location in space.
Pretty slick, eh?
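In case that's still fuzzy, here's a tiny Python sketch of the idea: a direct sound plus one delayed pinna/tragus reflection behaves like a comb filter, and the first notch sits where the extra path is half a wavelength, i.e. f_notch = c / (2 × extra distance). The 2 cm extra path length is just a made-up illustrative number, not a measured one:

```python
import numpy as np

speed_of_sound = 343.0   # m/s
extra_distance = 0.02    # extra path of the reflected sound, in metres (illustrative)

# First destructive notch: extra path = half a wavelength
f_notch = speed_of_sound / (2 * extra_distance)
print(f"first notch near {f_notch / 1000:.1f} kHz")

# Gain of direct + reflected sound (equal amplitudes assumed) across frequency
delay = extra_distance / speed_of_sound
for f in (1000, 2000, 4000, 8000, 8575, 12000):
    gain = abs(1 + np.exp(-2j * np.pi * f * delay))
    print(f"{f:6d} Hz -> gain {gain:.2f}")
```

Shrink the extra path (a sound coming from lower down, say) and the notch slides up the spectrum; lengthen it and the notch slides down. That moving dip is the cue your brain latches onto.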
Now that you're familiar with the sound cues used to localize sound, I'll go over the neural mechanisms involved, which are even cooler. I'll stick with horizontal localization, as the circuits for vertical localization are not well known, and even if they were, they would probably be difficult to describe here.
First, horizontal localization via time delays. The time delay method of localization is generally referred to as ITD (interaural time delay). The neural circuit for computation of ITD looks something like this:
1 2 3 4 5 6 7 8
right ear input o==========================================
| | | | | | | |
| | | | | | | |
| | | | | | | |
* * * * * * * *
| | | | | | | |
| | | | | | | |
| | | | | | | |
==========================================o left ear input
8 7 6 5 4 3 2 1
The bold lines are the
axons of neurons which receive input from the left and right ear. The neurons transmit information from a single
frequency only. There are circuits like this for many different frequencies, and they occur on both sides of the brain. The signals coming in on these
axons are
phase locked. This means that the
action potentials occur in a time course which is in phase with the sound frequency that they encode. While there is not a spike for every cycle of the signal, the spikes that do occur are always in phase with the frequency. This allows the circuit to function during the middle of a sound, not just at its
onset. The asterisks (*) represent neurons which act as
coincidence detectors. This means that they only fire when they receive input from both the left and right neurons nearly
simultaneously. To understand how the circuit works, imagine a
sound source which is closer to the left ear. The left ear signal will reach the circuit slightly before the right ear signal, because the sound reaches the left ear first. The action potentials will then propagate through each branch of the neurons in the order indicated by the numbers. In this case the left ear signal is slightly ahead of the right ear signal. So, say that the left ear signal is at 4 when the right ear signal enters the circuit at 1. So far, none of the central
neurons has fired yet. Now, when the left input is at its 5, the right input is at its 2. A fraction of a
millisecond later, the right input is at its 3 and the left input is at its 6. These two processes feed to the same neuron, so it fires. This is interpreted by the brain as a location, an example of
place coding. So the *'s on the far left in this diagram fire to represent sounds on the left of the body, and those on the right fire to represent sounds on the right of the body. Simple but beautiful.
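If you'd rather see it in code, here's a toy Python version of that delay line with its eight coincidence detectors. None of the numbers are physiological; they're just picked so you can watch the place code pop out:

```python
n_taps = 8          # the eight * neurons in the diagram
tap_delay = 100e-6  # conduction delay per segment of axon, in seconds (made up)
itd = 300e-6        # sound is closer to the left ear, so the left spike leads

left_spike_time = 0.0
right_spike_time = left_spike_time + itd

# Each detector sees the left input after `tap` segments of delay and the right
# input after the remaining segments (the two axons run in opposite directions).
best_tap, best_mismatch = None, float("inf")
for tap in range(n_taps):
    left_arrival = left_spike_time + tap * tap_delay
    right_arrival = right_spike_time + (n_taps - 1 - tap) * tap_delay
    mismatch = abs(left_arrival - right_arrival)
    if mismatch < best_mismatch:
        best_tap, best_mismatch = tap, mismatch

print(f"coincidence detector #{best_tap + 1} fires "
      f"(its two inputs arrive within {best_mismatch * 1e6:.0f} microseconds)")
```

Change the ITD and a different detector wins. Which detector fires is the location, which is all place coding means.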
Now, the interaural level difference (ILD) circuit looks something like this:
to higher processing centers
|
|
^ | ^
| | |
| | |
| | |
| | |
(*) | (*)
/ \ | / \
/ \ | / \
/ \ | / \
/ \ | / \
O O | O O
left-side right-side | left-side right-side
(excitatory) (inhibitory) | (inhibitory) (excitatory)
|
Left Hemisphere | Right Hemisphere
So, in the above circuits, the * neurons receive excitatory inputs from the ears on the same side of the body as the circuit (
ipsilateral) and inhibitory inputs from the opposite side of the body (
contralateral). Now, each of the * neurons, in
the theoretical absence of sound, has some spontaneous activity. The excitatory input increases that activity, and the inhibitory input decreases it. So, if a sound source is located on the left side of the body again, the sound will be louder in the left ear, and neurons on the left side of the brain which are involved with that sound will be more active. The left * neuron above will be dominated by the
excitatory input, and will be more active than 'normal', and the right * neuron will be dominated by the
inhibitory input, and be less active than 'normal'. For center inputs the inhibitory responses usually win out, and both
neurons are subdued. By comparing the levels in the two neurons, one can determine the horizontal location of the sound.
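Here's the same idea as a few lines of Python. The spontaneous rate and the gains are invented for illustration; the point is just the push-pull between the excitatory (ipsilateral) and inhibitory (contralateral) inputs:

```python
def star_neuron_rate(ipsi_db, contra_db, spontaneous=20.0, gain=1.0):
    """Firing rate of one * neuron: some spontaneous activity, pushed up by the
    same-side ear and pushed down by the opposite ear (all numbers invented)."""
    return max(spontaneous + gain * ipsi_db - gain * contra_db, 0.0)

# Sound off to the left: head shadowing makes it ~10 dB quieter in the right ear.
left_ear_db, right_ear_db = 60.0, 50.0

left_hemisphere = star_neuron_rate(ipsi_db=left_ear_db, contra_db=right_ear_db)
right_hemisphere = star_neuron_rate(ipsi_db=right_ear_db, contra_db=left_ear_db)

print(f"left * neuron:  {left_hemisphere:.0f} spikes/s (above its resting 20)")
print(f"right * neuron: {right_hemisphere:.0f} spikes/s (below its resting 20)")
```

The left * neuron ends up above its resting rate and the right one below it, and comparing the two gives you the horizontal location of the sound.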
Back to
how your brain works.