An NJIT engineer proposes a new model for the way humans localize sounds

November 05, 2019

One of the enduring puzzles of hearing loss is the decline in a person's ability to determine where a sound originates, a key survival faculty that allows animals - from lizards to humans - to pinpoint the location of danger, prey and group members. Even in modern life, finding a lost cell phone by making it ring with an application such as "Find My Device," only to discover it has slipped under a sofa cushion, relies on the minute differences in the ringing sound that reach each ear.

Unlike other sensory perceptions, such as feeling where raindrops hit the skin or being able to distinguish high notes from low on the piano, the direction of a sound must be computed; the brain estimates it by processing the difference in the sound's arrival time at the two ears, the so-called interaural time difference (ITD). A longstanding consensus among biomedical engineers is that humans localize sounds with a scheme akin to a spatial map or compass, with neurons aligned from left to right that fire individually when activated by a sound coming from a particular angle - say, 30 degrees leftward of the center of the head.
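
For a sense of the size of this cue, the sketch below (not from the study) approximates the ITD of a faraway source from simple two-ear geometry; the ear separation and speed of sound are assumed round numbers, and real heads add diffraction effects the formula ignores.

    import math

    SPEED_OF_SOUND = 343.0   # m/s, roughly room temperature
    EAR_SEPARATION = 0.18    # m, an assumed round number for an adult head

    def interaural_time_difference(azimuth_deg):
        """Approximate ITD in seconds for a distant source at the given azimuth.

        Uses the simple path-length-difference model d * sin(theta) / c;
        diffraction around the head is ignored in this sketch.
        """
        theta = math.radians(azimuth_deg)
        return EAR_SEPARATION * math.sin(theta) / SPEED_OF_SOUND

    # A source 30 degrees off center arrives roughly 260 microseconds earlier
    # at the nearer ear - a remarkably small difference for the brain to resolve.
    print(round(interaural_time_difference(30) * 1e6), "microseconds")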

But in research published this month in the journal eLife, Antje Ihlefeld, director of NJIT's Neural Engineering for Speech and Hearing Laboratory, proposes a different model based on a more dynamic neural code. The discovery offers new hope, she says, that engineers may one day design hearing aids - now notoriously poor at restoring a sense of sound direction - that correct this deficit.

"If there is a static map in the brain that degrades and can't be fixed, that presents a daunting hurdle. It means people likely can't "relearn" to localize sounds well. But if this perceptual capability is based on a dynamic neural code, it gives us more hope of retraining peoples' brains," Ihlefeld notes. "We would program hearing aids and cochlear implants not just to compensate for an individual's hearing loss, but also based upon how well that person could adapt to using cues from their devices. This is particularly important for situations with background sound, where no hearing device can currently restore the ability to single out the target sound. We know that providing cues to restore sound direction would really help."

What led her to this conclusion is a journey of scholarly detective work that began with a conversation with Robert Shapley, an eminent neurophysiologist at NYU, who remarked on a peculiarity of human binocular depth perception - the ability to determine how far away a visual object is - which also depends on a computation comparing the input received by the two eyes. Shapley noted that these distance estimates are systematically less accurate for low-contrast stimuli (images that are harder to distinguish from their surroundings) than for high-contrast ones.

Ihlefeld and Shapley wondered whether the same neural principle applies to sound localization: is it less accurate for softer sounds than for louder ones? That would depart from the prevailing spatial-map theory, known as the Jeffress model, which holds that sounds of all volumes are processed - and therefore perceived - the same way. Physiologists have long disagreed with that model, proposing instead that mammals rely on a more dynamic neural code: mammalian neurons tend to fire at different rates depending on directional signals, and the brain compares these rates across sets of neurons to dynamically build up a map of the sound environment.
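
The difference between the two schemes can be caricatured in a few lines of code. In a Jeffress-style place code, each neuron prefers one ITD and the decoded direction is the preference of the most active neuron, so scaling every firing rate up or down leaves the answer unchanged; in a rate code, direction is read from how strongly broadly tuned channels fire relative to one another, so overall level can matter. The snippet below is a toy illustration with assumed tuning parameters, not a model from the study.

    import numpy as np

    def jeffress_decode(itd_s, gain=1.0):
        """Place code: many coincidence detectors, each tuned to one ITD.
        The decoded ITD is the preferred ITD of the most active detector,
        so scaling all rates by 'gain' (sound level) does not move the answer."""
        preferred = np.linspace(-600e-6, 600e-6, 121)   # detectors' preferred ITDs (s)
        width = 100e-6                                  # assumed tuning-curve width (s)
        rates = gain * np.exp(-((itd_s - preferred) ** 2) / (2 * width ** 2))
        return preferred[np.argmax(rates)]

    def rate_decode(itd_s, gain=1.0):
        """Rate code: two broadly tuned channels whose firing rates rise or fall
        with ITD; direction is read from the difference between them. A naive
        readout calibrated for full level (gain == 1) is biased toward the
        midline when the sound is softer."""
        baseline, slope = 2.0, 2000.0                   # assumed channel parameters
        left = gain * (baseline + slope * itd_s)        # left-preferring channel
        right = gain * (baseline - slope * itd_s)       # right-preferring channel
        return (left - right) / (2 * slope)             # readout assumes gain == 1

    print(jeffress_decode(250e-6, gain=0.4))   # 0.00025 -> unaffected by level
    print(rate_decode(250e-6, gain=1.0))       # 0.00025 -> correct at full level
    print(rate_decode(250e-6, gain=0.4))       # 0.0001  -> biased when softer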

"The challenge in proving or disproving these theories is that we can't look directly at the neural code for these perceptions because the relevant neurons are located in the human brainstem, so we cannot obtain high-resolution images of them," she says. "But we had a hunch that the two models would give different sound location predictions at a very low volume."

They searched the literature for evidence and found only two studies that had recorded from neural tissue at such low sound levels. One was in barn owls - a species thought to rely on the Jeffress model, based on high-resolution recordings in the birds' brain tissue - and the other was in a mammal, the rhesus macaque, an animal thought to use dynamic rate coding. They then carefully reconstructed the firing properties of the neurons recorded in those earlier studies and used the reconstructions to estimate sound direction as a function of both ITD and volume.
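
To give a flavor of what estimating sound direction from reconstructed firing properties can mean, the sketch below uses a generic template-matching readout: tabulate mean firing rate against ITD, then decode a new response by picking the ITD whose tabulated rate is closest. The tuning curve and all numbers are invented placeholders rather than the published recordings, and this is not the reconstruction procedure from the eLife paper; it only illustrates why a change in level can shift a rate-based estimate.

    import numpy as np

    # Candidate ITDs (seconds) and a placeholder tuning curve: firing rate rises
    # roughly sigmoidally with ITD. The numbers are invented for illustration.
    itd_grid = np.linspace(-600e-6, 600e-6, 61)

    def mean_rate(itd_s, level_gain=1.0):
        return level_gain * 50.0 / (1.0 + np.exp(-8000.0 * itd_s))   # spikes/s

    # Templates built at full level; decoding picks the ITD whose template rate
    # is closest to the observed rate.
    templates = mean_rate(itd_grid, level_gain=1.0)

    def decode_itd(observed_rate):
        return itd_grid[np.argmin(np.abs(templates - observed_rate))]

    # A softer sound (lower gain) yields a lower rate for the same true ITD,
    # so a decoder calibrated at full level misreads the direction.
    true_itd = 300e-6
    print(decode_itd(mean_rate(true_itd, level_gain=1.0)))   # ~300 microseconds
    print(decode_itd(mean_rate(true_itd, level_gain=0.6)))   # pulled toward the midline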

"We expected that for the barn owl data, it really should not matter how loud a source is - the predicted sound direction should be really accurate no matter the sound volume - and we were able to confirm that. However, what we found for the monkey data is that predicted sound direction depended on both ITD and volume," she said. "We then searched the human literature for studies on perceived sound direction as a function of ITD, which was also thought not to depend on volume, but surprisingly found no evidence to back up this long-held belief."

She and her graduate student, Nima Alamatsaz, then enlisted volunteers on the NJIT campus to test their hypothesis, measuring how a sound's volume affects where people perceive it to originate.

"We built an extremely quiet, sound-shielded room with specialized calibrated equipment that allowed us to present sounds with high precision to our volunteers and record where they perceived the sound to originate. And sure enough, people misidentified the softer sounds," notes Alamatsaz.

"To date, we are unable to describe sound localization computations in the brain precisely," adds Ihlefeld. "However, the current results are inconsistent with the notion that the human brain relies on a Jeffress-like computation. Instead, we seem to rely on a slightly less accurate mechanism.

More broadly, the researchers say, their studies point to direct parallels between hearing and visual perception that have been overlooked until now, and suggest that rate-based coding is a basic underlying operation when computing spatial dimensions from two sensory inputs.

"Because our work discovers unifying principles across the two senses, we anticipate that interested audiences will include cognitive scientists, physiologists and computational modeling experts in both hearing and vision," Ihlefeld says. "It is fascinating to compare how the brain uses the information reaching our eyes and ears to make sense of the world around us and to discover that two seemingly unconnected perceptions - vision and hearing - may in fact be quite similar after all."
-end-
About New Jersey Institute of Technology:

One of only 32 polytechnic universities in the United States, New Jersey Institute of Technology (NJIT) prepares students to become leaders in the technology-dependent economy of the 21st century. NJIT's multidisciplinary curriculum and computing-intensive approach to education provide technological proficiency, business acumen and leadership skills. NJIT is rated an "R1" research university by the Carnegie Classification®, which indicates the highest level of research activity. NJIT conducts approximately $170 million in research activity each year and has a $2.8 billion annual economic impact on the State of New Jersey. NJIT is ranked #1 nationally by Forbes for the upward economic mobility of its lowest-income students and is ranked 53rd out of more than 4,000 colleges and universities for the mid-career earnings of graduates, according to PayScale.com. NJIT also is ranked by U.S. News & World Report as one of the top 100 national universities.

