Study looks at benefits of 2 cochlear implants in deaf children

February 13, 2007

MADISON -- Nature has outfitted us with a pair of ears for good reason: having two ears enhances hearing. University of Wisconsin-Madison scientists are now examining whether this is also true for the growing numbers of deaf children who've received not one, but two, cochlear implants to help them hear.

Led by Ruth Litovsky, an investigator in the UW-Madison Waisman Center, the team's research suggests that deaf children who have a cochlear implant in each ear more accurately locate sounds when they use both implants instead of one. Children with two implants also become more skilled at localizing sound over time.

The results were presented today (Feb. 13) at the Annual Midwinter Meeting of the Association for Research in Otolaryngology.

Information like this can be useful, says Litovsky, when doctors and parents are deciding whether a child should get one or two of the electronic devices, which allow deaf people to hear by bypassing the damaged inner ear, or cochlea, to stimulate the auditory nerve directly.

It's not a simple choice. A single implant and the required surgery can cost $50,000. The device also permanently damages the cochlea, which might prevent recipients from taking advantage of potentially superior treatments for deafness down the road.

Patients never received more than one implant until about ten years ago. Then, doctors began to fit people with two, hoping this would assist them in understanding speech, especially in "cocktail party" environments with lots of competing sounds. "But there are still many remaining questions about the actual extent of the benefits of having two cochlear implants," Litovsky says.

Only about three percent of the 100,000 people worldwide who currently wear implants have received two, she estimates.

Litovsky is an expert in binaural hearing, or hearing with two ears. "We try to understand how having two ears is helpful," she says. One main benefit: two ears make it easier to locate sounds. "If you close an ear, walk around and try to identify where sounds are coming from, it's very, very hard," she says.

To test whether a pair of cochlear implants aids this ability, Litovsky's team has, to date, studied 55 deaf children who received a second implant one to seven years after being fitted with their first.

When the research began, it appeared the group of 5- to 14-year-olds couldn't localize sounds at all, Litovsky says. The result prompted her to launch a longitudinal study designed not only to test their prowess at this task, but also to track how it changed over time.

In the "listening game" she has devised with her team, children face a semicircle of loudspeakers arranged at regular intervals, each with a picture attached. When speech or other kinds of sounds play from a speaker, the children are scored on their ability to identify the correct one by pointing to its picture.

In addition to completing the task while wearing both implants, the children were asked to remove the microphone and other external parts of one, rendering them deaf again in that ear.

"That turns out to be an interesting experience, because they don't like to remove an implant," says Litovsky. "We have to barter for that, with M&Ms or something else that motivates them."

Although variability existed among the children, the study indicates that most did develop the ability to locate speech and other sounds more accurately when using two cochlear implants versus one. This capability also increased with experience. "We're now seeing that the ability to localize sounds takes time to emerge," says Litovsky. "What seems to get better is the integration of the information from the two ears in the brain."

Another crucial question is whether children should receive both implants simultaneously (at the same time) or sequentially (at different times), she says. The study's results have implications here, as well.

"The children we're looking at received their implants sequentially," says Litovsky, "and we think that their brains took a very long time to combine the inputs from the two ears." Yet, the fact they learned to do so points to the brain's adaptability, or "plasticity," she adds. "It reveals that the brain is still open to input from an ear that was deaf for a very long time."

Litovsky emphasizes that her goal is not to tell parents or doctors whether two implants are better for children, but to work with families who have made that choice and study the outcomes.

"I think so far our work has helped inform clinicians about these decisions," she says. "So I hope in the future we'll be able to continue to do that." Litovsky's research is funded by the National Institute on Deafness and Other Communication Disorders.

Madeline Fisher, (608) 890-0465,

University of Wisconsin-Madison
