Hearing aid signal not clear? Then switch frequency to FM, finds UCI study

January 25, 2005

Irvine, Calif., Jan. 25, 2005 -- There's a reason why we listen to music on the FM dial of our radios - it just sounds better than it does on AM.

And this reason also holds true for cochlear implants and hearing aids. UC Irvine School of Medicine researchers have found that improving frequency modulation, or FM, reception on cochlear implants and hearing aids may increase the quality of life for the millions of Americans who use these devices.

Dr. Fan-Gang Zeng and his colleagues at UCI and the Peking Union Medical College Hospital in Beijing discovered that enhancing the detection of frequency modulation may significantly boost the performance of many hearing aids and automatic speech recognition devices by separating out and blocking background noise and by improving tonal recognition, which is essential to hearing music and certain spoken languages. Study results appear this week in the early online edition of Proceedings of the National Academy of Sciences.

Some 30 million Americans have some form of hearing loss, and about 4 million of them benefit from hearing aids or cochlear implants. But limitations in sound quality and the overamplification of background noise can hinder these devices' usefulness.

"Many hearing-aid - particularly cochlear-implant - users have trouble enjoying music or listening to conversation in a crowded room," said Zeng, research director of the Hearing and Speech Lab at UCI. "But we've found that FM modifications to both existing and future devices may overcome these difficulties."

Zeng, a leading expert in cochlear-implant research, and his colleagues looked into the reasons behind these limitations, focusing on two parameters of sound: amplitude (the height of a sound wave) and frequency (the number of sound waves per unit of time).

Thirty-four normal-hearing and 18 cochlear-implant subjects participated in the study. They were tested on three speech-perception tasks that are notoriously difficult for cochlear-implant users: speech recognition with a competing voice, speaker recognition and Mandarin-tone recognition. The researchers extracted the amplitude modulation (AM) and FM from a number of frequency bands in speech sounds and measured their relative contributions to speech recognition in acoustic and electric hearing.
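
In signal-processing terms, this kind of decomposition is commonly done with the Hilbert transform: the magnitude of a band's analytic signal gives its AM envelope, and the rate of change of its phase gives its FM track. The Python sketch below illustrates that standard approach only; the function name, band edges and filter settings are illustrative assumptions, not the study's actual code.

    # Illustrative sketch (not the study's code): split one frequency
    # band of a speech signal into its AM and FM components.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def am_fm_decompose(x, fs, band=(500.0, 1000.0), order=4):
        """Return the AM envelope and FM (instantaneous frequency, Hz)
        of one sub-band of signal x, sampled at fs Hz."""
        # Isolate the sub-band with a band-pass filter.
        sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
        sub = sosfiltfilt(sos, x)
        # Magnitude of the analytic signal = AM; phase derivative = FM.
        analytic = hilbert(sub)
        am = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        fm = np.diff(phase) * fs / (2.0 * np.pi)  # one sample shorter than am
        return am, fm

For a snippet of speech sampled at, say, 16 kHz, am traces the band's slow loudness contour while fm traces its fast pitch-like fluctuations; these are the two cues whose contributions the study compared.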

They found that AM works well in quiet environments but less well when background noise is present. FM, in contrast, enhances speech, tone and speaker recognition in the presence of noise and provides better overall tonal sound quality than AM does. Current cochlear implants extract only AM information, significantly limiting their performance in realistic listening situations.

These FM modifications, Zeng adds, could particularly assist the many people in Asia and Africa who speak tonal languages, such as Mandarin, in which tonal variations are vitally important. "As with your radio, music sounds better on the FM dial, and enhancing the FM reception on hearing devices can go a long way to helping people listen to and enjoy the beautiful music of their everyday lives in ways they've been unable to do," Zeng said.
-end-
Kaibo Nie, Ginger S. Stickney, Ying-Yee Kong, Michael Vongphoe and Ashish Bhargave of UCI and Chaogang Wei and Keli Cao of the Peking Union Medical College Hospital assisted with the study. The National Institutes of Health and the Chinese National Natural Science Foundation provided support.

http://today.uci.edu/news/release_detail.asp?key=1264

About the University of California, Irvine: The University of California, Irvine is a top-ranked public university dedicated to research, scholarship and community service. Founded in 1965, UCI is among the fastest-growing University of California campuses, with more than 24,000 undergraduate and graduate students and about 1,400 faculty members. The second-largest employer in dynamic Orange County, UCI contributes an annual economic impact of $3 billion.

UCI maintains an online directory of faculty available as experts to the media. To access, visit www.today.uci.edu/experts.

University of California - Irvine
