New hearing test can improve diagnosis of middle ear disorders

November 18, 1999

SAN FRANCISCO -- A new test developed by a Nebraska researcher and studied by scientists at Ohio University could offer doctors a better diagnostic tool than currently available exams for middle ear infections and other hearing disorders.

The technique, called wide-band reflectance, could help clinicians understand how middle ear infection, hardening of the ear bones, facial nerve paralysis or other middle ear disorders affect a patient's hearing at various frequencies.

Unlike the commonly used audiogram, the new technique doesn't rely on patients to respond to a series of tones. That poses a problem for small children -- a group highly prone to middle ear infection -- who may be unable to participate accurately in a hearing test.

In 34 patients with normal hearing, the wide-band reflectance test detected an acoustic reflex -- the middle ear's reaction to loud sounds -- at least 10 decibels lower on average than the standard hearing test, says Ohio University audiologist Patrick Feeney, who presented the study findings Friday at the American Speech-Language-Hearing Association's annual convention in San Francisco.

The findings indicate that doctors can use softer sounds in the ear canal to make a diagnosis. Some traditional exams may expose patients to intense noise during testing, which has reportedly caused hearing loss in rare cases, says Feeney, an assistant professor of hearing and speech sciences.

When the middle ear is exposed to loud sounds, a small muscle in the middle ear contracts, a response audiologists believe is a defense mechanism against hearing damage. When the muscle doesn't contract, doctors suspect that the patient could be suffering from a middle ear disorder.

"If we can indicate that a person's reflex is present, it indicates that the middle ear is working well, that the facial nerve -- which innervates the muscle -- is working and that the person has a functional hearing nerve," Feeney says.

However, the traditional clinical exam, which uses a single, low-frequency 226 hertz tone to gauge the ear's health, doesn't always offer the whole picture. This test might show that a patient has no acoustic reflex. But the wide-band reflectance technique, which measures how the middle ear reacts to frequencies ranging from 250 hertz to 8,000 hertz, could show that the ear does have a reflex when measured at higher frequencies, Feeney says. That could lead to a more accurate diagnosis of and treatment for the patient's problem.

Another advantage of the new technique: a patient complaining of a middle ear problem might show a normal reading on conventional tests, but wide-band reflectance could confirm the disorder at higher frequencies than the traditional tympanogram exam can detect.

"The reflectance technique allows the audiologist to measure the function of the middle ear over the frequency range important for hearing speech," he says.

The wide-band reflectance technique, developed by Douglas Keefe of the Boys Town National Research Hospital in Omaha, Neb., uses a series of eight chirping sounds emitted into the ear canal to determine how well the middle ear reacts to sounds that span the human speech range. Computer software analyzes the data, graphing the middle ear's performance. The procedure takes only a few seconds.
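The idea behind the measurement can be sketched in a few lines. In a toy illustration (not the clinical algorithm, and with made-up numbers rather than measured values), the middle ear is characterized by the fraction of incident acoustic power that is reflected back from the eardrum at each test frequency; a healthy ear absorbs more energy in the mid frequencies that matter for speech:

```python
# Toy sketch of the wide-band reflectance idea. All frequencies and
# power values below are illustrative assumptions, not clinical data.

def power_reflectance(incident_power, reflected_power):
    """Fraction of incident acoustic power reflected at the eardrum."""
    return reflected_power / incident_power

# Hypothetical measurements across part of the 250-8,000 Hz range
# described in the article: frequency (Hz) -> (incident, reflected) power.
measurements = {
    250:  (1.0, 0.90),   # low frequencies: most energy reflected
    1000: (1.0, 0.35),   # mid frequencies: the ear absorbs more
    4000: (1.0, 0.55),
    8000: (1.0, 0.80),
}

for freq, (p_in, p_out) in sorted(measurements.items()):
    r = power_reflectance(p_in, p_out)
    absorbed = 1.0 - r   # energy transmitted into the middle ear
    print(f"{freq:5d} Hz  reflectance={r:.2f}  absorbed={absorbed:.2f}")
```

A reflectance-versus-frequency curve like the one this loop prints is the kind of graph the analysis software produces; a curve that stays near 1.0 at all frequencies would suggest the middle ear is reflecting, rather than absorbing, most of the incoming sound.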

In the recent study, the wide-band reflectance method elicited a middle ear reaction at sound levels an average of 10.9 decibels lower than the traditional clinical method. The research grew out of an initial wide-band reflectance study on three subjects, published in October in the Journal of Speech, Language, and Hearing Research.

Though Keefe developed the wide-band reflectance method in 1992, research into its clinical applications is still under way. Feeney is using the technique to study patients with otitis media, or middle ear infections, and to determine what impact aging has on the middle ear.

The research on wide-band reflectance was funded by a grant from the American Speech-Language-Hearing Foundation. Feeney holds an appointment in the College of Health and Human Services.
Written by Andrea Gibson.

Ohio University
