UCSF Researchers Offer More Precise Explanation For A Language Disorder Affecting Millions Of Children

May 07, 1997

A startling new finding made by researchers from the University of California San Francisco strengthens evidence that a disorder which prevents millions of U.S. children from keeping up in the classroom stems from an inability to process sound normally, and not from problems that originate primarily in higher brain regions as many other scientists had earlier proposed.

The discovery, reported in the May 8 issue of the scientific journal Nature, holds the key to easier diagnosis of the disorder, called specific language impairment, the researchers say. The finding is already being incorporated into the design of new training programs to improve language-learning skills in language-impaired children.

Children with specific language impairment fall behind in language skills despite normal intelligence and apparently normal hearing. The source of the problem has been poorly understood, and standard speech therapy techniques used to treat the disorder have proved unsatisfactory.

The researchers found that children with specific language impairment are affected much more profoundly than unimpaired children by a phenomenon known as "masking." Masking refers to a natural limitation in the human ability to detect any particular sound that is presented simultaneously with, or within a small fraction of a second of, other "masking" sounds.

The new study refines scientific understanding of why language-impaired children find it difficult to distinguish individual sounds presented during normal speech. The study specifies more clearly the cause of the impairment, an impairment that in turn leads to slow progress in school.

Masking applies to speech because speech consists of a stream of auditory stimuli occurring in rapid sequence. In normal individuals, the masking of individual speech sounds by preceding or following sounds is not sufficient to impair speech processing.

However, Beverly Wright, PhD, found that, compared to normal children, those with the disorder in some masking situations required that tones be about 45 decibels more intense before they could be heard over masking noise. This is comparable to the difference between the sound level in a quiet room and at the side of a superhighway, Wright says.
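Because decibels are a logarithmic measure, a 45-decibel difference is far larger than it may sound: under the standard power-ratio definition of the decibel (not stated in the article, but the conventional one in acoustics), it corresponds to a sound-power ratio of about 31,600 to 1. A minimal sketch of that arithmetic:

```python
import math

def db_to_power_ratio(db: float) -> float:
    """Convert a decibel difference to a linear sound-power ratio.

    Uses the standard acoustics definition: dB = 10 * log10(P1 / P0),
    so the ratio is 10 raised to (dB / 10).
    """
    return 10 ** (db / 10)

# The language-impaired children needed tones roughly 45 dB more
# intense in some backward-masking conditions than controls did.
ratio = db_to_power_ratio(45)
print(f"45 dB corresponds to a power ratio of about {ratio:,.0f} to 1")
```

This is why the researchers' comparison to "a quiet room versus the side of a superhighway" is apt: each 10 dB step is a tenfold increase in sound power, so 45 dB spans more than four orders of magnitude.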

Wright conceived and conducted the study as a UCSF research scientist in a group led by Michael Merzenich, PhD, a professor of otolaryngology and a research scientist with the W.M. Keck Foundation Center for Integrative Neuroscience at UCSF. Wright, who is trained in acoustics and psychology, collaborated with researchers at the University of Florida to design and conduct testing. Wright recently completed her UCSF research and has taken a position as an assistant professor of audiology and hearing sciences at Northwestern University, in Evanston, Ill.

"For a tone to be detected in the presence of masking noise, it must be more intense than usual or else must be separated in time or frequency from the masking noise," Wright says. "We observed that the language-impaired children were consistently poorer than the controls at detecting a brief tone presented with a masking noise, particularly when the brief tone was turned on immediately prior to the masking noise. This phenomenon is called backward masking."

Although Wright studied only eight language-impaired children and eight unimpaired children for comparison, the differences in their responses were significant, she says. Even so, these differences would not be detected on the standard hearing tests given in school, she says.

Merzenich estimates that specific language impairment affects about ten percent of children. "This study provides a basis for early identification of this disorder, and helps us to define the problem more precisely," he says. "It also has implications for how we might better treat language-impaired children. Additionally, the study may lead to new insights regarding the neurological basis of this disorder and of language processing in general."

Although masking is a phenomenon that has been well known for decades to experts in psychoacoustics and is presented in standard textbooks in that field, it apparently had not been investigated until now in children with specific language impairment.

Wright and Merzenich say the new results offer a refinement and elaboration of earlier findings by Paula Tallal, PhD, professor and co-director of The Center for Molecular and Behavioral Neuroscience at Rutgers University.

Tallal was the first to propose and demonstrate that a difficulty in understanding rapidly changing speech frequencies is the fundamental source of language problems in children with specific language learning impairment. She found that the children are deficient in their ability to perceive certain consonant sounds in normal speech, such as "ba" and "da."

In his research, Merzenich studies changes in sensory systems as a way to learn more about the brain's "plasticity," which is its potential to establish new networks of connections between nerve cells in support of a variety of brain functions. His interest in this area earlier led to the development of the cochlear implant, a device that permits the deaf to hear sound and even to understand speech, and led to a collaboration with Tallal.

In two articles published in the January 3, 1996 issue of Science, the collaborators described studies of a new training technique they developed to speed sound-processing and improve language skills in children with specific language impairment. Their approach relies on slowing and amplifying the difficult-to-process sounds. The modified speech and other sounds used in the training were incorporated into colorful and engaging computer games.

The rapid improvements they measured, typically two grade levels in four weeks, were incompatible with the idea that the disorder could arise in higher processing centers in the brain, the researchers concluded.

Additional authors of the current Nature study include Linda Lombardino, professor of communication processes and disorders; Wayne King and Cynthia Puranik, graduate students; and Christiana Leonard, professor of neuroscience, all of the University of Florida.

Last year, UCSF granted an exclusive worldwide license for commercialization of the training technique and its computer software applications to Scientific Learning Corporation (SLC), a privately held San Francisco company.

For more information about Scientific Learning Corporation or its products, call (415) 296-1470, or contact its web site at http://www.scilearn.com. The address for the UCSF web site on language-based learning disabilities is http://www.ld-ucsf.edu.
-end-


University of California - San Francisco
