When it comes to hearing words, it's a division of labor between our brain's two hemispheres

March 04, 2019

Scientists have uncovered a new "division of labor" between our brain's two hemispheres in how we comprehend the words and other sounds we hear--a finding that offers new insights into the processing of speech and points to ways to address auditory disorders.

"Our findings point to a new way to think about the division of labor between the right and left hemispheres," says Adeen Flinker, the study's lead author and an assistant professor in the Department of Neurology at NYU School of Medicine. "While both hemispheres perform overlapping roles when we listen, the left hemisphere gauges how sounds change in time--for example when speaking at slower or faster rates--while the right is more attuned to changes in frequency, resulting in alterations in pitch."

Clinical observations dating back to the 19th century have shown that damage to the left, but not right, hemisphere impairs language processing. While researchers have offered an array of hypotheses on the roles of the left and right hemispheres in speech, language, and other aspects of cognition, the neural mechanisms underlying cerebral asymmetries remain debated.  

In the study, which appears in the journal Nature Human Behaviour, the researchers sought to elucidate how the brain processes speech, with the larger aim of furthering our understanding of the basic mechanisms of speech analysis and enriching the diagnostic and treatment tools for language disorders.

To do so, they created new tools to manipulate recorded speech, then used those recordings in a set of five experiments spanning behavioral testing and two types of brain recording: magnetoencephalography (MEG), which measures the tiny magnetic fields generated by brain activity, and electrocorticography (ECoG), which records activity directly from within the brain in volunteer surgical patients.
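To make the distinction concrete, here is a minimal sketch, in Python with NumPy and SciPy, of one way such a speech manipulation could work: blurring a spectrogram along the time axis degrades temporal modulations (how sounds change in time), while blurring along the frequency axis degrades spectral modulations (changes in pitch). The function name, STFT settings, and Gaussian smoothing here are illustrative assumptions, not the authors' actual tools.

```python
# Hypothetical illustration of degrading temporal vs. spectral modulations
# in a recording; not the study's actual toolset.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import stft, istft

def degrade_modulations(audio, fs, axis, sigma=2.0):
    """Blur the magnitude spectrogram along one axis, then resynthesize.

    axis=1 smooths across time (degrades temporal modulations);
    axis=0 smooths across frequency (degrades spectral modulations).
    `sigma` (in STFT bins) is an illustrative parameter, not from the study.
    """
    f, t, Z = stft(audio, fs=fs, nperseg=512)   # Z has shape (freq, time)
    mag, phase = np.abs(Z), np.angle(Z)
    mag = gaussian_filter1d(mag, sigma=sigma, axis=axis)
    _, degraded = istft(mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return degraded

# Example: one second of noise standing in for a speech recording.
fs = 16_000
recording = np.random.randn(fs)
temporally_degraded = degrade_modulations(recording, fs, axis=1)
spectrally_degraded = degrade_modulations(recording, fs, axis=0)
```

Under the study's account, listeners should rely more on the left hemisphere for the first kind of stimulus and more on the right for the second, though the specific parameters above are only a sketch.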

"We hope this approach will provide a framework to highlight the similarities and differences between human and non-human processing of communication signals," adds Flinker. "Furthermore, the techniques we provide to the scientific community may help develop new training procedures for individuals suffering from damage to one hemisphere."
The study's other authors were Werner Doyle, an associate professor in the Department of Neurosurgery at NYU School of Medicine; Ashesh Mehta, an associate professor of neurosurgery at the Donald and Barbara Zucker School of Medicine at Hofstra/Northwell; Orrin Devinsky, a professor in the Department of Neurology at NYU School of Medicine; and David Poeppel, the study's senior author, a professor of psychology and neuroscience at NYU and director of the Max Planck Institute for Empirical Aesthetics in Frankfurt.

This work was supported, in part, by grants from the National Institutes of Health (F32 DC011985, 2R01DC05660), the National Institute of Mental Health (R21 MH114166-01), and the Charles H. Revson Foundation.

DOI: 10.1038/s41562-019-0548-z

New York University
