Researchers find a neural 'auto-correct' feature we use to process ambiguous sounds

August 22, 2018

Our brains have an "auto-correct" feature that we deploy when re-interpreting ambiguous sounds, a team of scientists has discovered. The findings, which appear in the Journal of Neuroscience, point to new ways we use information and context to aid speech comprehension.

"What a person thinks they hear does not always match the actual signals that reach the ear," explains Laura Gwilliams, a doctoral candidate in NYU's Department of Psychology, a researcher at the Neuroscience of Language Lab at NYU Abu Dhabi, and the paper's lead author. "This is because, our results suggest, the brain re-evaluates the interpretation of a speech sound at the moment that each subsequent speech sound is heard in order to update interpretations as necessary.

"Remarkably, our hearing can be affected by context occurring up to one second later, without the listener ever being aware of this altered perception."

"For example, an ambiguous initial sound, such as 'b' and 'p,' is heard one way or another depending on if it occurs in the word 'parakeet' or 'barricade,' " adds Alec Marantz, principal investigator of the project, a professor in NYU's departments of Linguistics and Psychology, and co-director of NYU Abu Dhabi's Neuroscience of Language Lab, where the research was conducted. "This happens without conscious awareness of the ambiguity, even though the disambiguating information doesn't come until the middle of the third syllable."

Examples of these stimuli are available at http://lauragwilliams.github.io/postdiction_stimuli.

The study--the first to unveil how the brain uses information gathered after an initial sound is detected to aid speech comprehension--also included David Poeppel, a professor of Psychology and Neural Science, and Tal Linzen, an assistant professor in Johns Hopkins University's Department of Cognitive Science.

It's well known that the perception of a speech sound is determined by its surrounding context--in the form of words, sentences, and other speech sounds. In many instances, this contextual information is heard later than the initial sensory input.

This plays out in everyday life--when we talk, the speech we actually produce is often ambiguous. For example, when a friend says she has a "dent" in her car, you may hear "tent." Although this kind of ambiguity happens regularly, we, as listeners, are hardly aware of it.

"This is because the brain automatically resolves the ambiguity for us--it picks an interpretation and that's what we perceive to hear," explains Gwilliams. "The way the brain does this is by using the surrounding context to narrow down the possibilities of what the speaker may mean."

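To make this idea concrete, here is a toy sketch in Python of context-driven reinterpretation (the lexicon, probabilities, and function names are invented for illustration; this is not the authors' computational model). An onset equally consistent with "b" and "p" stays unresolved until the rest of the word arrives, at which point the best guess collapses onto one candidate word and, with it, one reading of the initial sound:

    # Toy illustration only -- not the authors' model. The listener keeps
    # the interpretation of an ambiguous first sound open and revises it
    # once later context arrives. Lexicon and probabilities are invented.
    LEXICON = {"barricade": "b", "parakeet": "p"}

    def onset_likelihood(phoneme):
        # The ambiguous onset is equally consistent with "b" and "p";
        # a clear onset would instead favor one phoneme.
        return 0.5

    def posterior_over_words(later_context):
        # Combine the ambiguous onset with the later context
        # ("arricade..." vs. "arakeet...") into a posterior over words.
        scores = {}
        for word, onset in LEXICON.items():
            context_likelihood = 1.0 if later_context in word else 0.0
            scores[word] = onset_likelihood(onset) * context_likelihood
        total = sum(scores.values())
        return {word: s / total for word, s in scores.items()}

    # Before the context arrives, both words (and both phonemes) are tied.
    # Hearing "...arricade" collapses the posterior onto "barricade", and
    # the initial sound is retroactively heard as "b".
    print(posterior_over_words("arricade"))  # {'barricade': 1.0, 'parakeet': 0.0}
    print(posterior_over_words("arakeet"))   # {'barricade': 0.0, 'parakeet': 1.0}

In the study's terms, the evidence for the ambiguous onset (the 0.5/0.5 split above) is held onto rather than discarded, so that context arriving up to a second later can still settle which sound was heard.
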
In the Journal of Neuroscience study, the researchers sought to understand how the brain uses this subsequent information to modify our perception of what we initially heard.

To do this, they conducted a series of experiments in which subjects listened to isolated syllables and similar-sounding words (e.g., barricade, parakeet). To gauge the subjects' brain activity, the scientists used magnetoencephalography (MEG), a technique that maps neural activity by recording the magnetic fields generated by the electrical currents of the brain.

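MEG recordings like these are often analyzed with time-resolved decoding: train a classifier at each time point after sound onset and ask when it first distinguishes the experimental conditions. Below is a minimal, self-contained sketch of that logic in Python on simulated data (the sensor counts, time step, effect size, and classifier are placeholders, not the study's actual pipeline or parameters):

    # Minimal sketch of time-resolved decoding on simulated "MEG" data.
    # Nothing here reproduces the study's pipeline; numbers are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_sensors, n_times = 200, 50, 120  # 5 ms steps: 0-595 ms
    times_ms = np.arange(n_times) * 5

    # Simulated epochs: noise everywhere, plus a condition-dependent
    # signal on a few sensors that switches on ~50 ms after sound onset.
    X = rng.normal(size=(n_trials, n_sensors, n_times))
    y = rng.integers(0, 2, size=n_trials)        # condition label per trial
    X[np.ix_(y == 1, np.arange(5), times_ms >= 50)] += 1.0

    # Cross-validate a classifier separately at every time point.
    accuracy = np.array([
        cross_val_score(LogisticRegression(max_iter=1000),
                        X[:, :, t], y, cv=5).mean()
        for t in range(n_times)
    ])

    # The earliest bin where decoding clearly beats chance (50%) marks
    # when the recorded signal begins to carry the information.
    above = accuracy > 0.6
    if above.any():
        print(f"Decoding first beats chance at ~{times_ms[above.argmax()]} ms")

Analyses in this spirit are what allow latencies, such as the 50-millisecond figure reported below, to be attached to when particular information is present in the neural response.
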
Their results yielded three primary findings:
  • The brain's primary auditory cortex is sensitive to how ambiguous a speech sound is at just 50 milliseconds after the sound's onset.
  • The brain "re-plays" previous speech sounds while interpreting subsequent ones, suggesting that interpretations are re-evaluated as the rest of the word unfolds.
  • The brain makes commitments to its "best guess" of how to interpret the signal after about half a second.


"What is interesting is the fact that this context can occur after the sounds being interpreted and still be used to alter how the sound is perceived," Gwilliams adds.

For example, the same sound will be perceived as "k" at the onset of "kiss" and as "g" at the onset of "gift," even though the difference between the words ("ss" vs. "ft") comes after the ambiguous sound.

"Specifically, we found that the auditory system actively maintains the acoustic signal in auditory cortex, while concurrently making guesses about the identity of the words being said," says Gwilliams. "Such a processing strategy allows the content of the message to be accessed quickly, while also permitting re-analysis of the acoustic signal to minimize hearing mistakes."
-end-
This research was supported by the NYU Abu Dhabi Research Institute (G1001), the European Research Council (ERC-2011-AdG 295810 BOOTPHON), France's National Research Agency (ANR-10-IDEX-0001-02 PSL, ANR-10-LABX-0087 IEC), and the National Institutes of Health (2R01DC05660).

DOI: https://doi.org/10.1523/JNEUROSCI.0065-18.2018

New York University
