
Stop, hey, what's that sound?

November 29, 2018

You're walking along a busy city street. All around you are the sounds of subway trains, traffic, and music coming from storefronts. Suddenly you realize that one of the sounds you're hearing is someone speaking, and that you have begun listening in a different way, paying attention to what they are saying.

How does the brain do this? And how quickly does it happen? Researchers at the University of Maryland are learning more about the automatic process the brain goes through when it picks up on spoken language.

Neuroscientists have understood for some time that our brains react differently to intelligible speech than to non-speech sounds or to talk in a language we do not know. When we hear someone speaking a familiar language, the brain quickly shifts to pay attention, process the speech sounds by turning them into words, and understand what is being said.

In a new paper published in the Cell Press/Elsevier journal Current Biology, "Rapid transformation from auditory to linguistic representations of continuous speech," the Maryland researchers were able to see where in the brain, and how quickly, in milliseconds, the brain's neurons transition from processing the sound of speech to processing the language-based words it carries.

The paper was written by Institute for Systems Research (ISR) Postdoctoral Researcher Christian Brodbeck, L. Elliot Hong of the University of Maryland School of Medicine, and Professor Jonathan Z. Simon, who has a triple appointment in the Departments of Biology and Electrical and Computer Engineering as well as ISR.

"When we listen to someone talking, the change in our brain's processing from not caring what kind of sound it is to recognizing it as a word happens surprisingly early," said Simon. "In fact, this happens pretty much as soon as the linguistic information becomes available."

When it is engaged in speech perception, the brain's auditory cortex analyzes complex acoustic patterns to detect the words that carry a linguistic message. It seems to do this so efficiently, at least in part, by anticipating what it is likely to hear: having learned which sound patterns occur most frequently in speech, the brain can predict what may come next. It is generally thought that this process, localized bilaterally in the brain's superior temporal lobes, involves recognizing an intermediate, phonetic level of sound.
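
This predictive idea can be made concrete with a toy model. The sketch below is illustrative only, not taken from the paper: it trains a bigram model over hypothetical phoneme sequences and scores each incoming phoneme by its "surprisal," that is, how unexpected it is given the previous sound. A listener with such a model can anticipate likely continuations.

    # Illustrative sketch (not the authors' model): a toy bigram model over
    # phonemes. Low surprisal = the sound was predictable from context.
    from collections import Counter
    from math import log2

    # Hypothetical training data: phoneme sequences, with '#' marking
    # word boundaries.
    training = [
        "# k ae t #", "# k ae b #", "# b ae t #", "# k ao l #", "# b ao l #",
    ]

    bigrams = Counter()
    prev_counts = Counter()
    for seq in training:
        phones = seq.split()
        for prev, nxt in zip(phones, phones[1:]):
            bigrams[(prev, nxt)] += 1
            prev_counts[prev] += 1

    def surprisal(prev, nxt):
        """-log2 P(next phoneme | previous phoneme), add-one smoothed."""
        vocab = len(prev_counts) + 1
        p = (bigrams[(prev, nxt)] + 1) / (prev_counts[prev] + vocab)
        return -log2(p)

    # At a word onset ('#'), 'k' is common in this toy lexicon, so it is
    # far less surprising than a phoneme that never starts a word here.
    print(surprisal("#", "k"))  # low surprisal: predictable onset
    print(surprisal("#", "l"))  # high surprisal: unexpected onset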

In the Maryland study, the researchers mapped and analyzed participants' neural activity while they listened to a single talker telling a story. They used magnetoencephalography (MEG), a common non-invasive neuroimaging method that employs very sensitive magnetometers to record the naturally occurring magnetic fields produced by electrical currents inside the brain. The subject typically sits under, or lies down inside, the MEG scanner, which resembles a whole-head hair dryer but contains an array of magnetic sensors.
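
The article does not spell out the analysis itself, but a standard way to relate a continuous MEG recording to continuous speech, widely used in this literature, is to fit a linear "temporal response function" (TRF) that predicts the neural signal from the stimulus at a range of time lags. The sketch below is a minimal, simulated illustration of that general technique, not a reconstruction of the authors' pipeline; all parameters are arbitrary.

    # Hedged sketch: estimating a temporal response function (TRF) with
    # ridge regression on simulated data.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 100                     # sampling rate in Hz (hypothetical)
    n = 60 * fs                  # one minute of data
    lags = np.arange(0, 30)      # 0-290 ms of stimulus history

    stimulus = rng.standard_normal(n)        # e.g., a speech envelope
    true_trf = np.exp(-lags / 8.0) * np.sin(lags / 3.0)

    # Lagged design matrix: X[t, k] = stimulus[t - k].
    X = np.zeros((n, len(lags)))
    for k in lags:
        X[k:, k] = stimulus[: n - k]

    # Simulated sensor signal = stimulus filtered by the TRF, plus noise.
    meg = X @ true_trf + 0.5 * rng.standard_normal(n)

    # Ridge regression: w = (X'X + lambda*I)^-1 X'y.
    lam = 1.0
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ meg)

    # The recovered w approximates true_trf; its peak lag estimates how
    # soon after a stimulus feature the neural response follows.
    print("peak lag (ms):", lags[np.argmax(np.abs(w))] * 1000 / fs)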

The study showed that the brain quickly recognizes the phonetic sounds that make up syllables and transitions from processing merely acoustic information to processing linguistic information in a highly specialized and automated way. The brain has to keep up with people speaking at a rate of about three words per second. It achieves this, in part, by distinguishing speech from other kinds of sound within about a tenth of a second of the sound entering the ears.

"We usually think that what the brain processes this early must be only at the level of sound, without regard for language," Simon notes. "But if the brain can take knowledge of language into account right away, it would actually process sound more accurately. In our study we see that the brain takes advantage of language processing at the very earliest stage it can."

In another part of the study, the researchers found that people selectively process speech sounds in noisy environments.

Here, participants heard a mixture of two speakers in a "cocktail party" scenario and were told to listen to one and ignore the other. The participants' brains consistently processed language only in the conversation they were told to attend to, not in the one they were told to ignore. Processing of the unattended speech stopped short of the stage of detecting word forms.

"This may reveal a 'bottleneck' in our brains' speech perception," Brodbeck says. "We think lexical perception works by our brain considering the match between the incoming speech signal and many different words at the same time. It could be that this mechanism involves mental resources that have limitations on how many different options can be tried simultaneously, making it impossible to attend to more than one speaker at the same time."

This study lays the foundation for additional research into how our brains interpret sounds as words. For example, how and when does the brain decide which word is being said? There is evidence that the brain sifts through multiple possibilities, but it is currently unknown how it narrows the choices down to a single word and connects that word with the meaning of the ongoing discourse. And since it is possible to measure what fraction of the speech sounds are clear enough to be processed as components of words, the researchers may be able to test listening comprehension in subjects who cannot, or do not know how to, report it properly.
-end-

University of Maryland
