Move over, 'Laurel or Yanny': Study looks at why we hear talking as singing after many repetitions

June 08, 2018

LAWRENCE -- The great Laurel-or-Yanny debate of 2018 was so fun because it shined a light on the often-illusory nature of auditory perception. What you hear may not be the same as what somebody else hears. Or, perhaps what you hear could change over time. What surely was "Yanny" to some people at first sounded a lot more like "Laurel" upon their 27th listening.

New research appearing today in the peer-reviewed journal PLOS ONE explores these ideas further. A team from the University of Kansas has investigated the "Speech-to-Song Illusion," where a spoken phrase is repeated and begins to sound as if it were being sung.

"There's this neat auditory illusion called the Speech-to-Song Illusion that musicians in the '60s knew about and used to artistic effect -- but scientists didn't start investigating it until the '90s," said Michael Vitevitch, professor and chair of psychology at KU, who conducted the study with undergraduate and graduate student researchers in the department's Spoken Language Laboratory. "The illusion occurs when a spoken phrase is repeated-- but after it's repeated several times it begins to sound like it's being sung instead of spoken."

The KU researcher said previous studies have looked at characteristics of phrases that contribute to the illusion and have elicited the phenomenon in speakers of English, German and Mandarin. Further studies have shown that brain regions that process speech are active when a phrase is perceived as speech, while brain regions that process music fire when the phrase is heard as song.

"But nobody had a good explanation about how this illusion was coming about in the first place," Vitevitch said. "A lot of the researchers who looked at this were music-perception scientists, but there weren't a lot of people coming at it from the speech-and-language side. I'm one of the few speech people that started looking at this. I brought some of these models of how language processing works to see if that might explain what's going on with this illusion."

Along with KU graduate student Nichol Castro and undergraduates Joshua Mendoza and Elizabeth Tampke, Vitevitch designed six studies to test whether Node Structure Theory, a model that accounts for other aspects of language processing, might also explain the Speech-to-Song Illusion. Under Node Structure Theory, word nodes and syllable nodes act as "detectors" when people hear syllables, words and phrases.

"You've got word detectors and syllable detectors and, like with lots of things in life, as you use them they're going to get worn out -- like your muscles. As you use them, they get tired," said Vitevitch. "Like with muscles, you have a type of muscle for short bursts of sprinting and also muscles for endurance, like running a marathon. Word nodes are like sprinting muscles, and syllable nodes are like endurance muscles."

Vitevitch said the results of six experiments -- which used 30 KU students as subjects -- suggest word detectors initially are activated, giving one the perception of speech, but they fatigue as the phrase is repeated. The continued presentation of a phrase still activates syllable detectors, which do not fatigue as quickly as the word detectors. Because syllables carry the rhythmic information of language, the continued stimulation of the syllable detectors -- but not the word detectors -- shifts perception to a songlike state.
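The fatigue account can be made concrete with a minimal toy simulation. This is only a sketch, not the authors' model or any code used in the study: the fatigue rates and threshold below are illustrative assumptions, chosen to show how a word signal that tires quickly and a syllable signal that tires slowly would flip the percept from speech to song after several repetitions.

# Toy sketch of the differential-fatigue account described above.
# Illustrative only -- not the published model. All numbers
# (fatigue rates, threshold) are assumptions chosen to show the effect.

WORD_FATIGUE = 0.15       # assumed: word detectors tire relatively quickly
SYLLABLE_FATIGUE = 0.02   # assumed: syllable detectors tire much more slowly
SPEECH_THRESHOLD = 0.5    # assumed: below this word activation, rhythm dominates


def activation(repetitions: int, fatigue_rate: float) -> float:
    """Remaining activation of a detector after a number of presentations."""
    return (1.0 - fatigue_rate) ** repetitions


def percept(repetitions: int) -> str:
    """Report 'speech' while word detectors stay strong, otherwise 'song'."""
    word = activation(repetitions, WORD_FATIGUE)
    # Once word detectors fatigue below threshold, the still-active
    # syllable detectors (which carry the rhythm) dominate perception.
    return "speech" if word >= SPEECH_THRESHOLD else "song"


if __name__ == "__main__":
    for rep in range(1, 11):
        w = activation(rep, WORD_FATIGUE)
        s = activation(rep, SYLLABLE_FATIGUE)
        print(f"repetition {rep:2d}: word={w:.2f}  syllable={s:.2f}  -> {percept(rep)}")

Run as written, the word signal drops below the assumed threshold around the fifth repetition while the syllable signal remains near its starting level, so the reported percept switches from "speech" to "song" -- the qualitative pattern the fatigue explanation predicts.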

"We tried to test the different parts of the model," he said. "We looked at the word nodes and singled out phrases that had a lot of similar-sounding words. We tried to take out words altogether by using Spanish words with non-Spanish speakers. We tried focusing on the syllables and number of syllables. We looked at different characteristics, like is it the word that matters or the number of syllables?"

The authors even created random lists of words to prevent the inherent intonation in everyday speech from influencing the subjects' perception of musicality.

"Because we do have intonation, we wanted to have the strongest-possible test of the mechanism of these detectors," said Vitevitch. "We tried to strip musicality away by randomly putting words together without intonation shifts, so it didn't sound musical at all to begin with. When people hear it once, they said it didn't sound musical at all. The fact that we could get people to shift perception to something musical after several repetitions gives us confidence that we're on the right track with the mechanism explaining the effect."

While the Speech-to-Song Illusion could be seen as a mere novelty, like the Laurel-or-Yanny meme, Vitevitch said the phenomenon has the potential to greatly increase our fundamental understanding of speech and music perception.

"All scientists are trying to look inside of a black box to understand what's going on inside," he said. "We're all trying to understand the universe or the brain or how atoms work. So, any opportunity to get a crack in the black box where you can look inside, you need to take. Things like illusions are often dismissed, but they're unique opportunities to get another angle on what's going on. Yes, they're kind of fun and interesting and goofy and they get attention -- but really they're another opportunity to see what's going on inside the black box."
-end-


University of Kansas
