
People track when talkers say 'uh' to predict what comes next

March 06, 2019

Spontaneous conversation is riddled with disfluencies such as pauses and 'uhm's: on average, people produce six disfluencies in every 100 words. But disfluencies do not occur randomly. Instead, 'uh' typically occurs before 'hard-to-name' low-frequency words ('uh... automobile'). Previous experiments led by Hans Rutger Bosker from the Max Planck Institute for Psycholinguistics have shown that people can use disfluencies to predict upcoming low-frequency words. But Bosker and his colleagues went one step further, testing whether listeners would actively track the occurrence of 'uh' even when it appeared in unexpected places.

Click on uh... the igloo

The researchers used eye-tracking, which measures people's eye movements towards objects on a screen. Two groups of Dutch participants saw two images on a screen (for instance, a hand and an igloo) and heard both fluent and disfluent instructions. One group heard a 'typical' talker say 'uh' before 'hard-to-name' low-frequency words ("Click on uh... the igloo"), while the other group heard an 'atypical' talker say 'uh' before 'easy-to-name' high-frequency words ("Click on uh... the hand"). Would people in this second group track the unexpected occurrences of 'uh' and learn to look at the 'easy-to-name' object?

As expected, participants listening to the 'typical' talker already looked at the igloo upon hearing the disfluency ('uh...'), that is, well before hearing 'igloo'. Interestingly, people listening to the 'atypical' talker learned to adjust this 'natural' prediction: upon hearing a disfluency ('uh...'), they learned to look at the common object even before hearing the word itself ('hand'). "We take this as evidence that listeners actively keep track of when and where talkers say 'uh' in spoken communication, adjusting what they predict will come next for different talkers", concludes Bosker.

Speakers with a foreign accent

Would listeners also adjust their expectations for a non-native speaker? In a follow-up experiment, the same sentences were spoken by someone with a heavy Romanian accent. This time, participants did learn to predict uncommon objects from a 'typical' non-native talker (saying 'uh' before low-frequency words). However, they did not learn to predict high-frequency referents from an 'atypical' non-native talker (saying 'uh' before high-frequency words), even though the sentence materials were identical in the native and non-native experiments.

Geertje van Bergen, co-author on the paper, explains: "This probably indicates that hearing a few atypical disfluent instructions (e.g., the non-native talker saying 'uh' before common words like "hand" and "car") led listeners to infer that the non-native speaker had difficulty naming even simple words in Dutch. As such, they presumably took the non-native disfluencies to not be predictive of the word to follow - in spite of the clear distributional cues indicating otherwise". This finding is interesting, as it reveals an interplay between 'disfluency tracking' and 'pragmatic inferencing': we only track disfluencies if we infer from the talker's voice that the talker is a 'reliable' uhm'er.

A hot topic in psycholinguistics

According to the authors, this is the first evidence of distributional learning in disfluency processing. "We've known about disfluencies triggering prediction for more than 10 years now, but we demonstrate that these predictive strategies are malleable. People actively track when particular talkers say 'uh' on a moment-by-moment basis, adjusting their predictions about what will come next", explains Bosker. Distributional learning has been a hot topic in psycholinguistics over the past few years. "We extend this field with evidence for distributional learning of metalinguistic performance cues, namely disfluencies - highlighting the wide scope of distributional learning in language processing."
-end-
Publication

Hans Rutger Bosker, Marjolein van Os, Rik Does & Geertje van Bergen (in press). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language.

doi: 10.1016/j.jml.2019.02.006.

Full text: https://pure.mpg.de/pubman/item/item_3029110

Questions? Contact:

Hans Rutger Bosker
Phone: +31 24 3521373
Email: HansRutger.Bosker@mpi.nl

Marjolein Scherphuis (press officer)
Phone: +31 24 3521947
Email: Marjolein.Scherphuis@mpi.nl

Max Planck Institute for Psycholinguistics
