People track when talkers say 'uh' to predict what comes next

March 06, 2019

Spontaneous conversation is riddled with disfluencies such as pauses and 'uhm's: on average people produce 6 disfluencies every 100 words. But disfluencies do not occur randomly. Instead, 'uh' typically occurs before 'hard-to-name' low-frequency words ('uh... automobile'). Previous experiments led by Hans Rutger Bosker from the Max Planck Institute for Psycholinguistics have shown that people can use disfluencies to predict upcoming low-frequency words. But Bosker and his colleagues went one step further. They tested whether listeners would actively track the occurrence of 'uh', even when it appeared in unexpected places.

Click on uh... the igloo

The researchers used eye-tracking, which measures people's looks towards objects on a screen. Two groups of Dutch participants saw two images on a screen (for instance, a hand and an igloo) and heard both fluent and disfluent instructions. However, one group heard a 'typical' talker say 'uh' before 'hard-to-name' low-frequency words ("Click on uh... the igloo"), while the other group heard an 'atypical' talker saying 'uh' before 'easy-to-name' high-frequency words ("Click on uh... the hand"). Would people in this second group track the unexpected occurrences of 'uh' and learn to look at the 'easy-to-name' object?

As expected, participants listening to the 'typical' talker already looked at the igloo upon hearing the disfluency ('uh...'), that is, well before hearing 'igloo'. Interestingly, people listening to the 'atypical' talker learned to adjust this 'natural' prediction. Upon hearing a disfluency ('uh...'), they learned to look at the common object, even before hearing the word itself ('hand'). "We take this as evidence that listeners actively keep track of when and where talkers say 'uh' in spoken communication, adjusting what they predict will come next for different talkers", concludes Bosker.

Speakers with a foreign accent

Would listeners also adjust their expectations with a non-native speaker? In a follow-up experiment, the same sentences were spoken by someone with a heavy Romanian accent. In this experiment, participants did learn to predict uncommon objects from a 'typical' non-native talker (saying 'uh' before low-frequency words). However, they did not learn to predict high-frequency referents from an 'atypical' non-native talker (saying 'uh' before high-frequency words), even though the sentence materials were identical across the native and non-native experiments.

Geertje van Bergen, co-author on the paper, explains: "This probably indicates that hearing a few atypical disfluent instructions (e.g., the non-native talker saying 'uh' before common words like "hand" and "car") led listeners to infer that the non-native speaker had difficulty naming even simple words in Dutch. As such, they presumably took the non-native disfluencies to not be predictive of the word to follow - in spite of the clear distributional cues indicating otherwise". This finding is interesting, as it reveals an interplay between 'disfluency tracking' and 'pragmatic inferencing': we only track disfluencies if we infer from the talker's voice that the talker is a 'reliable' uhm'er.

A hot topic in psycholinguistics

According to the authors, this is the first evidence of distributional learning in disfluency processing. "We've known about disfluencies triggering prediction for more than 10 years now, but we demonstrate that these predictive strategies are malleable. People actively track when particular talkers say 'uh' on a moment-by-moment basis, adjusting their predictions about what will come next", explains Bosker. Distributional learning has been a hot topic in psycholinguistics in the past few years. "We extend this field with evidence for distributional learning of metalinguistic performance cues, namely disfluencies - highlighting the wide scope of distributional learning in language processing."
Publication

Hans Rutger Bosker, Marjolein van Os, Rik Does & Geertje van Bergen (in press). Counting 'uhm's: how tracking the distribution of native and non-native disfluencies influences online language comprehension. Journal of Memory and Language.

doi: 10.1016/j.jml.2019.02.006.

Fulltext: https://pure.mpg.de/pubman/item/item_3029110

Questions? Contact:

Hans Rutger Bosker
Phone: +31 24 3521373
Email: HansRutger.Bosker@mpi.nl

Marjolein Scherphuis (press officer)
Phone: +31 24 3521947
Email: Marjolein.Scherphuis@mpi.nl

Max Planck Institute for Psycholinguistics

