RIT/NTID researchers study how deaf and hearing people watch sign language

September 09, 2020

A recent study has shown that readers' eye gaze behaviors are strong indicators of words that are unexpected, new, or difficult to understand. The study by Rain Bosworth, an assistant professor and researcher in the Center for Sensory, Perceptual, and Cognitive Ecology (SPaCE Center) at Rochester Institute of Technology's National Technical Institute for the Deaf, explores the largely unstudied gaze behaviors involved in "sign watching" and how they are affected by a viewer's language expertise and by the intelligibility of the signed input.

According to Bosworth's study, published in the Journal of Deaf Studies and Deaf Education and co-authored with NTID graduate Adam Stone, gaze behaviors can provide an index of cognitive effort and knowledge in signers. The study provides the first evidence that novice and fluent signers have different eye gaze behaviors.

Bosworth and her team recorded gaze behaviors in 52 deaf and hearing adults while they watched signed narratives. Highly fluent signers kept a steady gaze on the face and used peripheral vision to perceive the signer's moving hands. To create low-intelligibility conditions, the researchers also showed participants videos of signed stories played backward. People who learned American Sign Language earlier in life were better able to understand these reversed narratives, Bosworth said, and fluent signers continued to focus on the face even under low-intelligibility conditions.

"These low intelligibility conditions simulate what happens in real-world settings when trying to watch live signers on phones with small displays or with weak internet signals," explained Bosworth.

Novice signers, who scored lower on measures of story comprehension, showed a very different gaze pattern.

"Gaze behavior is more scattered for people who recently learned sign language, and this scatter increased for low-intelligibility conditions, probably because observers are looking directly at the moving hands," Bosworth said. "This fits with what we know about research that shows that signers have very good peripheral vision, especially from the lower visual field. Expert signers look at the face and utilize their peripheral vision for catching the fine details of moving handshapes."

But there is some good news for those still learning. According to Bosworth, it doesn't take long to develop "expert-like" gaze patterns during sign comprehension. Hearing signers who have been signing for at least five years often show steady gaze behavior on the face, just like fluent deaf signers.
A video illustration of the results is available: https://www.youtube.com/watch?v=-8SXhCF1h_U&feature=youtu.be

-end-

Rochester Institute of Technology
