IU cognitive scientists ID new mechanism at heart of early childhood learning and social behavior

November 13, 2013

BLOOMINGTON, Ind. -- Shifting the emphasis from gaze to hand, a study by Indiana University cognitive scientists provides compelling evidence for a new and possibly dominant way for social partners -- in this case, 1-year-olds and their parents -- to coordinate the process of joint attention, a key component of parent-child communication and early language learning.

Previous research on joint visual attention between parents and toddlers has focused exclusively on each partner's ability to follow the other's gaze. In "Joint Attention Without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects Through Eye-Hand Coordination," published in the online journal PLOS ONE, the researchers demonstrate that eye-hand coordination is far more common, and that parent and toddler interact as equals rather than one or the other taking the lead.

The findings open up new questions about language learning and the teaching of language. They could also have major implications for the treatment of children with early social-communication impairment, such as autism, where joint caregiver-child attention with respect to objects and events is a key issue.

"Currently, interventions consist of training children to look at the other's face and gaze," said Chen Yu, associate professor in the Department of Psychological and Brain Sciences at IU Bloomington. "Now we know that typically developing children achieve joint attention with caregivers less through gaze following and more often through following the other's hands. The daily lives of toddlers are filled with social contexts in which objects are handled, such as mealtime, toy play and getting dressed. In those contexts, it appears we need to look more at another's hands to follow the other's lead, not just gaze."

The new explanation addresses some of the problems and inadequacies of the gaze-following theory. Gaze following can be imprecise in the natural, cluttered environments outside the laboratory: when several objects sit close together, it can be hard to tell precisely which one someone is looking at. Following someone's hands is easier and more precise, though in other situations following the other's gaze may still be more useful.

"Each of these pathways can be useful," Yu said. "A multi-pathway solution creates more options and gives us more robust solutions."

Using innovative head-mounted eye-tracking technology -- which, like Google Glass, records the wearer's view, and had never before been used with young children -- the researchers recorded moment-to-moment, high-density data on what both parent and child visually attended to as they played together in the lab. They then applied advanced data-mining techniques to this rich multimodal dataset to discover fine-grained eye, head and hand movement patterns. The reported results are based on 17 parent-infant pairs; over the course of several years, however, Yu and Smith have studied more than 100 children, and those data confirm the findings.
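The core measurement in studies like this is finding stretches of time when both partners' gaze lands on the same object. The sketch below illustrates that idea in simplified form; the data format, object labels and frame threshold are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Hedged sketch: detecting joint-attention episodes from two per-frame gaze
# streams. Each stream is a list of object labels (None = no clear target).
# The real study used head-mounted eye trackers and far richer data mining.

def joint_attention_episodes(child_gaze, parent_gaze, min_frames=3):
    """Return (start, end) frame ranges where both partners attend to the
    same object for at least `min_frames` consecutive frames."""
    episodes = []
    start = None
    for i, (c, p) in enumerate(zip(child_gaze, parent_gaze)):
        aligned = c is not None and c == p
        if aligned and start is None:
            start = i                      # episode begins
        elif not aligned and start is not None:
            if i - start >= min_frames:    # long enough to count
                episodes.append((start, i))
            start = None
    # close an episode that runs to the end of the recording
    if start is not None and len(child_gaze) - start >= min_frames:
        episodes.append((start, len(child_gaze)))
    return episodes

# Toy example: two shared-attention episodes, on a ball and then a cup.
child  = ["ball", "ball", "ball", None,  "cup", "cup", "cup", "cup"]
parent = ["ball", "ball", "ball", "cup", "cup", "cup", "cup", "cup"]
print(joint_attention_episodes(child, parent))  # [(0, 3), (4, 8)]
```

Given such episodes, one could then ask whether each was preceded by the partner's gaze shift or by a hand action on the object, which is the distinction the study draws.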

"This really offers a new way to understand and teach joint attention skills," said co-author Linda Smith, Distinguished Professor in the Department of Psychological and Brain Sciences. Smith is well known for her pioneering research and theoretical work in the development of human cognition, particularly as it relates to children ages 1 to 3 acquiring their first language. "We know that although young children can follow eye gaze, it is not precise, cueing attention only generally to the left or right. Hand actions are spatially precise, so hand-following might actually teach more precise gaze-following."
-end-
For a copy of the embargoed study, contact Liz Rosdeitcher at 812-855-4507 or rosdeitc@indiana.edu. This work was funded by grants from the National Science Foundation and the National Institutes of Health.

Here are videos that show dual eye-tracking in parent-child free-flowing play, http://www.indiana.edu/~dll/video/dual_eye_tracking_example_2.mov; a researcher putting a head-mounted eye-tracker on an infant, http://www.indiana.edu/~dll/video/putting_eye_tracker_on_infant.mov; and the calibration of an infant's eye-tracker, http://www.indiana.edu/~dll/video/calibrating_infant_eye_tracker.mov.

Yu is director of the Computational Cognition and Learning Lab. Smith, Chancellor's Professor, is director of the Cognitive Development Lab. The Department of Psychological and Brain Sciences is part of the College of Arts and Sciences. To speak with Yu or Smith, contact Rosdeitcher. For additional assistance, contact Tracy James at 812-855-0084 or traljame@iu.edu.

Indiana University
