Deaf children's gesture mismatches provide clues to learning moments

April 05, 2012

In a discovery that could help instructors better teach deaf children, a team of University of Chicago researchers has found that a gesture-sign mismatch made while explaining a math problem suggests that a deaf child is experiencing a teachable moment.

Through a series of experiments with 40 deaf children, ages 9 through 12, all fluent in American Sign Language, the researchers distinguished ASL signs from gestures that resemble those hearing children produce when explaining the same math problems.

The deaf students who expressed ideas in gesture that were different from the ideas they expressed in sign were ready to learn to solve the math problems, said UChicago psychologist Susan Goldin-Meadow. In previous work, she had shown that gesture-speech mismatch is a clue to teachable moments in hearing children.

"The juxtaposition of two ideas, one in gesture and the other in sign, highlights their discrepancy, and this discrepancy might be what motivates the student to search for new information in the math lesson," noted Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in Psychology. She is lead author of the paper, "The gestures ASL signers use tell us when they are ready to learn math," published on early view in the journal Cognition.

In the study, the team tested students' understanding of the equals (=) sign through a series of math problems. The researchers coded students' explanations and counted the number of times a child produced a gesture-sign mismatch.

For example, for the problem 7+4+2 = 7+__ , one child signed about how the numbers on the left side of the equation should be added to get the answer (incorrectly, 13), while gesturing about how the number on the right side should be subtracted from that total, which gives the correct answer (6).
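The two strategies in that example can be sketched in a few lines of code. This is purely an illustration of the arithmetic described above, not anything from the study itself; the function names are invented for clarity.

```python
def add_all_left(left_side):
    """Incorrect 'add everything' strategy: sum only the left side
    of the equation and treat that total as the answer."""
    return sum(left_side)

def equalize(left_side, right_known):
    """Correct equivalence strategy: both sides must have the same
    total, so subtract the known right-side number from the left
    side's sum to find the missing addend."""
    return sum(left_side) - right_known

# The problem from the study: 7 + 4 + 2 = 7 + __
print(add_all_left([7, 4, 2]))       # 13, the incorrect signed answer
print(equalize([7, 4, 2], 7))        # 6, the correct gestured answer
```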

Researchers then taught the children a math lesson and retested them. "The more mismatches children produced before the lesson, the more likely they were to improve after the lesson," said Goldin-Meadow. The team found that 65 percent of children who produced three or more mismatches before the lesson were successful after the lesson, compared with 23 percent of children who made fewer than three mismatches.

Educators have long been aware that students go through stages in learning a particular task and at times spontaneously become ready to learn it. Skillful teachers can tune into those moments and recognize them as opportunities to boost the impact of their own instruction, Goldin-Meadow said.

Teachers also frequently use their own gestures to help students learn. They can illustrate how numbers in an equation can be grouped, for example, to help students understand how to make both sides of an equation have the same value.

The ability to use the teacher's gestures is complicated for deaf children because they frequently learn in a classroom with hearing children and get their instruction through an ASL interpreter, who in many cases is looking at the child and not the teacher. As a result, the interpreter does not see the teacher's gestures and cannot relay the information conveyed in those gestures.

The deaf child watching the interpreter will then miss any messages that the teacher sends in gesture and not in speech. "The gestures hearing children see during math instruction are often crucial parts of the lesson, turning children who are not ready to learn into learners," Goldin-Meadow noted.

Deaf children, who frequently have difficulty learning math, could profit from the gestures their teachers make, but only if their interpreters incorporate the information in the teacher's gestures into their own signs and gestures, she said.

Goldin-Meadow was joined in her work by co-authors Aaron Shield, a postdoctoral researcher, and Daniel Lenzen, a research assistant, both at the University of Chicago; and Melissa Herzig, a research assistant, and Carol Padden, Professor of Communication, both at the University of California, San Diego.

The work was supported by grants from the National Institute of Child Health and Human Development and the National Science Foundation, including grants to two of the NSF Science of Learning Centers, one a collaboration of the University of Chicago, Northwestern University and Temple University, and another at Gallaudet University.

University of Chicago
