Seeing isn't required to gesture like a native speaker

March 21, 2016

People the world over gesture when they talk, and they tend to gesture in certain ways depending on the language they speak. Findings from a new study including blind and sighted participants suggest that these gestural variations do not emerge from watching other speakers make the gestures, but from learning the language itself.

"Adult speakers who are blind from birth also gesture when they talk, and these gestures resemble the gestures of sighted adults speaking the same language. This is quite interesting, since blind speakers cannot be learning these language-specific gestures by watching other speakers gesture," explains psychological scientist and lead researcher Seyda Özçaliskan of Georgia State University.

The findings are published in Psychological Science, a journal of the Association for Psychological Science.

While previous research had shown that speakers of different languages gesture in different ways, the origin of these differences was not clear. Özçaliskan and colleagues Ché Lucero and Susan Goldin-Meadow of the University of Chicago realized that they could address the question by comparing the gestures produced by sighted and congenitally blind individuals who speak the same language.

If people learn to gesture by watching other speakers of the same language, they hypothesized, then individuals who are blind from birth would not produce gestures similar to those of sighted speakers. But if people learn to gesture as a function of learning the language itself, then blind and sighted individuals who speak the same language would gesture in similar ways.

The researchers decided to focus specifically on gestures related to motion across space, which tend to show considerable variation across languages. English speakers, for example, typically combine both the manner of motion (e.g., running) and the path of motion (e.g., entering) into a single gesture. Turkish speakers, on the other hand, produce separate gestures to indicate manner and path.

Özçaliskan and colleagues recruited 40 congenitally blind adults -- 20 native English speakers and 20 native Turkish speakers -- to participate in the study. They also recruited 40 sighted speakers of each language.

The participants were presented with three-dimensional dioramas containing a series of figurines depicting motion across space. Some scenes showed a figure moving along a path to a landmark (e.g., running into a house), some showed the figure moving over a landmark (e.g., flipping over a beam), and others showed a figure moving away from a landmark (e.g., running away from a motorcycle).

Participants explored the scene, using their hands to touch and feel the components; they were told that although the figurine appeared three times in the scene, they should think of her movement as representing a single continuous motion. The participants were then asked to describe the scene.

The results showed that speakers' gesture patterns varied with the language they spoke rather than with whether they could see. Turkish speakers, sighted and blind alike, produced more separated sentence units -- in both speech and gesture -- than English speakers did, while English speakers, sighted and blind alike, produced more conflated sentence units in their speech and gestures than Turkish speakers did.

"We now know that blind speakers do not all gesture in the same generic way," Goldin-Meadow explains. "Rather, their gestures resemble those of other speakers of the same language."

While the study focused on speech and gesture in English and Turkish, the researchers note that these two languages represent a broader pattern in the world's languages. When it comes to expressing motion in space, Dutch, Swedish, Russian, Icelandic, and Serbo-Croatian are similar to English, while French, Spanish, Hebrew, and Japanese cluster with Turkish.

"Together, our findings show that gestures that are produced with speech carry the imprint of the language that they accompany even in the absence of access to native gesture patterns, marking speech as the source of cross-linguistic variation in gesture," Özçalışkan and Goldin-Meadow conclude.
-end-
This work was supported by Grant 12-FY08-160 from the March of Dimes Foundation to S. Özçaliskan and S. Goldin-Meadow.

For more information about this study, please contact: Seyda Özçaliskan at seyda@gsu.edu.

The article abstract is available online: http://pss.sagepub.com/content/early/2016/03/14/0956797616629931.abstract

For a copy of the article "Is Seeing Gesture Necessary to Gesture Like a Native Speaker?" and access to other Psychological Science research findings, please contact Anna Mikulak at 202-293-9300 or amikulak@psychologicalscience.org.

Association for Psychological Science
