
Gestures and visual animations reveal cognitive origins of linguistic meaning

April 25, 2019

Gestures and visual animations can help reveal the cognitive origins of meaning, indicating that our minds can assign a linguistic structure to new informational content "on the fly"--even if it is not linguistic in nature.

These conclusions stem from two studies, one in linguistics and the other in experimental psychology, appearing in Natural Language & Linguistic Theory and Proceedings of the National Academy of Sciences (PNAS).

"These results suggest that far less is encoded in words than was originally thought," explains Philippe Schlenker, a senior researcher at Institut Jean-Nicod within France's National Center for Scientific Research (CNRS) and a Global Distinguished Professor at New York University, who wrote the first study and co-authored the second. "Rather, our mind has a 'meaning engine' that can apply to linguistic and non-linguistic material alike.

"Taken together, these findings provide new insights into the cognitive origins of linguistic meaning."

Contemporary linguistics has established that language conveys information through a highly articulated typology of inferences. For instance, "I have a dog" asserts that I own a dog, but it also suggests (or "implicates") that I have no more than one: the hearer assumes that if I had two dogs, I would have said so (as "I have two dogs" is more informative).

Unlike asserted content, implicated content isn't targeted by negation. "I don't have a dog" thus means that I don't have any dog, not that I don't have exactly one dog. There are further inferential types characterized by further properties: the sentence "I spoil my dog" still conveys that I have a dog, but now this is neither asserted nor implicated; rather, it is "presupposed"--i.e. taken for granted in the conversation. Unlike asserted and implicated information, presuppositions are preserved in negative statements, and thus "I don't spoil my dog" still presupposes that I have a dog.
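To make the diagnostic concrete, the negation test described above can be summarized as a toy decision rule. The sketch below is purely illustrative and is not drawn from either study; the helper name classify_inference and the two boolean properties are assumptions made for the example.

```python
# Toy decision rule for the inference typology discussed above (illustrative only).
# Each inference is described by two properties from the text:
#   survives_negation -- is the inference preserved when the sentence is negated?
#   entailed          -- does it follow from the literal content of the positive sentence?

def classify_inference(survives_negation: bool, entailed: bool) -> str:
    if survives_negation:
        return "presupposition"   # e.g. "I (don't) spoil my dog" -> "I have a dog"
    if entailed:
        return "assertion"        # e.g. "I have a dog" -> "I own a dog"
    return "implicature"          # e.g. "I have a dog" -> "no more than one dog"

examples = [
    # (sentence, inferred content, survives_negation, entailed)
    ("I have a dog",   "speaker owns a dog",               False, True),
    ("I have a dog",   "speaker has no more than one dog", False, False),
    ("I spoil my dog", "speaker has a dog",                True,  True),
]

for sentence, inference, survives, entailed in examples:
    print(f"{sentence!r:18} -> {inference!r:38} {classify_inference(survives, entailed)}")
```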

A fundamental question of contemporary linguistics is: Which of these inferences come from arbitrary properties of words stored in our mental dictionary and which result from general, productive processes?

In the Natural Language & Linguistic Theory work and the PNAS study, the latter written by Lyn Tieu of Australia's Western Sydney University, Schlenker, and CNRS's Emmanuel Chemla, the authors argue that nearly all inferential types result from general, and possibly non-linguistic, processes.

Their conclusion is based on an understudied type of sentence containing gestures that replace ordinary words. For instance, in the sentence "You should UNSCREW-BULB", the capitalized expression encodes a gesture of unscrewing a bulb from the ceiling. Even if the hearer is seeing this gesture for the first time (and thus cannot have it stored in their mental dictionary), it is understood thanks to its visual content.

This makes it possible to test how its informational content (i.e. unscrewing a bulb that is on the ceiling) is divided on the fly among the slots of the inferential typology. In this case, the unscrewing action is asserted, but the presence of a bulb on the ceiling is presupposed, as shown by the fact that the negation ("You shouldn't UNSCREW-BULB") preserves this information. By systematically investigating such gestures, the Natural Language & Linguistic Theory study reaches a ground-breaking conclusion: nearly all inferential types (eight in total) can be generated on the fly, suggesting that all are due to productive processes.
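For illustration only, the same toy rule (classify_inference, defined in the sketch above) can be applied to the gesture example, treating the gesture's visual content as two separate pieces of information; the glosses and property values below are assumptions made for the example, not experimental data.

```python
# Continuing the illustrative sketch above: apply classify_inference to the two
# pieces of informational content carried by the gesture UNSCREW-BULB.
gesture_content = [
    # (sentence, inferred content, survives_negation, entailed)
    ("You should UNSCREW-BULB", "the addressee should unscrew the bulb", False, True),
    ("You should UNSCREW-BULB", "there is a bulb on the ceiling",        True,  True),
]

for sentence, inference, survives, entailed in gesture_content:
    print(f"{inference!r:40} {classify_inference(survives, entailed)}")
# The unscrewing action comes out as asserted and the bulb's presence as presupposed,
# mirroring the division described in the text.
```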

The PNAS study investigates four of these inferential types with experimental methods, confirming the results of the linguistic study. But it also goes one step further by replacing the gestures with visual animations embedded in written texts, thus answering two new questions: First, can the results be reproduced for visual stimuli that subjects cannot possibly have seen in a linguistic context, given that people routinely speak with gestures but not with visual animations? Second, can entirely non-linguistic material be structured by the same processes?

Both answers are positive.

In a series of experiments, approximately 100 subjects watched videos of sentences in which some words were replaced either by gestures or by visual animations. They were asked how strongly they derived various inferences that are the hallmarks of different inferential types (for instance, inferences derived in the presence of negation). The subjects' judgments displayed the characteristic signature of four classic inferential types (including presuppositions and implicated content) not only for gestures but also for visual animations: as expected, subjects divided the informational content of these non-standard expressions on the fly among well-established slots of the inferential typology.
-end-
Natural Language & Linguistic Theory paper: https://rdcu.be/bb7yF

PNAS paper: https://www.pnas.org/lookup/doi/10.1073/pnas.1821018116

New York University
