
Gestures and visual animations reveal cognitive origins of linguistic meaning

April 25, 2019

Gestures and visual animations can help reveal the cognitive origins of meaning, indicating that our minds can assign a linguistic structure to new informational content "on the fly"--even if it is not linguistic in nature.

These conclusions stem from two studies, one in linguistics and the other in experimental psychology, appearing in Natural Language & Linguistic Theory and Proceedings of the National Academy of Sciences (PNAS).

"These results suggest that far less is encoded in words than was originally thought," explains Philippe Schlenker, a senior researcher at Institut Jean-Nicod within France's National Center for Scientific Research (CNRS) and a Global Distinguished Professor at New York University, who wrote the first study and co-authored the second. "Rather, our mind has a 'meaning engine' that can apply to linguistic and non-linguistic material alike.

"Taken together, these findings provide new insights into the cognitive origins of linguistic meaning."

Contemporary linguistics has established that language conveys information through a highly articulated typology of inferences. For instance, I have a dog asserts that I own a dog, but it also suggests (or "implicates") that I have no more than one: the hearer assumes that if I had two dogs, I would have said so (as I have two dogs is more informative).

Unlike asserted content, implicated content isn't targeted by negation. I don't have a dog thus means that I don't have any dog, not that I don't have exactly one dog. There are further inferential types characterized by further properties: the sentence I spoil my dog still conveys that I have a dog, but now this is neither asserted nor implicated; rather, it is "presupposed"--i.e. taken for granted in the conversation. Unlike asserted and implicated information, presuppositions are preserved in negative statements, and thus I don't spoil my dog still presupposes that I have a dog.
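The negation test described above can be summarized in a toy sketch (an illustration of the diagnostic only, not code from either study): tag each piece of informational content with its inference type, and check which content survives when the sentence is negated.

```python
# Toy model of the negation diagnostic (illustrative only, not from the studies):
# each inference is tagged with its type; presuppositions "project" through
# negation, while asserted and implicated content do not.
from dataclasses import dataclass

@dataclass(frozen=True)
class Inference:
    content: str
    kind: str  # "asserted", "implicated", or "presupposed"

    def survives_negation(self) -> bool:
        # Presupposed content is preserved in negative statements;
        # asserted and implicated content is not.
        return self.kind == "presupposed"

# "I spoil my dog": asserts the spoiling, presupposes owning a dog.
spoiling = Inference("the speaker spoils the dog", "asserted")
ownership = Inference("the speaker has a dog", "presupposed")

# Under negation ("I don't spoil my dog"), only the presupposition remains.
remaining = [i.content for i in (spoiling, ownership) if i.survives_negation()]
print(remaining)
```

Running this prints only the presupposed content, mirroring the observation that *I don't spoil my dog* still conveys that the speaker has a dog.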

A fundamental question of contemporary linguistics is: Which of these inferences come from arbitrary properties of words stored in our mental dictionary and which result from general, productive processes?

In the Natural Language & Linguistic Theory work and the PNAS study, written by Lyn Tieu of Australia's Western Sydney University, Schlenker, and CNRS's Emmanuel Chemla, the authors argue that nearly all inferential types result from general, and possibly non-linguistic, processes.

Their conclusion is based on an understudied type of sentence in which gestures replace ordinary words. For instance, in the sentence You should UNSCREW-BULB, the capitalized expression encodes a gesture of unscrewing a bulb from the ceiling. Even though a hearer may be encountering the gesture for the first time (so it cannot be stored in the mental dictionary), it is understood thanks to its visual content.

This makes it possible to test how its informational content (i.e. unscrewing a bulb that's on the ceiling) is divided on the fly among the typology of inferences. In this case, the unscrewing action is asserted, but the presence of a bulb on the ceiling is presupposed, as shown by the fact that the negation (You shouldn't UNSCREW-BULB) preserves this information. By systematically investigating such gestures, the Natural Language & Linguistic Theory study reaches a ground-breaking conclusion: nearly all inferential types (eight in total) can be generated on the fly, suggesting that all are due to productive processes.

The PNAS study investigates four of these inferential types with experimental methods, confirming the results of the linguistic study. But it also goes one step further by replacing the gestures with visual animations embedded in written texts, thus answering two new questions: First, can the results be reproduced for visual stimuli that subjects cannot possibly have seen in a linguistic context, given that people routinely speak with gestures but not with visual animations? Second, can entirely non-linguistic material be structured by the same processes?

Both answers are positive.

In a series of experiments, approximately 100 subjects watched videos of sentences in which some words were replaced either by gestures or by visual animations. They were asked how strongly they derived various inferences that are the hallmarks of different inferential types (for instance, inferences derived in the presence of negation). The subjects' judgments displayed the characteristic signature of four classic inferential types (including presuppositions and implicated content) in gestures but also in visual animations: the informational content of these non-standard expressions was, as expected, divided on the fly by the experiments' subjects among well-established slots of the inferential typology.
-end-

Natural Language & Linguistic Theory paper: https://rdcu.be/bb7yF

PNAS paper: https://www.pnas.org/lookup/doi/10.1073/pnas.1821018116

New York University
