
Gestures improve communication -- even with robots

April 04, 2016

In the world of robot communication, it seems actions speak louder than words. Scientists in the UK have discovered that when robot avatars "talk with their hands," we understand them as well as we do our fellow human beings.

Avatars have been in existence since the 1980s and today are used by millions of people across the globe. They are big business too: from artificial intelligence to social media and psychotherapy to high-end video games, they are used to sell things, to solve problems, to teach us and to entertain us. As avatars become more sophisticated and their use in society grows, getting your message across through one matters more than ever, and research is increasingly focused on how to improve that communication.

Scientists Paul Bremner and Ute Leonards took on this challenge in a recent study published in Frontiers in Psychology. They built their study around the hypothesis that if avatars used "iconic" hand gestures together with speech, we would understand them more easily. Iconic gestures are those with a distinct meaning, like miming opening a door or a book, and combining gestures with speech is known as "multi-modal communication." The aim of the study was to discover whether people could understand avatars performing multi-modal communication as well as they could a human actor, and whether multi-modal communication by an avatar was more understandable than speech alone.

To test their theory, the scientists filmed an actor reading out a series of phrases whilst performing specific iconic gestures. They then filmed an avatar delivering the same recorded phrases while mimicking the gestures. Films of both the actor and the avatar were shown to the experiment's participants, who had to identify what each was trying to communicate. The research was a success: the scientists showed that multi-modal communication by avatars is indeed more understandable than speech alone. Not only that, but when avatars use multi-modal communication, we understand them as well as we do humans.

Getting avatars to talk with their hands in the same way that humans do was a challenge in itself. To perform the gestures, the actor used state-of-the-art technology: his movements were tracked with a Microsoft Kinect sensor so that his arm gestures could be recorded as data, which the avatar then used to mimic them. The equipment did have some limitations, however: the avatar does not have the same hand shape or range of movement as a human, something the pair plans to work on in the future.
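The study's capture code isn't reproduced here, but the core step of such a pipeline is simple to sketch: take the 3D positions a Kinect-style sensor reports for the shoulder, elbow, and wrist joints, and convert them into joint angles the avatar's arm can replay. Below is a minimal sketch in Python; the joint coordinates and the elbow_angle helper are hypothetical, for illustration only, and not the authors' actual implementation.

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Elbow flexion angle (degrees) from three 3D joint positions.

    Computes the angle between the upper-arm vector (elbow -> shoulder)
    and the forearm vector (elbow -> wrist): ~180 degrees for a straight
    arm, smaller as the elbow bends.
    """
    upper_arm = np.asarray(shoulder) - np.asarray(elbow)
    forearm = np.asarray(wrist) - np.asarray(elbow)
    cos_theta = np.dot(upper_arm, forearm) / (
        np.linalg.norm(upper_arm) * np.linalg.norm(forearm)
    )
    # Clamp to [-1, 1] to guard against floating-point drift before arccos.
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical skeleton frame: joint positions in metres, (x, y, z)
# in the sensor's coordinate system.
shoulder = (0.00, 0.45, 2.10)
elbow    = (0.05, 0.20, 2.05)
wrist    = (0.25, 0.25, 1.90)

angle = elbow_angle(shoulder, elbow, wrist)
print(f"Elbow flexion: {angle:.1f} degrees")  # angle the avatar's elbow joint would replay
```

Repeating this per joint, frame by frame, yields a stream of joint angles an avatar rig can play back, though as the researchers note, differences in hand shape and range of movement limit how faithfully the mimicry can be.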

Despite the limitations, the scientists' research showed that their method of translating human gestures to an avatar was successful. More importantly, they are confident that the avatar's gestures, when used with speech, are as easily understood as a human's. Now that this is established, the pair plans to carry out more research in the field: future work will look at more types of gestures in different settings, and at how to make the translation of gestures from human to avatar more efficient. There will be plenty of work to keep them going, as they have yet to take on different cultures, and Italy, a nation famed for expressive hand gestures, is still on the horizon.

Jad gives a TED talk about his life as a journalist and how Radiolab has evolved over the years. Here's how TED described it:How do you end a story? Host of Radiolab Jad Abumrad tells how his search for an answer led him home to the mountains of Tennessee, where he met an unexpected teacher: Dolly Parton.Jad Nicholas Abumrad is a Lebanese-American radio host, composer and producer. He is the founder of the syndicated public radio program Radiolab, which is broadcast on over 600 radio stations nationwide and is downloaded more than 120 million times a year as a podcast. He also created More Perfect, a podcast that tells the stories behind the Supreme Court's most famous decisions. And most recently, Dolly Parton's America, a nine-episode podcast exploring the life and times of the iconic country music star. Abumrad has received three Peabody Awards and was named a MacArthur Fellow in 2011.