Robots sense human touch using camera and shadows

February 08, 2021

ITHACA, N.Y. - Soft robots may not be in touch with human feelings, but they are getting better at feeling human touch.

Cornell University researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot's skin and classifies them with machine-learning software.

The group's paper, "ShadowSense: Detecting Human Touch in a Social Robot Using Shadow Image Classification," was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. The paper's lead author is doctoral student Yuhan Hu.

The new ShadowSense technology is the latest project from the Human-Robot Collaboration and Companionship Lab, led by the paper's senior author, Guy Hoffman, associate professor in the Sibley School of Mechanical and Aerospace Engineering.

The technology originated as part of an effort to develop inflatable robots that could guide people to safety during emergency evacuations. Such a robot would need to be able to communicate with humans in extreme conditions and environments. Imagine a robot physically leading someone down a noisy, smoke-filled corridor by detecting the pressure of the person's hand.

Rather than installing a large number of contact sensors - which would add weight and complex wiring to the robot, and would be difficult to embed in a deforming skin - the team took a counterintuitive approach. In order to gauge touch, they looked to sight.

"By placing a camera inside the robot, we can infer how the person is touching it and what the person's intent is just by looking at the shadow images," Hu said. "We think there is interesting potential there, because there are lots of social robots that are not able to detect touch gestures."

The prototype robot consists of a soft inflatable bladder of nylon skin stretched around a cylindrical skeleton, roughly four feet in height, mounted on a mobile base. Under the robot's skin is a USB camera, which connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish between six touch gestures - touching with a palm, punching, touching with two hands, hugging, pointing and not touching at all - with an accuracy of 87.5% to 96%, depending on the lighting.
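The pipeline described above - a camera frame in, one of six gesture labels out - can be sketched as a small feed-forward classifier. This is an illustrative stand-in, not the researchers' actual network: the input resolution, layer sizes, and weights below are all assumptions (the weights are random placeholders, so the prediction is arbitrary but well-formed).

```python
import numpy as np

# The six touch gestures the ShadowSense classifier distinguishes.
GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

rng = np.random.default_rng(0)

# Hypothetical network shape: a 32x32 grayscale shadow image flattened
# to 1024 inputs, one hidden layer of 64 units, six output classes.
# In the real system the weights would come from supervised training
# on recorded touch gestures; here they are random placeholders.
W1 = rng.normal(scale=0.01, size=(1024, 64))
b1 = np.zeros(64)
W2 = rng.normal(scale=0.01, size=(64, len(GESTURES)))
b2 = np.zeros(len(GESTURES))

def classify_shadow(frame: np.ndarray) -> str:
    """Map a 32x32 shadow image (pixel values in [0, 1]) to a gesture label."""
    x = frame.reshape(-1)                 # flatten to 1024 features
    h = np.maximum(0.0, x @ W1 + b1)      # ReLU hidden layer
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over the six gestures
    return GESTURES[int(np.argmax(probs))]

label = classify_shadow(rng.random((32, 32)))
```

In the actual system, the USB camera inside the inflatable bladder would supply the frames, and the reported 87.5%-96% accuracy reflects the trained model, not this untrained sketch.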

The robot can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. And the robot's skin has the potential to be turned into an interactive screen.
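A response policy like the one described can be sketched as a simple mapping from detected gesture to robot action. The gesture labels match the six classes above, but the actions and the fallback behavior are hypothetical, not the researchers' control code.

```python
# Illustrative gesture-to-response policy; actions are assumed examples
# (e.g., rolling away from a punch, speaking in response to a hug).
RESPONSES = {
    "palm": "hold_position",
    "punch": "roll_away",
    "two_hands": "play_message",
    "hug": "play_message",
    "point": "turn_toward_pointing",
    "no_touch": "idle",
}

def respond(gesture: str) -> str:
    """Return the robot's action for a detected touch gesture."""
    return RESPONSES.get(gesture, "idle")  # unknown gestures fall back to idle
```

Keeping the policy as a lookup table makes it easy to custom-tailor the gesture vocabulary to a given robot's task, as Hu suggests.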

By collecting enough data, a robot could be trained to recognize an even wider vocabulary of interactions, custom-tailored to fit the robot's task, Hu said.

The robot doesn't even have to be a robot. ShadowSense technology can be incorporated into other materials, such as balloons, turning them into touch-sensitive devices.

In addition to providing a simple solution to a complicated technical challenge, and making robots more user-friendly to boot, ShadowSense offers a comfort that is increasingly rare in these high-tech times: privacy.

"If the robot can only see you in the form of your shadow, it can detect what you're doing without taking high fidelity images of your appearance," Hu said. "That gives you a physical filter and protection, and provides psychological comfort."
-end-
The research was supported by the National Science Foundation's National Robotic Initiative.

Cornell University
