
In emergencies, should you trust a robot?

February 29, 2016

In emergencies, people may trust robots too much for their own safety, a new study suggests. In a mock building fire, test subjects followed instructions from an "Emergency Guide Robot" even after the machine had proven itself unreliable - and after some participants were told that the robot had broken down.

The research was designed to determine whether building occupants would trust a robot intended to help them evacuate a high-rise in case of fire or other emergency. But the researchers were surprised to find that the test subjects followed the robot's instructions - even when the machine's behavior should not have inspired trust.

The research, believed to be the first to study human-robot trust in an emergency situation, is scheduled to be presented March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in Christchurch, New Zealand.

"People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault," said Alan Wagner, a senior research engineer in the Georgia Tech Research Institute (GTRI). "In our studies, test subjects followed the robot's directions even to the point where it might have put them in danger had this been a real emergency."

In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words "Emergency Guide Robot" on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.

In some cases, the robot - which was controlled by a hidden researcher - led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke - and the robot, which was brightly lit with red LEDs and had white "arms" that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway - marked with exit signs - that had been used to enter the building.

"We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn't follow it during the simulated emergency," said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. "Instead, all of the volunteers followed the robot's instructions, no matter how well it had performed previously. We absolutely didn't expect this."

The researchers surmise that in the scenario they studied, the robot may have become an "authority figure" that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.

"These are just the type of human-robot experiments that we as roboticists should be investigating," said Ayanna Howard, professor and Linda J. and Mark C. Smith Chair in the Georgia Tech School of Electrical and Computer Engineering. "We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human."

Only when the robot made obvious errors during the emergency part of the experiment did the participants question its directions. In those cases, some subjects still followed the robot's instructions even when it directed them toward a darkened room that was blocked by furniture.

In future research, the scientists hope to learn more about why the test subjects trusted the robot, whether that response differs by education level or demographics, and how the robots themselves might indicate the level of trust that should be given to them.

The research is part of a long-term study of how humans trust robots, an important issue as robots play a greater role in society. The researchers envision using groups of robots stationed in high-rise buildings to point occupants toward exits and urge them to evacuate during emergencies. Research has shown that people often don't leave buildings when fire alarms sound, and that they sometimes ignore nearby emergency exits in favor of more familiar building entrances.

But in light of these findings, the researchers are reconsidering the questions they should ask.

"We wanted to ask the question about whether people would be willing to trust these rescue robots," said Wagner. "A more important question now might be to ask how to prevent them from trusting these robots too much."

Beyond emergency situations, there are other issues of trust in human-robot relationships, said Robinette.

"Would people trust a hamburger-making robot to provide them with food?" he asked. "If a robot carried a sign saying it was a 'child-care robot,' would people leave their babies with it? Will people put their children into an autonomous vehicle and trust it to take them to grandma's house? We don't know why people trust or don't trust machines."

In addition to those already mentioned, the research included Wenchen Li and Robert Allen, graduate research assistants in Georgia Tech's College of Computing.

Support for this research was provided by the Linda J. and Mark C. Smith Chair in Bioengineering, and the Air Force Office of Scientific Research (AFOSR) under contract FA9550-13-1-0169. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the AFOSR.

CITATION: Paul Robinette, Wenchen Li, Robert Allen, Ayanna M. Howard and Alan R. Wagner, "Overtrust of Robots in Emergency Evacuation Scenarios," 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016).

Georgia Institute of Technology
