
Army scientists improve human-agent teaming by making AI agents more transparent

January 11, 2018

U.S. Army Research Laboratory scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative, supported by the Office of the Secretary of Defense. They did so by enhancing agent transparency, which refers to a robot, unmanned vehicle, or software agent's ability to convey to humans its intent, performance, future plans, and reasoning process.

"As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust in the systems and make appropriate decisions," explained ARL's Dr. Jessie Chen, senior research psychologist.

The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with 'low observability, predictability, directability and auditability' as well as 'low mutual understanding of common goals' being among the key issues.

To address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model describes the information an agent must convey to its human collaborator so the human can maintain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with basic information about its current state, goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints and affordances it considers when planning its actions. At the third SAT level, the agent provides the operator with information about its projection of future states, predicted consequences, likelihood of success or failure, and any uncertainty associated with those projections.
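To make the three levels more concrete, the sketch below shows one hypothetical way an agent could organize the information it surfaces at each SAT level. The class and field names are illustrative assumptions for this article, not the data structures or interfaces used in the ARL studies.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of the three SAT information levels described above.
# All names and example values are assumptions, not ARL's implementation.

@dataclass
class SATLevel1:
    """Level 1: the agent's current state, goals, intentions, and plans."""
    current_state: str
    goals: List[str]
    plan: List[str]

@dataclass
class SATLevel2:
    """Level 2: the reasoning behind the plan and the constraints/affordances considered."""
    reasoning: str
    constraints: List[str]
    affordances: List[str]

@dataclass
class SATLevel3:
    """Level 3: projected future states, predicted outcomes, and associated uncertainty."""
    projected_states: List[str]
    likelihood_of_success: float        # e.g. a value between 0.0 and 1.0
    uncertainty: Optional[str] = None

@dataclass
class TransparencyReport:
    """Bundle of information an agent could surface to its human teammate."""
    level1: SATLevel1
    level2: SATLevel2
    level3: Optional[SATLevel3] = None  # higher levels add detail; they do not replace lower ones

# Example: a report an unmanned-vehicle agent might surface through its interface
report = TransparencyReport(
    level1=SATLevel1(
        current_state="en route to waypoint B",
        goals=["reach waypoint B", "maintain communications with the squad"],
        plan=["follow route alpha", "hold at rally point"],
    ),
    level2=SATLevel2(
        reasoning="route alpha avoids the congested main road",
        constraints=["fuel below 40 percent", "no-go zone east of the river"],
        affordances=["bridge crossing available along route alpha"],
    ),
    level3=SATLevel3(
        projected_states=["arrival at waypoint B in about 12 minutes"],
        likelihood_of_success=0.85,
        uncertainty="weather may degrade sensor range",
    ),
)
```

In this framing, an interface operating at SAT level 1 would display only the first block, while higher-transparency conditions would progressively expose the reasoning and projection blocks as well.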

In one of the ARPI projects, IMPACT, a research program on human-agent teaming for the management of multiple heterogeneous unmanned vehicles, ARL's experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators' decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human's decision making and thus the overall human-agent team performance. More specifically, researchers said the human's trust in the agent was significantly better calibrated when the agent had a higher level of transparency: operators accepted the agent's plan when it was correct and rejected it when it was incorrect.

The other project related to agent transparency that Chen and her colleagues performed under the ARPI was Autonomous Squad Member, on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts and communicates with an infantry squad. As part of the overall ASM program, Chen's group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM's user interface features an at-a-glance transparency module in which user-tested iconographic representations of the agent's plans, motivator, and projected outcomes promote transparent interaction with the agent. A series of human factors studies on the ASM's user interface investigated the effects of agent transparency on the human teammate's situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project's findings, demonstrated the positive effects of agent transparency on the human's task performance without an increase in perceived workload. Research participants also reported that they perceived the ASM as more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

"Bidirectional transparency, although conceptually straightforward--human and agent being mutually transparent about their reasoning process--can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent's planning and performance--just as agent transparency can support the human's situation awareness and task performance, which we have demonstrated in our studies," Chen hypothesized.

The challenge is to design user interfaces, which can include visual, auditory, and other modalities, that support bidirectional transparency dynamically and in real time without overwhelming the human with excessive information and added workload.
-end-
This work was supported by the U.S. Department of Defense Autonomy Research Pilot Initiative (Program Manager Dr. DH Kim).

ARL scientists described this research in a publication that will appear in print in May 2018: Chen, Jessie Y.C., Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. "Situation Awareness-based Agent Transparency and Human-Autonomy Teaming Effectiveness." Theoretical Issues in Ergonomics Science (May 2018). (DOI 10.1080/1463922X.2017.1315750)

The U.S. Army Research Laboratory is part of the U.S. Army Research, Development and Engineering Command, which has the mission to provide innovative research, development and engineering to produce capabilities that provide decisive overmatch to the Army against the complexities of the current and future operating environments in support of the joint warfighter and the nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.

U.S. Army Research Laboratory
