
Understanding research on how people develop trust in AI can inform its use

April 03, 2020

The use of artificial intelligence (AI), technologies that can interact with the environment and simulate human intelligence, has the potential to significantly change the way we work. Successfully integrating AI into organizations depends on workers' level of trust in the technology. A new review examined two decades of research on how people develop trust in AI. The authors concluded that the way AI is represented, or "embodied," and its capabilities both contribute to the development of trust. They also proposed a framework addressing the elements that shape users' cognitive and emotional trust in AI, which can guide organizations that use the technology.

The review, by researchers at Carnegie Mellon University and Bar-Ilan University, appears in Academy of Management Annals.

"The trust that users develop in AI will be central to determining its role in organizations," explains Anita Williams Woolley, Associate Professor of Organizational Behavior and Theory at Carnegie Mellon University's Tepper School of Business, who coauthored the study. "We addressed the dynamic nature of trust by exploring how trust develops for people interacting with different representations of AI (e.g., robots, virtual agents, or embedded) as well as the features of AI that facilitate the development of trust."

Specifically, the researchers examined the role of tangibility (the capability of being perceived or touched), transparency (the level to which the operating rules and logic of the technology are apparent to users), and reliability (whether the technology exhibits the same expected behavior over time). They also considered task characteristics (whether the task involves technical or interpersonal judgment), immediacy behaviors (socially oriented gestures intended to increase interpersonal closeness, such as active listening and responsiveness), and anthropomorphism (the perception that technology can have human qualities).

The authors searched Google Scholar for articles on human trust in AI published between 1999 and 2019, identifying about 200 peer-reviewed articles and conference proceedings. Fields represented included organizational behavior, human-computer interaction, human-robot interaction, information systems, information technology, and engineering. They also used three databases to identify an additional 50 articles. In the end, they reviewed approximately 150 articles that presented empirical research on human trust in AI.

The authors found that the representation of AI played an important role in the nature of the cognitive trust people develop. For robotic AI, the trajectory for developing trust resembled that of trust in human relationships, starting low and increasing with experience. For virtual and embedded AI, the opposite occurred: high initial trust declined with experience.

The authors also found that the level of machine intelligence characterizing AI may moderate the development of cognitive trust, with a high level of intelligence leading to higher trust following use and experience. For robotic AI, a high level of machine intelligence generally led to faster development of a high level of trust. For virtual and embedded AI, high machine intelligence offered the possibility of maintaining the initial high levels of trust. Transparency was also an important factor for establishing cognitive trust in virtual and embedded AI, though the relationship between reliability and the development of trust in AI was complex.

Anthropomorphism was uniquely important for the development of emotional trust, but its effect differed depending on the form of AI. For virtual AI, anthropomorphism had a positive effect. For robotic AI, effects were mixed: People tended to like anthropomorphic robots more than mechanical-looking robots, but these human-like robots could also evoke discomfort and a sense of eeriness.

Factors that influenced emotional trust differed from those that influenced cognitive trust, and some factors may have had different implications for each, the authors concluded.

As a guide to integrating AI into organizations' work, the authors proposed a framework. It treats the form in which AI is represented, the level of machine intelligence, behaviors such as responsiveness, and reliability as factors that influence how people develop both cognitive and emotional trust in AI.

"Trust can predict the level of reliance on technology, while the level of correspondence between someone's trust and the capabilities of the technology, known as calibration, can influence how the technology is used," says Ella Glikson, Assistant Professor in the Graduate School of Business Administration at Bar Ilan University, who coauthored the study.
The research was funded by DARPA.

Carnegie Mellon University

