
Explanations of artificial intelligence: Author proposes model that highlights evidence of fairness

02.21.23 | Carnegie Mellon University


Artificial intelligence (AI) is used in a variety of ways, such as building new kinds of credit scores that go beyond the traditional FICO score. However, while these tools can powerfully and accurately predict outcomes, their internal operations are often difficult to explain and interpret. As a result, there is a growing demand in ethics and regulation for what is called explainable AI (xAI), especially in high-stakes domains.

In a new article, a professor at Carnegie Mellon University (CMU) suggests that explanations of AI are valuable to those affected by a model’s decisions if they can provide evidence that a past adverse decision was unfair. The article is published in Frontiers in Psychology for a special issue on AI in Business.

“Recently, legislators in the United States and the European Union have tried to pass laws regulating automated systems, including explainability,” says Derek Leben, Associate Teaching Professor of Ethics at the Tepper School of Business, who authored the article. “There are several existing laws that impose legal requirements for explainability, especially with respect to credit and lending, but they are often difficult to interpret when it comes to AI.”

In response to demands for explainability, researchers have produced a large set of xAI methods in a short period of time. These methods differ in the types of explanations they can generate, so Leben says we must now ask: Which types of explanations are important for an xAI method to produce?

In the article, Leben identifies three types of explanations.

While there has been much debate about what type of explanation is most important, Leben supports xAI methods that provide information about counterfactual changes to past states, based on what he calls the evidence of fairness view. In this view, individuals affected by a model’s decisions (model patients) can and should care about explainability as a means to an end, with the end being verification that a past decision treated them fairly.

Counterfactual explanations can provide people with evidence that a past decision was fair in two ways. The first is to demonstrate that a model would have produced a beneficial decision under alternative conditions that are under the model patient’s control (which the author calls positive evidence of fairness). The second is to show that a model would not have produced a beneficial decision when irrelevant behavioral or group attributes are altered (which Leben terms negative evidence of fairness).

Put another way, Leben suggests that xAI methods should be capable of demonstrating that a decision was counterfactually dependent on features that were under the applicant’s control (e.g., late payments) and not counterfactually dependent on features that are discriminatory (e.g., race and gender).
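The two counterfactual tests described above can be sketched in code. This is a minimal illustration, not the article's method: the toy credit model, the feature names (late_payments, income, gender), and the score thresholds are all invented for the example.

```python
# Toy sketch of counterfactual dependence tests for a hypothetical credit model.
# All feature names, weights, and thresholds here are invented for illustration.

def credit_decision(applicant):
    """Approve (True) if a score built only from controllable features clears a cutoff."""
    score = 700 - 40 * applicant["late_payments"] + 0.1 * applicant["income"]
    return score >= 650

def counterfactual_dependence(model, applicant, feature, new_value):
    """Does changing `feature` to `new_value` flip the model's decision?"""
    altered = dict(applicant, **{feature: new_value})
    return model(applicant) != model(altered)

applicant = {"late_payments": 3, "income": 400, "gender": "F"}

# Positive evidence of fairness: the decision depends on a feature
# under the applicant's control (fewer late payments would flip it).
depends_on_payments = counterfactual_dependence(
    credit_decision, applicant, "late_payments", 0)

# Negative evidence of fairness: the decision does not depend on an
# irrelevant group attribute (altering gender changes nothing).
depends_on_gender = counterfactual_dependence(
    credit_decision, applicant, "gender", "M")

print(depends_on_payments, depends_on_gender)  # True False
```

In this sketch, the first test supplies what the article calls positive evidence of fairness and the second supplies negative evidence; a real xAI method would perform analogous queries against an opaque model rather than a hand-written scoring function.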

Leben says his work has practical implications. Not only can these ideas inform legislative efforts and industry norms around explainability, but they can also be used in other domains. For example, engineers designing AI models and their associated xAI methods can use the evidence of fairness view to help evaluate them.

Journal: Frontiers in Psychology

DOI: 10.3389/fpsyg.2023.1069426

Article title: Explainable AI as evidence of fair decisions

Published: 14-Feb-2023


Contact Information

Caitlin Kizielewicz
Carnegie Mellon University
ckiz@andrew.cmu.edu
