Carnegie Mellon transparency reports make AI decision-making accountable

May 25, 2016

PITTSBURGH--Machine-learning algorithms increasingly make decisions about credit, medical diagnoses, personalized recommendations, advertising and job opportunities, among other things, but exactly how they arrive at those decisions usually remains a mystery. Now, new measurement methods developed by Carnegie Mellon University researchers could provide important insights into this process.

Was it a person's age, gender or education level that had the most influence on a decision? Was it a particular combination of factors? CMU's Quantitative Input Influence (QII) measures can provide the relative weight of each factor in the final decision, said Anupam Datta, associate professor of computer science and electrical and computer engineering.

"Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realize the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms," Datta said.

"Some companies are already beginning to provide transparency reports, but work on the computational foundations for these reports has been limited," he continued. "Our goal was to develop measures of the degree of influence of each factor considered by a system, which could be used to generate transparency reports."

These reports might be generated in response to a particular incident -- why an individual's loan application was rejected, why police targeted an individual for scrutiny, or what prompted a particular medical diagnosis or treatment. Or they might be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system inappropriately discriminated between groups of people.

Datta, along with Shayak Sen, a Ph.D. student in computer science, and Yair Zick, a post-doctoral researcher in the Computer Science Department, will present their report on QII at the IEEE Symposium on Security and Privacy, May 23-25, in San Jose, Calif.

Generating these QII measures requires access to the system, but doesn't necessitate analyzing the code or other inner workings of the system, Datta said. It also requires some knowledge of the input dataset that was initially used to train the machine-learning system.

A distinctive feature of QII measures is that they can explain decisions of a large class of existing machine-learning systems. A significant body of prior work takes a complementary approach, redesigning machine-learning systems to make their decisions more interpretable and sometimes losing prediction accuracy in the process.

QII measures carefully account for correlated inputs while measuring influence. For example, consider a system that assists in hiring decisions for a moving company. Two inputs, gender and the ability to lift heavy weights, are positively correlated with each other and with hiring decisions. Yet transparency into whether the system uses weight-lifting ability or gender in making its decisions has substantive implications for determining if it is engaging in discrimination.

"That's why we incorporate ideas for causal measurement in defining QII," Sen said. "Roughly, to measure the influence of gender for a specific individual in the example above, we keep the weight-lifting ability fixed, vary gender and check whether there is a difference in the decision."
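The intervention Sen describes can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: the function name `unary_qii`, the hiring model and the dataset are all hypothetical, and the estimate simply counts how often the decision flips when one input is resampled from the data while the others stay fixed.

```python
import random

def unary_qii(model, dataset, individual, feature, n_samples=1000):
    """Estimate the causal influence of one input on the model's decision
    for a specific individual: hold every other input fixed, replace the
    chosen feature with values drawn from the dataset, and report the
    fraction of samples on which the decision changes."""
    base = model(individual)
    changed = 0
    for _ in range(n_samples):
        intervened = dict(individual)
        # Intervene: resample this feature from its marginal distribution.
        intervened[feature] = random.choice(dataset)[feature]
        if model(intervened) != base:
            changed += 1
    return changed / n_samples

# Toy version of the moving-company example: the model looks only at
# weight-lifting ability, even though gender is correlated with it.
dataset = [{"gender": g, "lifting": l}
           for g in ("M", "F") for l in (20, 40, 60, 80)]
model = lambda x: x["lifting"] >= 50
person = {"gender": "F", "lifting": 60}
```

For this model, `unary_qii(model, dataset, person, "gender")` is 0, exposing that gender has no causal influence even though it is correlated with the outcome, while the same measure for `"lifting"` is positive.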

Observing that single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs, such as age and income, on outcomes and the marginal influence of each input within the set. Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled game-theoretic aggregation measures previously applied to measure influence in revenue division and voting.
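The game-theoretic aggregation the researchers describe is, in one of its forms, the Shapley value: each input is credited with the influence it adds, averaged over every order in which the inputs could be considered. The sketch below assumes a caller-supplied `set_influence` function that scores the joint influence of any set of inputs; the function name and the toy scores are illustrative, not taken from the paper.

```python
from itertools import permutations
from math import factorial

def shapley_influence(features, set_influence):
    """Average marginal influence of each feature (Shapley value):
    for every ordering of the features, credit each one with the
    influence it adds when joined to those that came before it."""
    values = {f: 0.0 for f in features}
    for order in permutations(features):
        seen = frozenset()
        for f in order:
            values[f] += set_influence(seen | {f}) - set_influence(seen)
            seen = seen | {f}
    n_orders = factorial(len(features))
    return {f: v / n_orders for f, v in values.items()}

# Two redundant inputs: either one alone fully determines the outcome,
# so each earns half the credit on average across orderings.
influence = shapley_influence(["age", "income"],
                              lambda s: 1.0 if s else 0.0)
```

A convenient property of this aggregation is that the per-feature values always sum to the joint influence of the full set, so the "credit" handed out is exactly the influence being explained.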

"To get a sense of these influence measures, consider the U.S. presidential election," Zick said. "California and Texas have influence because they have many voters, whereas Pennsylvania and Ohio have power because they are often swing states. The influence aggregation measures we employ account for both kinds of power."

The researchers tested their approach against some standard machine-learning algorithms that they used to train decision-making systems on real data sets. They found that the QII provided better explanations than standard associative measures for a host of scenarios they considered, including sample applications for predictive policing and income prediction.

Now, they are seeking collaboration with industrial partners so that they can employ QII at scale on operational machine-learning systems.
-end-
About Carnegie Mellon University: Carnegie Mellon is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university's seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.
