
New method peeks inside the 'black box' of artificial intelligence

October 31, 2018

Artificial intelligence--specifically, machine learning--is a part of daily life for computer and smartphone users. From autocorrecting typos to recommending new music, machine learning algorithms can help make life easier. They can also make mistakes.

It can be challenging for computer scientists to figure out what went wrong in such cases. This is because many machine learning algorithms learn from information and make their predictions inside a virtual "black box," leaving few clues for researchers to follow.

A group of computer scientists at the University of Maryland has developed a promising new approach for interpreting machine learning algorithms. Unlike previous efforts, which typically sought to "break" the algorithms by removing key words from inputs to yield the wrong answer, the UMD group instead reduced the inputs to the bare minimum required to yield the correct answer. On average, the researchers got the correct answer with an input of less than three words.

In some cases, the researchers' model algorithms provided the correct answer based on a single word. Frequently, the input word or phrase appeared to have little obvious connection to the answer, revealing important insights into how some algorithms react to specific language. Because many algorithms are programmed to give an answer no matter what--even when prompted by a nonsensical input--the results could help computer scientists build more effective algorithms that can recognize their own limitations.

The researchers will present their work on November 4, 2018 at the 2018 Conference on Empirical Methods in Natural Language Processing.

"Black-box models do seem to work better than simpler models, such as decision trees, but even the people who wrote the initial code can't tell exactly what is happening," said Jordan Boyd-Graber, the senior author of the study and an associate professor of computer science at UMD. "When these models return incorrect or nonsensical answers, it's tough to figure out why. So instead, we tried to find the minimal input that would yield the correct result. The average input was about three words, but we could get it down to a single word in some cases."

In one example, the researchers entered a photo of a sunflower and the text-based question, "What color is the flower?" as inputs into a model algorithm. These inputs yielded the correct answer of "yellow." After rephrasing the question into several different shorter combinations of words, the researchers found that they could get the same answer with "flower?" as the only text input for the algorithm.

In another, more complex example, the researchers used the prompt, "In 1899, John Jacob Astor IV invested $100,000 for Tesla to further develop and produce a new lighting system. Instead, Tesla used the money to fund his Colorado Springs experiments."

They then asked the algorithm, "What did Tesla spend Astor's money on?" and received the correct answer, "Colorado Springs experiments." Reducing this input to the single word "did" yielded the same correct answer.
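The reduction procedure described above can be sketched as a simple greedy loop: repeatedly try deleting a word and keep the deletion whenever the model's answer does not change. The sketch below is an illustration of that idea only, not the researchers' code; the `predict` function and the toy model standing in for a trained system are hypothetical.

```python
def reduce_input(words, predict):
    """Greedily remove words while the model's answer stays the same.

    `predict` is assumed to map a list of words to (answer, confidence).
    """
    original_answer, _ = predict(words)
    reduced = list(words)
    changed = True
    while changed and len(reduced) > 1:
        changed = False
        # Try deleting each word in turn; keep the first deletion
        # that leaves the model's answer unchanged.
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            answer, _ = predict(candidate)
            if answer == original_answer:
                reduced = candidate
                changed = True
                break
    return reduced

# Toy stand-in model: answers "yellow" whenever the input mentions "flower?".
def toy_predict(words):
    if "flower?" in words:
        return "yellow", 0.9
    return "unknown", 0.5

print(reduce_input("What color is the flower?".split(), toy_predict))
```

Run against the toy model, the sunflower question collapses to the single word "flower?", mirroring the example above: the model never needed the rest of the question to produce its answer.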

The work reveals important insights about the rules that machine learning algorithms apply to problem solving. Many real-world issues with algorithms result when an input that makes sense to humans results in a nonsensical answer. By showing that the opposite is also possible--that nonsensical inputs can yield correct, sensible answers--Boyd-Graber and his colleagues demonstrate the need for algorithms that can recognize when they are answering a nonsensical question with unwarranted confidence.

"The bottom line is that all this fancy machine learning stuff can actually be pretty stupid," said Boyd-Graber, who also has co-appointments at the University of Maryland Institute for Advanced Computer Studies (UMIACS) as well as UMD's College of Information Studies and Language Science Center. "When computer scientists train these models, we typically only show them real questions or real sentences. We don't show them nonsensical phrases or single words. The models don't know that they should be confused by these examples."

Most algorithms will force themselves to provide an answer, even with insufficient or conflicting data, according to Boyd-Graber. This could be at the heart of some of the incorrect or nonsensical outputs generated by machine learning algorithms--in model algorithms used for research, as well as real-world algorithms that help us by flagging spam email or offering alternate driving directions. Understanding more about these errors could help computer scientists find solutions and build more reliable algorithms.

"We show that models can be trained to know that they should be confused," Boyd-Graber said. "Then they can just come right out and say, 'You've shown me something I can't understand.'"
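One simple way to make a model "come right out and say" it is confused is to look at how spread out its output probabilities are and abstain when uncertainty is high. The sketch below illustrates that idea with a Shannon-entropy threshold; it is a hypothetical illustration, not the training-based approach the researchers use, and the function names and threshold are assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def answer_or_abstain(probs, answers, max_entropy=1.0):
    """Return the top-scoring answer, or abstain when entropy is too high."""
    if entropy(probs) > max_entropy:
        return "You've shown me something I can't understand."
    return answers[probs.index(max(probs))]

# Confident distribution: low entropy, so the model answers.
print(answer_or_abstain([0.9, 0.05, 0.05], ["yellow", "red", "blue"]))
# Near-uniform distribution: high entropy, so the model abstains.
print(answer_or_abstain([0.4, 0.3, 0.3], ["yellow", "red", "blue"]))
```

A fixed threshold like this only filters outputs after the fact; the point of the researchers' work is that models can instead be trained so that nonsensical inputs produce high-uncertainty outputs in the first place.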
-end-
In addition to Boyd-Graber, UMD-affiliated researchers involved with this work include undergraduate researcher Eric Wallace; graduate students Shi Feng and Pedro Rodriguez; and former graduate student Mohit Iyyer (M.S. '14, Ph.D. '17, computer science).

The research paper, "Pathologies of Neural Models Make Interpretation Difficult," by Shi Feng, Eric Wallace, Alvin Grissom II, Pedro Rodriguez, Mohit Iyyer, and Jordan Boyd-Graber, will be presented at the 2018 Conference on Empirical Methods in Natural Language Processing on November 4, 2018.

This work was supported by the Defense Advanced Research Projects Agency (Award No. HR0011-15-C-011) and the National Science Foundation (Award No. IIS1652666). The content of this article does not necessarily reflect the views of these organizations.

Media Relations Contact: Matthew Wright, 301-405-9267, mewright@umd.edu

University of Maryland
College of Computer, Mathematical, and Natural Sciences
2300 Symons Hall
College Park, MD 20742
http://www.cmns.umd.edu
@UMDscience

About the College of Computer, Mathematical, and Natural Sciences

The College of Computer, Mathematical, and Natural Sciences at the University of Maryland educates more than 9,000 future scientific leaders in its undergraduate and graduate programs each year. The college's 10 departments and more than a dozen interdisciplinary research centers foster scientific discovery with annual sponsored research funding exceeding $175 million.
