
Researchers look to add statistical safeguards to data analysis and visualization software

May 19, 2017

PROVIDENCE, R.I. [Brown University] -- Modern data visualization software makes it easy for users to explore large datasets in search of interesting correlations and new discoveries. But that ease of use -- the ability to ask question after question of a dataset with just a few mouse clicks -- comes with a serious pitfall: it increases the likelihood of making false discoveries.

At issue is what statisticians refer to as "multiple hypothesis error." The problem is essentially this: the more questions someone asks of a dataset, the more likely they are to stumble upon something that looks like a real discovery but is actually just a random fluctuation in the data.
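To see how quickly that risk compounds, consider independent tests run at the conventional 0.05 significance level. The short sketch below is illustrative only (it is not part of QUDE); it computes the chance of at least one false positive as the number of tests grows.

```python
# Illustrative only: probability of at least one false positive when
# running n independent hypothesis tests at significance level alpha.
alpha = 0.05
for n_tests in (1, 10, 100):
    p_any_false_positive = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:3d} tests -> P(at least one false positive) = {p_any_false_positive:.2f}")
```

By 100 tests, the chance of at least one spurious "discovery" exceeds 99 percent, which is exactly the scenario Kraska describes below.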

A team of researchers from Brown University is working on software to help combat that problem. This week at the SIGMOD 2017 conference in Chicago, they presented a new system called QUDE, which adds real-time statistical safeguards to interactive data exploration systems to help reduce false discoveries.

"More and more people are using data exploration software like Tableau and Spark, but most of those users aren't experts in statistics or machine learning," said Tim Kraska, an assistant professor of computer science at Brown and a co-author of the research. "There are a lot of statistical mistakes you can make, so we're developing techniques that help people avoid them."

Multiple hypothesis testing error is a well-known issue in statistics. In the era of big data and interactive data exploration, the issue has taken on renewed prominence, Kraska says.

"These tools make it so easy to query data," he said. "You can easily test 100 hypotheses in an hour using these visualization tools. Without correcting for multiple hypothesis error, the chances are very good that you're going to come across a correlation that's completely bogus."

There are well-known statistical techniques for dealing with the problem. Most of those techniques involve adjusting the level of statistical significance required to validate a particular hypothesis based on how many hypotheses have been tested in total. As the number of hypothesis tests increases, the threshold for judging a finding valid becomes more stringent.
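The release doesn't name a specific correction, but the Bonferroni correction is a standard example of this kind of adjustment: the significance threshold is simply divided by the total number of tests. A minimal sketch, with illustrative numbers:

```python
def bonferroni_threshold(alpha: float, n_tests: int) -> float:
    """Classic after-the-fact adjustment: each hypothesis is judged
    against the stricter per-test threshold alpha / n_tests."""
    return alpha / n_tests

# A p-value of 0.01 clears the usual 0.05 bar when tested alone,
# but not when it is one of 20 hypotheses tested in the same session.
p_value = 0.01
print(p_value < bonferroni_threshold(0.05, 1))   # True
print(p_value < bonferroni_threshold(0.05, 20))  # False (threshold is 0.0025)
```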

But these correction techniques are nearly all after-the-fact adjustments: they're applied at the end of a research project, once all the hypothesis testing is complete, which is not ideal for real-time, interactive data exploration.

"We don't want to wait until the end of a session to tell people if their results are valid," said Eli Upfal, a computer science professor at Brown and research co-author. "We also don't want to have the system reverse itself by telling you at one point in a session that something is significant only to tell you later -- after you've tested more hypotheses -- that your early result isn't significant anymore."

Both of those scenarios are possible using the most common multiple hypothesis correction methods. So the researchers developed a different method for this project that enables them to monitor the risk of false discovery as hypothesis tests are ongoing.

"The idea is that you have a budget of how much false discovery risk you can take, and we update that budget in real time as a user interacts with the data," Upfal said. "We also take into account the ways in which user might explore the data. By understanding the sequence of their questions, we can adapt our algorithm and change the way we allocate the budget."

For users, the experience is similar to using any data visualization software, only with color-coded feedback that gives information about statistical significance.

"Green means that a visualization represents a finding that's significant," Kraska said. "If it's red, that means to be careful; this is on shaky statistical ground."

The system can't guarantee absolute accuracy, the researchers say. No system can. But in a series of user tests using synthetic data for which the real and bogus correlations had been ground-truthed, the researchers showed that the system did indeed reduce the number of false discoveries users made.

The researchers consider this work a step toward a data exploration and visualization system that fully integrates a suite of statistical safeguards.

"Our goal is to make data science more accessible to a broader range of users," Kraska said. "Tackling the multiple hypothesis problem is going to be important, but it's also very difficult to do. We see this paper as a good first step."
-end-

