How accurate is your AI?

March 14, 2018

Kyoto, Japan -- As AI's role in society continues to expand, J B Brown of the Graduate School of Medicine reports on a new evaluation method for the type of AI that predicts yes/positive/true or no/negative/false answers.

Brown's paper, published in Molecular Informatics, deconstructs the utilization of AI and analyzes the nature of the statistics used to report an AI program's ability. The new technique also generates a probability of the performance level given evaluation data, answering questions such as: What is the probability of achieving accuracy greater than 90%?

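The kind of question the method answers can be illustrated with a generic Bayesian sketch (this is not the paper's own procedure, which is detailed in Molecular Informatics): given a hypothetical classifier that answers 92 of 100 evaluation questions correctly, a Beta posterior over the true accuracy yields the probability that accuracy exceeds 90%. The 92/100 figures and the uniform prior are assumptions for illustration only.

```python
import random

# Hypothetical evaluation result: 92 correct answers out of 100 trials.
# With a uniform Beta(1, 1) prior, the posterior over the true accuracy
# is Beta(1 + correct, 1 + incorrect). This is a generic Bayesian
# illustration, not the procedure from Brown's paper.
correct, trials = 92, 100

random.seed(0)
samples = [random.betavariate(1 + correct, 1 + trials - correct)
           for _ in range(100_000)]

# Monte Carlo estimate of P(true accuracy > 90%)
p_above_90 = sum(s > 0.90 for s in samples) / len(samples)
print(f"P(true accuracy > 90%) = {p_above_90:.2f}")
```

A point estimate of "92% accuracy" hides this uncertainty; the posterior makes explicit how confident one can be that the program clears a chosen performance bar.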

Reports of new AI applications appear in the news almost daily, spanning science, finance, pharmaceuticals, medicine, and security.

"While reported statistics seem impressive, research teams and those evaluating the results come across two problems," explains Brown. "First, to understand if the AI achieved its results by chance, and second, to interpret applicability from the reported performance statistics."

For example, an AI program built to predict whether someone will win the lottery could simply predict a loss every time. Because almost nobody wins, the program would achieve '99% accuracy' while never identifying a single winner; interpretation is needed to decide whether the conclusion "the program is accurate" is itself accurate.
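The lottery example can be reproduced in a few lines. The data below are invented for illustration: one winner among 100 players, and a model that always predicts a loss.

```python
# Illustrative only: an "always predict a loss" classifier on a
# lottery-like dataset with 1 winner among 100 players.
labels = [1] + [0] * 99        # 1 = wins, 0 = loses
predictions = [0] * 100        # the model always predicts "loses"

correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)
print(f"accuracy = {accuracy:.0%}")  # 99% despite never finding the winner
```

The headline number is driven almost entirely by the abundance of negative cases, which is exactly the imbalance problem the article goes on to describe.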

But herein lies the problem: in typical AI development, the evaluation can be trusted only if the positive and negative results are equally balanced. If the data is skewed toward either outcome, the current system of evaluation exaggerates the program's ability.

So to tackle this problem, Brown developed a new technique that evaluates performance based only on the input data itself.

"The novelty of this technique is that it doesn't depend on any one type of AI technology, such as deep learning," Brown describes. "It can help develop new evaluation metrics by looking at how a metric interplays with the balance in predicted data. We can then tell if the resulting metrics could be biased."

Brown hopes this analysis will not only raise awareness of how we think about AI in the future, but also contribute to the development of more robust AI platforms.

In addition to the accuracy metric, Brown tested six other metrics in both theoretical and applied scenarios, finding that no single metric was universally superior. He says the key to building useful AI platforms is to take a multi-metric view of evaluation.
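A multi-metric view can be sketched in a few lines. The confusion-matrix counts below are invented for illustration, and the metrics shown (balanced accuracy and the Matthews correlation coefficient) are common complements to accuracy rather than necessarily the specific metrics tested in the paper.

```python
import math

# A hypothetical confusion matrix for an imbalanced test set:
# 990 negatives and 10 positives, of which the model finds only 2.
tp, fn = 2, 8
tn, fp = 985, 5

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)               # recall on the rare positives
specificity = tn / (tn + fp)               # recall on the common negatives
balanced    = (sensitivity + specificity) / 2
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"accuracy          = {accuracy:.3f}")   # looks excellent
print(f"balanced accuracy = {balanced:.3f}")   # exposes weak positive recall
print(f"MCC               = {mcc:.3f}")
```

Here accuracy alone paints a rosy picture, while the balanced accuracy and MCC reveal that the model barely detects the minority class, which is the kind of disagreement between metrics that motivates a multi-metric evaluation.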

"AI can assist us in understanding many phenomena in the world, but for it to properly provide us direction, we must know how to ask the right questions. We must be careful not to overly focus on a single number as a measure of an AI's reliability."

Brown's program is freely available to the general public, researchers, and developers.
-end-
The paper "Classifiers and their metrics quantified" appeared 23 January 2018 in the journal Molecular Informatics, with doi: 10.1002/minf.201700127

About Kyoto University

Kyoto University is one of Japan and Asia's premier research institutions, founded in 1897 and responsible for producing numerous Nobel laureates and winners of other prestigious international prizes. A broad curriculum across the arts and sciences at both undergraduate and graduate levels is complemented by numerous research centers, as well as facilities and offices around Japan and the world. For more information please see: http://www.kyoto-u.ac.jp/en
