New tool uses AI to flag fake news for media fact-checkers

December 16, 2019

A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories.

The tool, developed by researchers at the University of Waterloo, uses deep-learning AI algorithms to determine if claims made in posts or stories are supported by other posts and stories on the same subject.

"If they are, great, it's probably a real story," said Alexander Wong, a professor of systems design engineering at Waterloo. "But if most of the other material isn't supportive, it's a strong indication you're dealing with fake news."

Researchers were motivated to develop the tool by the proliferation of online posts and news stories that are fabricated to deceive or mislead readers, typically for political or economic gain.

Their system advances ongoing efforts to develop fully automated technology capable of detecting fake news by achieving 90 per cent accuracy in a key area of research known as stance detection.

Given a claim in one post or story, along with other posts and stories on the same subject collected for comparison, the system can correctly determine whether they support it nine times out of 10.
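The screening idea described above can be sketched as a simple aggregation step: collect a stance label for each related article, then flag the claim when most of the coverage does not support it. This is a minimal illustration, not the researchers' actual pipeline; the label names ("agree", "disagree", "discuss") follow the Fake News Challenge convention, and the threshold is an assumed parameter.

```python
from collections import Counter

def screen_claim(stances, support_threshold=0.5):
    """Flag a claim for human review when most related coverage
    does not support it.

    `stances` is a list of stance labels, one per related article,
    such as "agree", "disagree", or "discuss" (names follow the
    Fake News Challenge). Toy sketch only -- the real system
    predicts these labels with a deep-learning model.
    """
    if not stances:
        return "insufficient-evidence"
    counts = Counter(stances)
    support_ratio = counts["agree"] / len(stances)
    # Majority of related coverage supports the claim -> probably real;
    # otherwise hand it to a human fact-checker.
    return "likely-real" if support_ratio > support_threshold else "flag-for-review"
```

Used as a screening aid, the function never issues a final verdict on borderline or unsupported claims; it only routes them to a person, matching the tool's stated role of augmenting rather than replacing fact-checkers.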

That sets a new accuracy benchmark on a large dataset created for the Fake News Challenge, a 2017 scientific competition.

While scientists around the world continue to work towards a fully automated system, the Waterloo technology could be used as a screening tool by human fact-checkers at social media and news organizations.

"It augments their capabilities and flags information that doesn't look quite right for verification," said Wong, a founding member of the Waterloo Artificial Intelligence Institute. "It isn't designed to replace people, but to help them fact-check faster and more reliably."

AI algorithms at the heart of the system were shown tens of thousands of claims paired with stories that either supported or didn't support them. Over time, the system learned to determine support or non-support itself when shown new claim-story pairs.
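The system learns from claim-story pairs, each labelled as supporting or not supporting. As a rough illustration of pair-based prediction, the toy stand-in below scores a pair by lexical token overlap; the actual system instead feeds both texts through a deep bidirectional transformer language model, which (unlike word overlap) can handle negation and paraphrase. All function names and the threshold here are illustrative assumptions.

```python
def token_overlap(claim, story):
    """Jaccard overlap between the word sets of a claim and a story."""
    a, b = set(claim.lower().split()), set(story.lower().split())
    return len(a & b) / len(a | b)

def predict_support(claim, story, threshold=0.2):
    """Toy stand-in for claim-story stance prediction.

    The real system fine-tunes a bidirectional transformer on tens of
    thousands of labelled claim-story pairs; raw word overlap shown
    here ignores negation, which is exactly why learned models are
    needed for this task.
    """
    return "support" if token_overlap(claim, story) >= threshold else "non-support"
```

A quick usage example: `predict_support("the vaccine is safe", "new trial data confirm the vaccine is safe for adults")` returns "support", while an unrelated sports story yields "non-support". The failure mode of this baseline (a negated story still overlaps heavily with the claim) motivates the transformer-based approach the paper takes.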

"We need to empower journalists to uncover truth and keep us informed," said Chris Dulhanty, a graduate student who led the project. "This represents one effort in a larger body of work to mitigate the spread of disinformation."
Graduate students Jason Deglint and Ibrahim Ben Daya also collaborated with Dulhanty and Wong, a Canada Research Chair in Artificial Intelligence and Medical Imaging.

A paper on their work, Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection, was presented this month at the Conference on Neural Information Processing Systems in Vancouver.

University of Waterloo
