'Fake news' isn't easy to spot on Facebook, according to new study

November 05, 2019

AUSTIN, Texas -- With the presidential election season moving into high gear, campaign messaging will soon increase dramatically. But for those of us who get our news from social media, a new study from the McCombs School of Business at The University of Texas at Austin offers a strong warning: You can't trust yourself to discern what's true and what's not when you're on Facebook.

In the study, participants fitted with a wireless electroencephalography headset were asked to read political news headlines presented as they would appear in a Facebook feed and determine their credibility. They assessed only 44% correctly, overwhelmingly selecting headlines that aligned with their own political beliefs as true. The EEG headsets tracked their brain activity during the exercise.

"We all believe that we are better than the average person at detecting fake news, but that's simply not possible," said lead author Patricia Moravec, assistant professor of information, risk and operations management. "The environment of social media and our own biases make us all much worse than we think."

Moravec, along with Randall K. Minas of the University of Hawaii at Manoa and Alan R. Dennis of Indiana University, authored the study, "Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense at All," published today in Management Information Systems Quarterly.

The researchers worked with 80 social media-proficient undergraduate students who first answered 10 questions about their own political beliefs. Each participant was then fitted with an EEG headset. The students were asked to read 50 political news headlines presented as they would appear in a Facebook feed and assess their credibility. Forty of the headlines were evenly divided between true and false, and 10 headlines that were clearly true were included as controls. Examples included "Trump Signs New Executive Order on Immigration" (clearly true), "Nominee to Lead EPA Testifies He'll Enforce Environmental Laws" (true) and "Russian Spies Present at Trump's Inauguration -- Seated on Inauguration Platform" (false).

In late 2016, Facebook incorporated fact-checking into its platform and began flagging certain news articles as "disputed by third-party fact checkers." To test the effect of such flags, the researchers randomly assigned fake news flags among the 40 noncontrol headlines. The students then rated each headline's believability, credibility and truthfulness.
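For readers who want a concrete picture of the setup, here is a minimal, purely illustrative Python sketch of how a stimulus set like the one described above could be assembled and scored. Nothing in it comes from the study itself: the placeholder headline texts, the 50% flagging rate and the accuracy() helper are assumptions made for illustration.

```python
import random

# Illustrative sketch only -- not the study's materials or code.
# It mirrors the design described above: 40 experimental headlines split
# evenly between true and false, plus 10 clearly true control headlines,
# with "disputed" flags randomly assigned among the 40 noncontrol items.

# Placeholder headline texts; the real stimuli were actual political headlines.
experimental = (
    [{"text": f"True headline {i}", "truth": True} for i in range(20)]
    + [{"text": f"False headline {i}", "truth": False} for i in range(20)]
)
controls = [{"text": f"Clearly true control {i}", "truth": True} for i in range(10)]

# Randomly flag a subset of the noncontrol headlines as
# "disputed by third-party fact checkers." Flagging half of them is an
# assumption for illustration; the article does not give the proportion.
flagged_indices = set(random.sample(range(len(experimental)), k=len(experimental) // 2))
for i, item in enumerate(experimental):
    item["flagged"] = i in flagged_indices
for item in controls:
    item["flagged"] = False  # controls carry no flag

# Present all 50 headlines in a shuffled, feed-like order.
headlines = experimental + controls
random.shuffle(headlines)

# Accuracy is the share of headlines whose truth value a participant
# judged correctly (the study reports roughly 44% on average).
def accuracy(judgments, items):
    correct = sum(j == item["truth"] for j, item in zip(judgments, items))
    return correct / len(items)
```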

As they worked through the exercise, the participants spent more time and showed significantly more activity in their frontal cortices -- the brain area associated with arousal, memory access and consciousness -- when headlines supported their beliefs but were flagged as false. These reactions of discomfort indicated cognitive dissonance when headlines supporting their beliefs were marked as untrue.

But this dissonance was not enough to make participants change their minds. They overwhelmingly said that headlines conforming with their preexisting beliefs were true, regardless of whether they were flagged as potentially fake. The flag did not change their initial response to the headline, even if it did make them pause a moment longer and study it a bit more carefully.

Political affiliation made no difference in their ability to determine what was true or false. "People's self-reported identity as Democrat or Republican didn't influence their ability to detect fake news," Moravec said. "And it didn't determine how skeptical they were about what's news and what's not."

The Facebook environment, it would seem, muddies the waters between fact and fiction. Unlike when we are at work or focused on a goal, on social media we slip into a passive, pleasure-seeking mindset that makes it hard to think critically.

"When we're on social media, we're passively pursuing pleasure and entertainment," Moravec said. "We're avoiding something else."

The experiment showed that social media users are highly subject to confirmation bias, the unintentional tendency to gravitate toward and process information that is consistent with existing beliefs, she said. This can result in decision-making that ignores information that is inconsistent with those beliefs.

"The fact that social media perpetuates and feeds this bias complicates people's ability to make evidence-based decisions," she said. "But if the facts that you do have are polluted by fake news that you truly believe, then the decisions you make are going to be much worse."

For more details about this research, read the McCombs Big Ideas feature story or view the video interview with the lead author.

University of Texas at Austin
