Fake news feels less immoral to share when we've seen it before

December 03, 2019

People who repeatedly encounter a fake news item may come to see sharing it on social media as less and less unethical, even when they don't believe the information, research indicates.

In a series of experiments involving more than 2,500 people, Daniel A. Effron, a London Business School associate professor of organizational behavior, and Medha Raj, a PhD student at the University of Southern California, found that seeing a fake headline just once leads individuals to temper their disapproval of the misinformation when they see it a second, third, or fourth time.

The findings, published in Psychological Science, have important implications for policymakers and social media companies trying to curb the spread of misinformation online, Effron says.

"We suggest that efforts to fight misinformation should consider how people judge the morality of spreading it, not just whether they believe it," he says.

Across five experiments, Effron and Raj asked online survey participants to rate how unethical or acceptable they thought it would be to publish a fake headline, how likely they would be to "like" or share it, and how likely they would be to block or unfollow the person who posted it.

As they expected, the researchers found that participants rated headlines they had seen more than once as less unethical to publish than headlines they were shown for the first time. Participants also said they were more likely to "like" and share a previously seen headline and less likely to block or unfollow the person who posted it. What's more, they did not rate previously seen headlines as significantly more accurate than new ones.

"Thus, our main results cannot be explained by a tendency to misremember false headlines as true," the researchers write.

Effron and Raj note that efforts to curtail misinformation typically focus on helping people distinguish fact from fiction. Facebook, for example, has tried informing users when they try to share news that fact-checkers have flagged as false. But such strategies may fail if users feel more comfortable sharing misinformation they know is fake when they have seen it before.

The researchers theorize that repeating misinformation lends it a "ring of truthfulness" that can increase people's tendency to give it a moral pass, regardless of whether they believe it. Merely imagining misinformation as if it were true can have a similar effect. Effron's earlier research shows that people are more likely to excuse a blatant falsehood after imagining how it could have been true if the past had been different.

"The results should be of interest to citizens of contemporary democracies," Effron adds. "Misinformation can stoke political polarization and undermine democracy, so it is important for people to understand when and why it spreads."
-end-
All materials for this research have been made publicly available via the Open Science Framework. This article has received badges for Open Materials and Pre-registration.

For more information about this study, please contact Daniel Effron at deffron@london.edu.

For a copy of the article "Misinformation and Morality: Encountering Fake-News Headlines Makes Them Seem Less Unethical to Publish and Share" and access to other research findings published in Psychological Science, please contact Scott Sleek at 202-293-9300 or ssleek@psychologicalscience.org.

Association for Psychological Science
