In politics and pandemics, trolls use fear, anger to drive clicks

March 26, 2020

Facebook users flipping through their feeds in the fall of 2016 faced a minefield of targeted advertisements pitting blacks against police, southern whites against immigrants, gun owners against Obama supporters and the LGBTQ community against the conservative right.

Placed by distant Russian trolls, the ads didn't aim to prop up one candidate or cause, but to turn Americans against one another.

The ads were cheaply made and full of threatening, vulgar language.

And, according to a sweeping new analysis of more than 2,500 of the ads, they were remarkably effective, eliciting clickthrough rates as much as nine times higher than what is typical in digital advertising.

"We found that fear and anger appeals work really well in getting people to engage," said lead author Chris Vargo, an assistant professor of Advertising, Public Relations and Media Design at University of Colorado Boulder.

The study, published this week in Journalism & Mass Communication Quarterly, is the first to take a comprehensive look at ads placed by the infamous Russian propaganda machine known as the Internet Research Agency (IRA) and ask: How effective were they? And what makes people click on them?

While focused on ads running in 2016, the study's findings resonate in the age of COVID-19 and the run-up to the 2020 election, the authors say.

"As consumers continue to see ads that contain false claims and are intentionally designed to use their emotions to manipulate them, it's important for them to have cool heads and understand the motives behind them," said Vargo.

For the study, Vargo and assistant professor of advertising Toby Hopp scoured 2,517 Facebook and Instagram ads downloaded from the website of the U.S. House of Representatives Permanent Select Committee on Intelligence. The committee made the ads publicly available in 2018 after concluding that the IRA had been creating fake U.S. personas, setting up fake social media pages, and using targeted paid advertising to "sow discord" among U.S. residents.

Using computational tools and manual coding, Vargo and Hopp analyzed every ad, looking for inflammatory, obscene or threatening words, as well as language hostile to a particular group's ethnic, religious or sexual identity. They also looked at which groups each ad targeted, how many clicks it got, and how much the IRA paid.
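
The paper's coding instruments aren't reproduced in this article, but the flavor of the computational pass can be sketched: flag each ad whose text matches a dictionary of inflammatory or threatening terms. The snippet below is a minimal, hypothetical illustration in Python; the term lists and function name are assumptions, not the study's actual tools, and the real analysis also relied on manual coding.

```python
# Minimal sketch of dictionary-based content coding, in the spirit of the
# computational analysis described above. The term lists are illustrative
# placeholders; the study's actual dictionaries are not published here.

INFLAMMATORY = {"sissy", "idiot", "psychopath", "terrorist"}  # examples quoted in the article
THREAT_CUES = {"kill", "killed", "attack", "destroy"}         # assumed threat vocabulary

def code_ad(text: str) -> dict:
    """Return simple binary codes for one ad's text."""
    words = {w.strip('.,!?"').lower() for w in text.split()}
    return {
        "inflammatory": bool(words & INFLAMMATORY),
        "threat": bool(words & THREAT_CUES),
    }

sample = "They killed an unarmed guy again! We MUST make the cops stop!"
print(code_ad(sample))  # {'inflammatory': False, 'threat': True}
```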

Collectively, the IRA spent about $75,000 to generate about 40.5 million impressions, with about 3.7 million users clicking on them, for a clickthrough rate of 9.2%.

That compares with between 0.9% and 1.8% for a typical digital ad.
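
The headline numbers reduce to two simple ratios: clicks per impression and dollars per click. The short calculation below simply recomputes them from the rounded totals quoted above (an illustrative check, not part of the study's code); the small gap from the reported 9.2% comes from that rounding.

```python
# Recompute engagement figures from the rounded totals quoted in the article.
spend = 75_000            # approximate total IRA ad spend, in dollars
impressions = 40_500_000  # approximate impressions generated
clicks = 3_700_000        # approximate users who clicked

ctr = clicks / impressions        # clickthrough rate
cost_per_click = spend / clicks   # dollars spent per click

print(f"CTR: {ctr:.1%}")                         # ~9.1% (the paper reports 9.2%)
print(f"Cost per click: ${cost_per_click:.3f}")  # ~$0.020, i.e. about two cents
```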

While ads using blatantly racist language didn't perform well, those using swear words and inflammatory terms (like "sissy," "idiot," "psychopath" and "terrorist") or conveying a potential threat did. Ads that evoked fear and anger did the best.

One IRA advertisement targeting users with an interest in the Black Lives Matter movement stated: "They killed an unarmed guy again! We MUST make the cops stop thinking that they are above the law!" Another shouted: "White supremacists are planning to raise the racist flag again!" Meanwhile, ads targeting people who sympathized with white conservative groups read "Take care of our vets; not illegals" or joked "If you voted for Obama: We don't want your business because you are too stupid to own a firearm."

Only 110 out of 2,000 ads mentioned Donald Trump.

"This wasn't about electing one candidate or another," said Vargo. "It was essentially a make-Americans-hate-each-other campaign."

The ads were often unsophisticated, with spelling or grammatical errors and poorly photoshopped images. Yet at a cost of only a few cents per click, the IRA got an impressive return on its investment.

"I was shocked at how effective these appeals were," said Vargo.

The authors warn that such troll farms are undoubtedly still at it.

According to some news reports, Russian trolls are already engaged in disinformation campaigns around COVID-19.

"I think with any major story, you are going to see this kind of disinformation circulated," said Hopp. "There are bad actors out there who have goals that are counter to the aspirational goals of American democracy, and there are plenty of opportunities for them to take advantage of the current structure of social media."

Ultimately, the authors believe better monitoring, via both machine algorithms and human reviewers, could help stem the tide of disinformation.

"We as a society need to start seriously talking about what role the platforms and government should play in times like the 2020 election or during COVID-19 when we have a compelling need for high-quality, accurate information to be distributed," said Hopp.
-end-


University of Colorado at Boulder
