Study: Twitter bots had 'disproportionate' role spreading misinformation in 2016 election

November 20, 2018

BLOOMINGTON, Ind. -- An analysis of information shared on Twitter during the 2016 U.S. presidential election has found that automated accounts -- or "bots" -- played a disproportionate role in spreading misinformation online.

The study, conducted by Indiana University researchers and published Nov. 20 in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017, a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017.

Among the findings: A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the "low-credibility" information on the network. These accounts were also responsible for 34 percent of all articles shared from "low-credibility" sources.
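For readers curious how shares like these are derived, the sketch below shows one way to compute them from a table of shared links. It is not the authors' code; the file name, the user_id, bot_score and credibility columns, and the 0.5 bot-score cutoff are all illustrative assumptions.

```python
import pandas as pd

# Hypothetical dataset: one row per link shared on Twitter.
tweets = pd.read_csv("tweets.csv")

# Flag likely bots by their highest observed bot score (cutoff is an assumption).
accounts = tweets.groupby("user_id")["bot_score"].max()
likely_bots = set(accounts[accounts >= 0.5].index)

# Share of accounts flagged as bots, and share of low-credibility links they posted.
low_cred = tweets[tweets["credibility"] == "low"]
share_of_accounts = len(likely_bots) / accounts.size
share_of_low_cred_links = low_cred["user_id"].isin(likely_bots).mean()

print(f"{share_of_accounts:.0%} of accounts flagged as likely bots")
print(f"{share_of_low_cred_links:.0%} of low-credibility links posted by those accounts")
```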

The study also found that bots play a major role in promoting low-credibility content in the first few moments before a story goes viral.

The brief length of this time -- 2 to 10 seconds -- highlights the challenges of countering the spread of misinformation online. Similar issues are seen in other complex environments like the stock market, where serious problems can arise in mere moments due to the impact of high-frequency trading.

"This study finds that bots significantly contribute to the spread of misinformation online -- as well as shows how quickly these messages can spread," said Filippo Menczer, a professor in the IU School of Informatics, Computing and Engineering, who led the study.

The analysis also revealed that bots amplify a message's volume and visibility until it's more likely to be shared broadly -- despite only representing a small fraction of the accounts that spread viral messages.

"People tend to put greater trust in messages that appear to originate from many people," said co-author Giovanni Luca Ciampaglia, an assistant research scientist with the IU Network Science Institute at the time of the study. "Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them."

Information sources labeled as low-credibility in the study were identified based on their appearance on lists, compiled by independent third-party organizations, of outlets that regularly share false or misleading information. These sources -- such as websites with misleading names like "USAToday.com.co" -- include outlets with both right- and left-leaning points of view.

The researchers also identified other tactics for spreading misinformation with Twitter bots. These included amplifying a single tweet, possibly posted by a human operator, through hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts.

For instance, the study cites a case in which a single account mentioned @realDonaldTrump in 19 separate messages about millions of illegal immigrants casting votes in the presidential election -- a false claim that was also a major administration talking point.
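A rough sketch of how such amplification patterns could be surfaced in tweet data follows: it simply counts how often each account repeats the same link or mentions the same handle. The record fields and the repetition threshold are assumptions for illustration, not the study's method.

```python
from collections import Counter

def flag_repetitive_accounts(tweets, min_repeats=10):
    """tweets: iterable of dicts with 'user', 'links' (list), 'mentions' (list)."""
    link_counts, mention_counts = Counter(), Counter()
    for t in tweets:
        for link in t["links"]:
            link_counts[(t["user"], link)] += 1        # same account, same link
        for handle in t["mentions"]:
            mention_counts[(t["user"], handle)] += 1   # same account, same target
    repeated_links = {k: v for k, v in link_counts.items() if v >= min_repeats}
    repeated_mentions = {k: v for k, v in mention_counts.items() if v >= min_repeats}
    return repeated_links, repeated_mentions
```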

The researchers also ran an experiment inside a simulated version of Twitter and found that the deletion of 10 percent of the accounts in the system -- based on their likelihood to be bots -- resulted in a major drop in the number of stories from low-credibility sources in the network.
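The toy re-creation below conveys the idea of that deletion experiment under stated assumptions: a random follower network, randomly assigned bot scores standing in for a real classifier, and a simple cascade model of resharing. It is a sketch of the concept, not the authors' simulation.

```python
import random
import networkx as nx

random.seed(42)
G = nx.erdos_renyi_graph(n=2000, p=0.005)              # stand-in follower network
bot_score = {node: random.random() for node in G}      # stand-in for a bot classifier

def simulate_spread(graph, seeds, share_prob=0.3):
    """Simple cascade: each newly exposed account reshares to its neighbors once."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor in graph.neighbors(node):
            if neighbor not in active and random.random() < share_prob:
                active.add(neighbor)
                frontier.append(neighbor)
    return len(active)

seeds = random.sample(list(G.nodes), 5)
baseline = simulate_spread(G, seeds)

# Remove the 10 percent of accounts most likely to be bots (keeping the seeds
# so the before/after comparison starts from the same accounts).
cutoff = sorted(bot_score.values())[int(0.9 * len(bot_score))]
pruned = G.copy()
pruned.remove_nodes_from(
    [n for n, s in bot_score.items() if s >= cutoff and n not in seeds]
)
after_removal = simulate_spread(pruned, seeds)

print(f"reach before removal: {baseline}, after removal: {after_removal}")
```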

"This experiment suggests that the elimination of bots from social networks would significantly reduce the amount of misinformation on these networks," Menczer said.

The study also suggests steps companies could take to slow misinformation spread on their networks. These include improving algorithms to automatically detect bots and requiring a "human in the loop" to reduce automated messages in the system. For example, users might be required to complete a CAPTCHA to send a message.
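Automated bot detection of the kind mentioned above is typically a supervised classification problem over account-level features. The minimal sketch below uses a handful of invented accounts and illustrative features (tweets per day, account age, follower/following ratio, retweet share); it is not the study's classifier, which relies on tools such as Botometer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [tweets/day, account age in days, follower/following ratio, share of retweets]
X_train = np.array([
    [850.0,   30.0, 0.02, 0.99],   # bot-like
    [  4.0, 2100.0, 1.30, 0.20],   # human-like
    [600.0,   12.0, 0.01, 0.97],
    [  2.0, 3600.0, 0.80, 0.35],
])
y_train = np.array([1, 0, 1, 0])   # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_account = np.array([[720.0, 20.0, 0.05, 0.95]])
print("estimated probability of being a bot:", clf.predict_proba(new_account)[0, 1])
```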

Although their analysis focused on Twitter, the study's authors added that other social networks are also vulnerable to manipulation. For example, platforms such as Snapchat and WhatsApp may struggle to control misinformation on their networks because their use of encryption and destructible messages complicates the ability to study how their users share information.

"As people across the globe increasingly turn to social networks as their primary source of news and information, the fight against misinformation requires a grounded assessment of the relative impact of the different ways in which it spreads," Menczer said. "This work confirms that bots play a role in the problem -- and suggests their reduction might improve the situation."

To explore election messages currently shared on Twitter, Menczer's research group has also recently launched a tool to measure "Bot Electioneering Volume." Created by IU Ph.D. students, the program displays the level of bot activity around specific election-related conversations, as well as the topics, user names and hashtags they're currently pushing.
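One plausible way to compute a metric in that spirit is to weight each tweet's contribution to a hashtag by the posting account's bot score, as in the sketch below. The field names and the aggregation are assumptions for illustration, not the tool's actual implementation.

```python
from collections import defaultdict

def bot_volume_by_hashtag(tweets):
    """tweets: iterable of dicts with 'hashtags' (list) and 'bot_score' (0 to 1)."""
    volume = defaultdict(float)
    for tweet in tweets:
        for tag in tweet["hashtags"]:
            volume[tag] += tweet["bot_score"]   # bot-weighted tweet count
    return dict(sorted(volume.items(), key=lambda kv: kv[1], reverse=True))

sample = [
    {"hashtags": ["#Election2018"], "bot_score": 0.9},
    {"hashtags": ["#Election2018", "#VoteNow"], "bot_score": 0.2},
    {"hashtags": ["#VoteNow"], "bot_score": 0.7},
]
print(bot_volume_by_hashtag(sample))
```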
-end-
Additional authors on the study are Alessandro Flammini, a professor in the IU School of Informatics, Computing and Engineering; Kai-Cheng "Kevin" Yang, an IU Ph.D. student; Chengcheng Shao of the National University of Defense Technology in China, who was a visiting professor at IU at the time of the study; and Onur Varol of Northeastern University, who was a Ph.D. student at IU at the time of the study. Ciampaglia is now an assistant professor at the University of South Florida.

This work was supported in part by the National Science Foundation, the James S. McDonnell Foundation and the Democracy Fund.

Indiana University
