MEDIA CONTACT:
Cate Douglass; cdouglass@gwu.edu
More than 50 countries are set to hold national elections this year, and analysts have long sounded the alarm about bad actors using artificial intelligence (AI) to disseminate and amplify disinformation during election seasons around the globe.
Now, a new study led by researchers at the George Washington University predicts that daily bad-actor AI activity will escalate by mid-2024, increasing the threat that it could affect election results. The research, published today in the journal PNAS Nexus, is the first quantitative scientific analysis to predict how bad actors will misuse AI globally.
“Everybody is talking about the dangers of AI, but until our study there was no science of this threat,” Neil Johnson, lead study author and a professor of physics at GW, says. “You cannot win a battle without a deep understanding of the battlefield.”
The researchers say the study answers what, where, and when bad actors will use AI globally, and how that activity can be controlled. Among their findings:
The paper, “Controlling bad-actor-AI activity at scale across online battlefields,” was published in the journal PNAS Nexus. The research was funded by the U.S. Air Force Office of Scientific Research and the Templeton Foundation. If you would like to speak with Prof. Johnson, please contact GW Senior Media Relations Specialist Cate Douglass at cdouglass@gwu.edu.
-GW-
Journal: PNAS Nexus
Article: Controlling bad-actor-artificial intelligence activity at scale across online battlefields
Publication date: 23-Jan-2024