How to survive the explosion of AI slop

07.29.25 | PNAS Nexus

In a Perspective, Hany Farid highlights the risks posed by manipulated and fraudulent images and videos, known as deepfakes, and explores interventions that could mitigate the harms they cause. Farid explains that visually discriminating the real from the fake has become increasingly difficult, and summarizes his research on digital forensic techniques used to determine whether images and videos have been manipulated.

Farid celebrates the positive uses of generative AI, including helping researchers, democratizing content creation, and, in some cases, literally giving voice to those whose voice has been silenced by disability. But he warns against harmful uses of the technology, including non-consensual intimate imagery, child sexual abuse imagery, fraud, and disinformation. The very existence of deepfake technology also means that malicious actors can cast doubt on legitimate images simply by claiming they were made with AI.

So, what is to be done? Farid outlines a range of interventions to mitigate such harms, including legal requirements to mark AI-generated content with metadata and imperceptible watermarks, limits on the prompts that services should allow, and systems that link user identities to the content they create. In addition, social media content moderators should ban harmful images and videos, and Farid calls for digital media literacy to become part of the standard educational curriculum. He also summarizes the authentication techniques that experts can use to sort the real from the synthetic, and explores the policy landscape around harmful content.

Finally, Farid asks researchers to stop and question whether their research output could be misused and, if so, whether to take steps to prevent misuse or even abandon the project altogether. Just because something can be created does not mean it must be created.
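The Perspective does not spell out a specific marking scheme, but the idea behind an imperceptible watermark can be illustrated with a minimal sketch: hide a short provenance payload in the least-significant bits of an image's pixels, where the change is invisible to the eye. The embed_watermark and extract_watermark functions below are hypothetical names written against NumPy, not Farid's method; deployed watermarks for AI-generated content are engineered to survive compression, cropping, and re-encoding, which this naive scheme does not.

    import numpy as np

    def embed_watermark(image: np.ndarray, payload: bytes) -> np.ndarray:
        """Hide payload bits in the least-significant bit of each pixel (toy scheme)."""
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = image.flatten()  # flatten() returns a copy, so the input is untouched
        if bits.size > flat.size:
            raise ValueError("payload too large for this image")
        # Clear each target pixel's lowest bit, then write one payload bit into it.
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(image.shape)

    def extract_watermark(image: np.ndarray, n_bytes: int) -> bytes:
        """Read n_bytes of payload back out of the least-significant bits."""
        bits = image.flatten()[:n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    # Mark a synthetic 64x64 grayscale image and verify the mark can be read back.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    marked = embed_watermark(img, b"AI-generated")
    assert extract_watermark(marked, len(b"AI-generated")) == b"AI-generated"
    # No pixel moved by more than 1 gray level, hence "imperceptible".
    assert int(np.abs(marked.astype(int) - img.astype(int)).max()) <= 1

Because every pixel changes by at most one gray level, the mark is invisible; for the same reason it is trivially destroyed by re-saving the image as JPEG, which is why robust schemes instead spread the payload redundantly across frequency coefficients.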

PNAS Nexus

Mitigating the harms of manipulated media: Confronting deepfakes and digital deception

29-Jul-2025

H.F. is the co-founder and Chief Science Officer of GetReal Labs, an advisor to the Content Authenticity Initiative, a member of the Board of Directors of the nonprofit Cyber Civil Rights Initiative, and a LinkedIn Scholar.


Contact Information

Emmanuelle Saliba
GetReal Security
emmanuelle@getreallabs.com

How to Cite This Article

APA:
PNAS Nexus. (2025, July 29). How to survive the explosion of AI slop. Brightsurf News. https://www.brightsurf.com/news/19NKN691/how-to-survive-the-explosion-of-ai-slop.html
MLA:
"How to survive the explosion of AI slop." Brightsurf News, Jul. 29 2025, https://www.brightsurf.com/news/19NKN691/how-to-survive-the-explosion-of-ai-slop.html.