Deepfake detectors can be defeated, computer scientists show for the first time

February 08, 2021

Systems designed to detect deepfakes (videos that manipulate real-life footage via artificial intelligence) can be deceived, computer scientists showed for the first time at the WACV 2021 conference, which took place online Jan. 5 to 9, 2021.

The researchers showed that detectors can be defeated by inserting inputs called adversarial examples into every video frame. Adversarial examples are slightly manipulated inputs that cause artificial intelligence systems such as machine learning models to make mistakes. In addition, the team showed that the attack still works after videos are compressed.
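The core idea can be illustrated with the classic fast gradient sign method (FGSM), a generic adversarial-example technique; this is a minimal sketch with a toy linear detector and made-up weights, not the authors' withheld implementation:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """Fast Gradient Sign Method: nudge each pixel by epsilon in the
    direction indicated by the gradient, then keep values in range."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy linear 'detector': a positive score means the frame is fake.
w = np.array([0.5, -0.3, 0.8])   # hypothetical detector weights
x = np.array([0.6, 0.4, 0.7])    # pixel features of a 'fake' frame
grad = -w                        # direction that lowers the fake score
x_adv = fgsm_perturb(x, grad)
```

The perturbed frame `x_adv` is nearly identical to `x` (each value shifted by at most 0.1) yet scores lower on the detector's fakeness measure.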

"Our work shows that attacks on deepfake detectors could be a real-world threat," said Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student and first co-author on the WACV paper. "More alarmingly, we demonstrate that it's possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner workings of the machine learning model used by the detector."

In deepfakes, a subject's face is modified to create convincingly realistic footage of events that never actually happened. Accordingly, state-of-the-art deepfake detectors rely on machine learning models and focus on the face in videos: they first track it and then pass the cropped face data to a neural network that determines whether it is real or fake. For example, eye blinking is not reproduced well in deepfakes, so detectors focus on eye movements as one way to make that determination.
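A typical face-crop detection pipeline of this kind can be sketched as follows; the crop coordinates and the placeholder scoring function are hypothetical stand-ins for a real face tracker and a trained neural classifier:

```python
import numpy as np

def crop_face(frame, box):
    """Crop the tracked face region from a frame (box = y0, y1, x0, x1)."""
    y0, y1, x0, x1 = box
    return frame[y0:y1, x0:x1]

def classify_face(face, threshold=0.5):
    """Stand-in for the neural classifier. A real detector would run a
    CNN on the cropped face; here a fixed score function is used."""
    score = float(face.mean())  # placeholder 'fakeness' score
    return "fake" if score > threshold else "real"

def detect_deepfake(frames, boxes):
    """Typical pipeline: crop the tracked face, classify each crop,
    then aggregate per-frame verdicts over the video."""
    verdicts = [classify_face(crop_face(f, b)) for f, b in zip(frames, boxes)]
    return max(set(verdicts), key=verdicts.count)  # majority vote
```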

The extensive spread of fake videos through social media platforms has raised significant concerns worldwide, particularly hampering the credibility of digital media, the researchers point out. "If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it," said Paarth Neekhara, the paper's other first co-author and a UC San Diego computer science student.

The researchers created an adversarial example for every face appearing in a video frame. While standard operations such as compressing and resizing a video usually strip adversarial perturbations from an image, these examples are built to withstand those processes. The attack algorithm achieves this by estimating, over a set of input transformations, how the model ranks images as real or fake; it then uses this estimate to craft perturbations that remain effective even after compression and decompression.
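This averaging-over-transformations idea (known generically as expectation over transformations) can be sketched as follows. Everything here is illustrative: the detector is a toy sigmoid scorer, rounding stands in for compression, and a straight-through gradient is used for the non-differentiable rounding step. The authors' actual code was not released.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, levels=16):
    """Toy stand-in for lossy compression: round pixels to a coarse grid."""
    return np.round(np.clip(x, 0, 1) * (levels - 1)) / (levels - 1)

def score(x, w):
    """Toy detector: probability that the frame is fake."""
    return 1.0 / (1.0 + np.exp(-float(w @ x)))

def eot_attack(x, w, epsilon=0.01, steps=20, n_samples=16):
    """Average the gradient over randomly jittered, quantized copies of
    the frame, so the perturbation keeps working after compression."""
    x_adv = x.copy()
    for _ in range(steps):
        grads = []
        for _ in range(n_samples):
            t = quantize(x_adv + rng.normal(0, 0.02, x.shape))
            s = score(t, w)
            grads.append(s * (1 - s) * w)  # d(score)/d(input) at the transform
        g = np.mean(grads, axis=0)
        x_adv = np.clip(x_adv - epsilon * np.sign(g), 0, 1)  # lower fake score
    return x_adv

x = np.array([0.6, 0.4, 0.7])   # pixel features of a 'fake' face crop
w = np.array([0.5, -0.3, 0.8])  # hypothetical detector weights
x_adv = eot_attack(x, w)
```

Because each gradient step is averaged over randomly transformed copies, the resulting perturbation lowers the fake score even when the frame is quantized afterward.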

The modified face is then inserted back into the video frame, and the process is repeated for every frame to create a deepfake video that evades detection. The attack can also be applied to detectors that operate on entire video frames rather than just face crops.

The team declined to release their code so it wouldn't be used by hostile parties.

High success rate

Researchers tested their attacks in two scenarios: one where attackers have complete access to the detector model, including the face-extraction pipeline and the architecture and parameters of the classification model; and one where attackers can only query the machine learning model to learn the probability of a frame being classified as real or fake. In the first scenario, the attack's success rate was above 99 percent for uncompressed videos and 84.96 percent for compressed videos. In the second scenario, the success rate was 86.43 percent for uncompressed videos and 78.33 percent for compressed videos. This is the first work to demonstrate successful attacks on state-of-the-art deepfake detectors.
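The second, query-only scenario corresponds to black-box attacks in the research literature, where the gradient is estimated purely from the probabilities the detector returns. A minimal sketch of one standard approach (a zeroth-order, NES-style estimator, against a toy detector with made-up weights, not the paper's system):

```python
import numpy as np

rng = np.random.default_rng(1)

def detect(x, w=np.array([0.5, -0.3, 0.8])):
    """Black-box detector the attacker can only query: returns the
    probability that a frame is fake (toy stand-in for the real model)."""
    return 1.0 / (1.0 + np.exp(-float(w @ x)))

def nes_gradient(x, sigma=0.05, n=100):
    """Zeroth-order gradient estimate built purely from score queries,
    matching the scenario where the model's internals are hidden."""
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.normal(size=x.shape)
        g += (detect(x + sigma * u) - detect(x - sigma * u)) * u
    return g / (2 * sigma * n)

x = np.array([0.6, 0.4, 0.7])  # features of a frame flagged as fake
x_adv = x.copy()
for _ in range(15):
    x_adv = np.clip(x_adv - 0.02 * np.sign(nes_gradient(x_adv)), 0, 1)
```

Even though the attacker never sees the model's weights, repeated querying drives the detector's fake probability down.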

"To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses," the researchers write. "We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector."

To improve detectors, the researchers recommend an approach similar to what is known as adversarial training: during training, an adaptive adversary continues to generate new deepfakes that can bypass the current state-of-the-art detector, and the detector continues improving in order to detect the new deepfakes.
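The adversarial-training loop described above can be sketched generically: at each step the adversary perturbs the training frames in the direction that most fools the current detector (an FGSM step), and the detector then updates on those hardened examples. This is a minimal logistic-regression toy on synthetic data, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_training(X, y, epochs=200, lr=0.5, epsilon=0.1):
    """Alternate the two roles: adversary perturbs inputs to raise the
    loss, detector updates its weights on the perturbed inputs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_x = (p - y)[:, None] * w             # d(loss)/d(input)
        X_adv = X + epsilon * np.sign(grad_x)     # adversary's move
        p_adv = sigmoid(X_adv @ w)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)  # detector's move
    return w

# Synthetic data: 'fake' frames shifted along two feature directions.
n = 200
X_real = rng.normal(0.0, 1.0, (n, 3)) - np.array([0.75, 0.0, 0.5])
X_fake = rng.normal(0.0, 1.0, (n, 3)) + np.array([0.75, 0.0, 0.5])
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(n), np.ones(n)])
w = adversarial_training(X, y)
```

The detector trained this way has seen worst-case perturbed examples at every step, which is the property the researchers argue is essential before deployment.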
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples

Shehzeen Hussain, Malhar Jere, Farinaz Koushanfar (Department of Electrical and Computer Engineering, UC San Diego); Paarth Neekhara, Julian McAuley (Department of Computer Science and Engineering, UC San Diego)

University of California - San Diego
