UnGANable: Defending Against GAN-based Face Manipulation

Li, Zheng and Yu, Ning and Salem, Ahmed and Backes, Michael and Fritz, Mario and Zhang, Yang
(2023) UnGANable: Defending Against GAN-based Face Manipulation.
In: USENIX Security.
Conference: USENIX Security Symposium

Abstract

Deepfakes pose severe threats of visual misinformation to our society. One representative deepfake application is face manipulation, which modifies a victim's facial attributes in an image, e.g., changing her age or hair color. The state-of-the-art face manipulation techniques rely on Generative Adversarial Networks (GANs). In this paper, we propose the first defense system, namely UnGANable, against GAN-inversion-based face manipulation. Specifically, UnGANable focuses on defending against GAN inversion, an essential step in face manipulation. Its core technique is to search for alternative images (called cloaked images) around the original images (called target images) in image space. When posted online, these cloaked images can jeopardize the GAN inversion process. We consider two state-of-the-art inversion techniques, namely optimization-based inversion and hybrid inversion, and design five different defenses for five scenarios depending on the defender's background knowledge. Extensive experiments on four popular GAN models trained on two benchmark face datasets show that UnGANable achieves remarkable effectiveness and utility performance, and outperforms multiple baseline methods. We further investigate four adaptive adversaries that attempt to bypass UnGANable and show that some of them are slightly effective.
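The abstract's core idea, searching image space near a target image for a cloak that derails optimization-based GAN inversion, can be illustrated with a short sketch. The snippet below is not the paper's implementation: the ToyGenerator, the L-infinity budget eps, the PGD-style update, and the surrogate objective (pushing the cloaked image away from the reconstruction of the originally inverted latent) are all illustrative assumptions; a real setting would use a pretrained face GAN such as StyleGAN.

```python
# Minimal sketch of cloaking against optimization-based GAN inversion
# (illustrative assumptions only, not the UnGANable implementation).
import torch
import torch.nn as nn


class ToyGenerator(nn.Module):
    """Hypothetical stand-in for a pretrained GAN generator G: z -> image."""

    def __init__(self, z_dim=64, img_shape=(3, 32, 32)):
        super().__init__()
        self.img_shape = img_shape
        out_dim = img_shape[0] * img_shape[1] * img_shape[2]
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, *self.img_shape)


def invert(G, x, z_dim=64, steps=200, lr=0.05):
    """Optimization-based inversion: find z minimizing ||G(z) - x||^2."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach()


def cloak(G, x, eps=8 / 255, alpha=1 / 255, steps=50, z_dim=64):
    """Search image space around x (within an L-inf ball of radius eps) for a
    cloaked image that looks nearly identical but no longer matches what the
    generator reconstructs, degrading subsequent inversion."""
    z_star = invert(G, x, z_dim)          # latent an adversary would recover from x
    rec = G(z_star).detach()              # its reconstruction G(z*)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Surrogate objective: maximize distance between cloaked image and G(z*).
        loss = ((rec - (x + delta)) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # PGD-style ascent step
            delta.clamp_(-eps, eps)              # respect the perturbation budget
        delta.grad = None
    return (x + delta).clamp(-1, 1).detach()


if __name__ == "__main__":
    G = ToyGenerator()
    x = torch.rand(1, 3, 32, 32) * 2 - 1         # placeholder "face" image in [-1, 1]
    x_cloaked = cloak(G, x)
    # Inverting the cloaked image is expected to reconstruct x less faithfully.
    z_orig, z_clk = invert(G, x), invert(G, x_cloaked)
    err_orig = ((G(z_orig) - x) ** 2).mean().item()
    err_clk = ((G(z_clk) - x) ** 2).mean().item()
    print(f"inversion error, original: {err_orig:.4f}  cloaked: {err_clk:.4f}")
```

A more faithful surrogate would re-invert x + delta at every step, or target the encoder used in hybrid inversion, but the fixed-reconstruction objective keeps this sketch short and gradient-friendly.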
