GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models

Chen, Dingfan and Yu, Ning and Zhang, Yang and Fritz, Mario
(2020) GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models.
In: ACM Conference on Computer and Communications Security (CCS).

Official URL: https://doi.org/10.1145/3372297.3417238

Abstract

Deep learning has achieved overwhelming success, spanning from discriminative models to generative models. In particular, deep generative models have facilitated a new level of performance in a myriad of areas, ranging from media manipulation to sanitized dataset generation. Despite this success, the potential privacy risks posed by generative models have not been analyzed systematically. In this paper, we focus on membership inference attacks against deep generative models, which reveal information about the training data used for the victim models. Specifically, we present the first taxonomy of membership inference attacks, encompassing not only existing attacks but also our novel ones. In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models. Moreover, we provide a theoretically grounded attack calibration technique, which consistently boosts attack performance across different attack settings, data modalities, and training configurations. We complement the systematic analysis of attack performance with a comprehensive experimental study that investigates the effectiveness of the various attacks w.r.t. model type and training configuration, over three diverse application scenarios (i.e., images, medical data, and location data).
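To make the abstract's core idea concrete, a common form of membership inference against a generative model scores a query point by how well the model can reconstruct it (e.g., the distance to its nearest generated sample), and the calibration technique mentioned above subtracts the analogous score from a reference model so that intrinsically easy-to-reconstruct points are not mistaken for training members. The sketch below is illustrative only; the function names, the squared-Euclidean distance, and the nearest-neighbour approximation are assumptions for exposition, not the paper's actual code.

```python
import numpy as np

def reconstruction_error(generated, x):
    # Approximate the model's reconstruction error for query x by the
    # squared Euclidean distance to its nearest generated sample.
    # generated: (n, d) array of samples drawn from the generative model.
    return float(np.min(np.sum((generated - x) ** 2, axis=1)))

def calibrated_membership_score(victim_samples, reference_samples, x):
    # Attack calibration (illustrative): subtract the reconstruction error
    # under a reference model trained on disjoint data, so that points that
    # are easy for *any* model to reconstruct do not look like members.
    # Higher score => x is more likely a training member of the victim model.
    return reconstruction_error(reference_samples, x) - \
           reconstruction_error(victim_samples, x)
```

In practice the attacker would threshold this score (or rank queries by it) to decide membership; a point the victim model reconstructs much better than the reference model does is flagged as a likely training member.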
