(2020) GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models.
Text: 1909.03935.pdf (7MB)
Abstract
Deep learning has achieved overwhelming success, spanning from discriminative models to generative models. In particular, deep generative models have facilitated a new level of performance in a myriad of areas, ranging from media manipulation to sanitized dataset generation. Despite this great success, the potential risks of privacy breaches caused by generative models have not been analyzed systematically. In this paper, we focus on membership inference attacks against deep generative models, which reveal information about the training data used for victim models. Specifically, we present the first taxonomy of membership inference attacks, encompassing not only existing attacks but also our novel ones. In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models. Moreover, we provide a theoretically grounded attack calibration technique, which consistently boosts the attack performance across different attack settings, data modalities, and training configurations. We complement the systematic analysis of attack performance with a comprehensive experimental study that investigates the effectiveness of the various attacks w.r.t. model type and training configuration over three diverse application scenarios (i.e., images, medical data, and location data).
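To make the core idea concrete: in the full black-box setting described in the abstract, an attacker scores a query point by how well the victim generator can reconstruct it, and the calibration technique subtracts the same score computed under a reference generator trained on disjoint data, so that intrinsically easy-to-reconstruct samples do not dominate the decision. The Python sketch below illustrates this with a nearest-neighbor distance over generated samples; the function names, sample counts, and the zero decision threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reconstruction_score(x, generated_samples):
    """Membership score for query x: negative distance to the nearest
    sample drawn from a generator. A smaller reconstruction distance
    (i.e., a higher score) suggests x is more likely a training member."""
    dists = np.linalg.norm(generated_samples - x, axis=1)
    return -np.min(dists)

def calibrated_score(x, victim_samples, reference_samples):
    """Calibrated score: victim score minus the score under a reference
    generator trained on disjoint data from the same distribution."""
    return (reconstruction_score(x, victim_samples)
            - reconstruction_score(x, reference_samples))

# Hypothetical usage: stand-in Gaussian samples in place of G(z) draws;
# in practice these would be flattened generator outputs.
rng = np.random.default_rng(0)
victim_samples = rng.normal(size=(1000, 64))     # stand-in for G_victim(z)
reference_samples = rng.normal(size=(1000, 64))  # stand-in for G_ref(z)
query = rng.normal(size=64)
is_member = calibrated_score(query, victim_samples, reference_samples) > 0.0
```

In a real attack, the sample sets would be draws from the victim and reference generators, and the threshold would be chosen on held-out data rather than fixed at zero.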
Item Type: | Conference or Workshop Item (Paper)
---|---
Divisions: | Mario Fritz (MF)
Conference: | CCS: ACM Conference on Computer and Communications Security
Depositing User: | Mario Fritz
Date Deposited: | 25 May 2020 13:43
Last Modified: | 12 May 2021 17:05
Primary Research Area: | NRA1: Trustworthy Information Processing
URI: | https://publications.cispa.saarland/id/eprint/3089