(2023) Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network.
Abstract
Model inversion (MI) attacks, which can reconstruct training data from public models, have raised increasing privacy concerns. Indeed, MI attacks can be formalized as an optimization problem that searches for private data in a certain space. Recent MI attacks leverage a generative adversarial network (GAN) as an image prior to narrow the search space, and can successfully reconstruct even high-dimensional data (e.g., face images). However, these GAN-based MI attacks do not fully exploit the target model's potential capabilities, still leading to a vague and coupled search space, i.e., different classes of images are coupled in the search space. Besides, the widely used cross-entropy loss in these attacks suffers from gradient vanishing. To address these problems, we propose the Pseudo Label-Guided MI (PLG-MI) attack via a conditional GAN (cGAN). First, a top-\emph{n} selection strategy is proposed to provide pseudo-labels for public data, and these pseudo-labels are used to guide the training of the cGAN. In this way, the search space is decoupled for different classes of images. Then, a max-margin loss is introduced to improve the search process on the subspace of a targeted class. Extensive experiments demonstrate that our PLG-MI attack significantly improves the attack success rate and visual quality for various datasets and models; notably, it is $2\sim3 \times$ better than state-of-the-art attacks under strong distributional shifts.
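To make the abstract's two key ideas concrete, here is a minimal PyTorch sketch of a top-\emph{n} pseudo-labeling step and a max-margin identity loss. This is an illustration inferred from the abstract, not the authors' released code: the function names (`top_n_pseudo_label`, `max_margin_loss`) and the exact form of the margin loss are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def top_n_pseudo_label(target_model, public_images, target_class, n):
    # Score every public image with the target classifier and keep the
    # n images it is most confident belong to `target_class`; those
    # images receive `target_class` as their pseudo-label.
    logits = target_model(public_images)              # (N, num_classes)
    conf = F.softmax(logits, dim=1)[:, target_class]  # per-image confidence
    top_idx = conf.topk(n).indices
    return public_images[top_idx]

def max_margin_loss(logits, target_class):
    # Margin-style identity loss (assumed form): raise the target-class
    # logit above the largest non-target logit. Unlike cross-entropy on
    # saturated softmax outputs, this keeps a usable gradient.
    target_logit = logits[:, target_class]
    masked = logits.clone()
    masked[:, target_class] = float("-inf")  # exclude the target class
    runner_up = masked.max(dim=1).values
    return (runner_up - target_logit).mean()
```

In this reading, the pseudo-labeled subsets would drive the conditional GAN's training so that sampling with class label $c$ stays in a subspace for class $c$, and the max-margin loss would replace cross-entropy during the latent-space search.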
Item Type: | Conference or Workshop Item (Paper) |
---|---|
Divisions: | Yang Zhang (YZ) |
Conference: | AAAI Conference on Artificial Intelligence
Depositing User: | Yang Zhang |
Date Deposited: | 20 Nov 2022 22:22 |
Last Modified: | 20 Nov 2022 22:22 |
Primary Research Area: | NRA1: Trustworthy Information Processing |
URI: | https://publications.cispa.saarland/id/eprint/3879 |