(2022) Amplifying Membership Exposure via Data Poisoning.
Abstract
As in-the-wild data are increasingly involved in the training stage, machine learning applications become more susceptible to data poisoning attacks. Such attacks typically lead to test-time accuracy degradation or controlled misprediction. In this paper, we investigate a third type of exploitation of data poisoning: increasing the risk of privacy leakage for benign training samples. To this end, we demonstrate a set of data poisoning attacks that amplify the membership exposure of the targeted class. We first propose a generic dirty-label attack for supervised classification algorithms. We then propose an optimization-based clean-label attack for the transfer-learning scenario, whereby the poisoning samples are correctly labeled and look “natural” enough to evade human moderation. We extensively evaluate our attacks on computer vision benchmarks. Our results show that the proposed attacks can substantially increase membership inference precision with minimal degradation of overall test-time model performance. To mitigate the potential negative impacts of our attacks, we also investigate feasible countermeasures.
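For intuition only (this sketch is not part of the paper, and its function name, shapes, and sampling choices are illustrative assumptions rather than the authors' method), a dirty-label poisoning step might pair samples that resemble the targeted class with deliberately wrong labels, nudging the victim model to behave more distinctively on the true members of that class:

```python
import numpy as np

rng = np.random.default_rng(0)

def dirty_label_poison(x_target, target_class, num_classes, n_poison):
    """Hypothetical sketch: copy target-class-like samples and assign
    each a uniformly random *wrong* label to build a poison set."""
    idx = rng.choice(len(x_target), size=n_poison, replace=True)
    x_poison = x_target[idx]
    # Draw labels from the other num_classes - 1 classes, skipping target_class.
    wrong = rng.integers(0, num_classes - 1, size=n_poison)
    y_poison = np.where(wrong >= target_class, wrong + 1, wrong)
    return x_poison, y_poison

# Toy usage: 100 stand-in "target class" images, 10 classes, 20 poison points.
x_target = rng.normal(size=(100, 32, 32, 3)).astype(np.float32)
x_p, y_p = dirty_label_poison(x_target, target_class=3, num_classes=10, n_poison=20)
assert not np.any(y_p == 3)  # no poison point carries the true target label
```

The clean-label variant mentioned in the abstract would instead keep correct labels and optimize the poison samples themselves, which cannot be sketched faithfully without the paper's objective function.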
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Divisions: | Yang Zhang (YZ) |
| Conference: | NeurIPS (Conference on Neural Information Processing Systems) |
| Depositing User: | Yang Zhang |
| Date Deposited: | 20 Nov 2022 22:10 |
| Last Modified: | 20 Nov 2022 22:10 |
| Primary Research Area: | NRA1: Trustworthy Information Processing |
| URI: | https://publications.cispa.saarland/id/eprint/3876 |