(2021) Differential Privacy Defenses and Sampling Attacks for Membership Inference.
In: 14th ACM Workshop on Artificial Intelligence and Security, co-located with the 28th ACM Conference on Computer and Communications Security.
Official URL: https://doi.org/10.1145/3474369.3486876
Abstract
Machine learning models are commonly trained on sensitive and personal data such as pictures, medical records, and financial records. A serious breach of the privacy of this training set occurs when an adversary is able to decide whether or not a specific data point in her possession was used to train a model. While all previous membership inference attacks rely on access to the posterior probabilities, we present the first attack which relies only on the predicted class label, yet achieves a high success rate.
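To make the label-only threat model concrete, the following is a minimal illustrative sketch, not the paper's specific sampling attack: a naive baseline adversary who guesses "member" whenever the model's predicted label matches the true label, exploiting the fact that overfit models are more accurate on their training set. All names and the toy data below are hypothetical.

```python
# Illustrative label-only membership inference baseline (an assumption for
# exposition, not the attack presented in the paper). The adversary sees only
# predicted class labels, never posterior probabilities.

def label_only_membership_guess(predicted_labels, true_labels):
    """Guess membership: 1 ("member") if the prediction is correct, else 0."""
    return [int(p == t) for p, t in zip(predicted_labels, true_labels)]

# Toy example: an overfit model tends to be correct on training members
# and wrong on non-members.
preds = [1, 0, 2, 2]   # model's predicted class labels
truth = [1, 0, 1, 2]   # ground-truth labels held by the adversary
print(label_only_membership_guess(preds, truth))  # [1, 1, 0, 1]
```

This baseline only works when the model's train/test accuracy gap is large; the point of more refined label-only attacks is to extract a stronger membership signal from hard-label access alone.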
Item Type: | Conference or Workshop Item (Paper) |
---|---|
Divisions: | Mario Fritz (MF) |
Conference: | AISec ACM Workshop on Artificial Intelligence and Security |
Depositing User: | Tobias Lorenz |
Date Deposited: | 07 Dec 2021 11:17 |
Last Modified: | 08 Dec 2021 09:05 |
Primary Research Area: | NRA1: Trustworthy Information Processing |
URI: | https://publications.cispa.saarland/id/eprint/3524 |