Differential Privacy Defenses and Sampling Attacks for Membership Inference

Rahimian, Shadi and Orekondy, Tribhuvanesh and Fritz, Mario
(2021) Differential Privacy Defenses and Sampling Attacks for Membership Inference.
In: 14th ACM Workshop on Artificial Intelligence and Security, co-located with the 28th ACM Conference on Computer and Communications Security.

Official URL: https://doi.org/10.1145/3474369.3486876

Abstract

Machine learning models are commonly trained on sensitive and personal data such as pictures, medical records, and financial records. A serious breach of the privacy of this training set occurs when an adversary can determine whether or not a specific data point in their possession was used to train the model. While all previous membership inference attacks rely on access to the posterior probabilities, we present the first attack which relies only on the predicted class label, yet achieves a high success rate.
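The attack described in the abstract operates in this label-only setting. Below is a minimal, hypothetical sketch of how such a sampling-based attack could score membership: query the target model on perturbed copies of a candidate point and measure how stably the predicted label is retained. The predict_label interface, noise scale, and decision threshold are illustrative assumptions, not the authors' exact procedure.

    # Hypothetical label-only membership inference score via input sampling.
    # Queries the target model on noisy copies of a candidate point and
    # measures how often the predicted label stays the same.
    import numpy as np

    def label_stability_score(predict_label, x, n_samples=100, noise_scale=0.05, rng=None):
        """Fraction of perturbed copies of x that keep the original predicted label.

        predict_label: callable mapping a batch of inputs to predicted class
                       labels (no posterior probabilities required).
        x: a single input as a NumPy array.
        """
        rng = np.random.default_rng(rng)
        base_label = np.asarray(predict_label(x[None, ...]))[0]
        # Draw n_samples Gaussian-perturbed copies of x (noise_scale is an assumption).
        noisy = x[None, ...] + rng.normal(0.0, noise_scale, size=(n_samples, *x.shape))
        labels = np.asarray(predict_label(noisy))
        return float(np.mean(labels == base_label))

    def infer_membership(predict_label, x, threshold=0.9, **kwargs):
        # Heuristic decision: points whose labels are highly stable under small
        # perturbations are flagged as likely training-set members.
        return label_stability_score(predict_label, x, **kwargs) >= threshold

The intuition, under these assumptions, is that training points tend to lie farther from the model's decision boundary, so their predicted labels change less often under small perturbations; the stability score thus acts as a confidence proxy when posterior probabilities are unavailable.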
