(2019) ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models.
Full text: NDSS19-ML-Leaks.pdf (605kB)
Abstract
Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor driving current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack showed that extracting information about the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, early demonstrations of the feasibility of such attacks made many assumptions about the adversary, such as using multiple so-called shadow models, knowledge of the target model's structure, and access to a dataset from the same distribution as the target model's training data. We relax all these key assumptions, thereby showing that such attacks are very broadly applicable at low cost and thus pose a more severe risk than previously thought. We present the most comprehensive study so far of this emerging and developing threat, using eight diverse datasets that show the viability of the proposed attacks across domains. In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
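The core intuition behind membership inference is that overfitted models tend to assign higher confidence to samples they were trained on than to unseen samples. As a minimal sketch of this idea (not the paper's exact attack; the function name, threshold value, and synthetic posteriors below are illustrative assumptions), an adversary with only black-box access to a model's output posteriors can flag high-confidence predictions as likely training-set members:

```python
import numpy as np

def max_posterior_attack(posteriors, threshold=0.9):
    """Predict membership from black-box model outputs: a sample whose
    maximum posterior (prediction confidence) exceeds the threshold is
    flagged as a likely training-set member. Threshold is illustrative."""
    return posteriors.max(axis=1) >= threshold

# Synthetic posteriors (illustrative only): members tend to receive
# confident predictions due to overfitting; non-members are more uncertain.
members = np.array([[0.97, 0.02, 0.01],
                    [0.01, 0.95, 0.04]])
nonmembers = np.array([[0.55, 0.30, 0.15],
                       [0.40, 0.35, 0.25]])

print(max_posterior_attack(members))     # [ True  True]
print(max_posterior_attack(nonmembers))  # [False False]
```

An attack of this shape requires no shadow models and no data from the target's training distribution, which is why relaxing those assumptions matters: only query access to the model's confidence scores is needed.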
Item Type: | Conference or Workshop Item (A Paper) |
---|---|
Divisions: | Yang Zhang (YZ), Mario Fritz (MF), Michael Backes (InfSec) |
Conference: | NDSS: Network and Distributed System Security Symposium |
Depositing User: | Yang Zhang |
Date Deposited: | 11 Jan 2019 13:42 |
Last Modified: | 12 May 2021 09:40 |
Primary Research Area: | NRA1: Trustworthy Information Processing |
URI: | https://publications.cispa.saarland/id/eprint/2754 |