(2022) Membership Inference Attacks by Exploiting Loss Trajectory.
Abstract
Machine learning models are vulnerable to membership inference attacks, in which an adversary aims to predict whether or not a particular sample was contained in the target model's training dataset. Existing attack methods have commonly exploited the output information (mostly, losses) solely from the given target model. As a result, in practical scenarios where both member and non-member samples yield similarly small losses, these methods are naturally unable to differentiate between them. To address this limitation, in this paper, we propose a new attack method, called TrajectoryMIA, which exploits membership information from the whole training process of the target model to improve attack performance. To mount the attack in the common black-box setting, we leverage knowledge distillation and represent the membership information by the losses evaluated on a sequence of intermediate models at different distillation epochs, namely the distilled loss trajectory, together with the loss from the given target model. Experimental results over different datasets and model architectures demonstrate the clear advantage of our attack across different metrics. For example, on CINIC-10, our attack achieves at least a 6× higher true-positive rate at a low false-positive rate of 0.1% than existing methods. Further analysis demonstrates the general effectiveness of our attack in stricter scenarios.
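To make the feature construction concrete, here is a minimal PyTorch sketch of how a distilled loss trajectory could be assembled and fed to a binary attack classifier. All names here (loss_trajectory, attack_features, AttackMLP, the snapshot list, and the MLP architecture) are illustrative assumptions, not the paper's released code.

```python
# Sketch: distilled-loss-trajectory features for membership inference.
# Assumes `distilled_snapshots` is a list of student models saved at
# different distillation epochs, and `target_model` is the black-box
# target whose training-set membership we want to infer.
import torch
import torch.nn as nn
import torch.nn.functional as F


@torch.no_grad()
def loss_trajectory(models, x, y):
    """Per-sample cross-entropy loss of each distillation-epoch snapshot,
    stacked into one trajectory vector per sample."""
    losses = [F.cross_entropy(m(x), y, reduction="none") for m in models]
    return torch.stack(losses, dim=1)  # shape: (batch, num_snapshots)


@torch.no_grad()
def attack_features(target_model, distilled_snapshots, x, y):
    """Distilled loss trajectory concatenated with the target model's loss."""
    traj = loss_trajectory(distilled_snapshots, x, y)
    target_loss = F.cross_entropy(target_model(x), y, reduction="none")
    return torch.cat([traj, target_loss.unsqueeze(1)], dim=1)


class AttackMLP(nn.Module):
    """Binary attack classifier over trajectory features
    (label 1 = member, 0 = non-member)."""

    def __init__(self, num_snapshots):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_snapshots + 1, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, feats):
        return self.net(feats)
```

In the black-box setting the abstract describes, the snapshots would come from distilling the target model on an auxiliary dataset, and the attack classifier would be trained on member/non-member features gathered from shadow models before being applied to the target.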
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Divisions: | Yang Zhang (YZ) |
| Conference: | CCS (ACM Conference on Computer and Communications Security) |
| Depositing User: | Yang Zhang |
| Date Deposited: | 12 Oct 2022 15:40 |
| Last Modified: | 15 Oct 2022 15:44 |
| Primary Research Area: | NRA1: Trustworthy Information Processing |
| URI: | https://publications.cispa.saarland/id/eprint/3797 |