Inference Attacks Against Graph Neural Networks

Zhang, Zhikun and Chen, Min and Backes, Michael and Shen, Yun and Zhang, Yang
(2022) Inference Attacks Against Graph Neural Networks.
In: 31st USENIX Security Symposium, August 10-12, 2022, Boston, MA, USA.
Conference: USENIX Security Symposium

Official URL: https://www.usenix.org/system/files/sec22summer_zh...

Abstract

Graphs are an important data representation that exists ubiquitously in the real world. However, analyzing graph data is computationally difficult due to its non-Euclidean nature. Graph embedding is a powerful tool for solving the graph analytics problem by transforming graph data into low-dimensional vectors. These vectors can also be shared with third parties to gain additional insights into what is behind the data. While sharing graph embeddings is intriguing, the associated privacy risks are unexplored. In this paper, we systematically investigate the information leakage of graph embeddings by mounting three inference attacks. First, we can successfully infer basic graph properties of the target graph, such as the number of nodes, the number of edges, and graph density, with up to 0.89 accuracy. Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph. For instance, we achieve 0.98 attack AUC on the DD dataset. Third, we propose a novel graph reconstruction attack that can reconstruct a graph whose structural statistics are similar to those of the target graph. We further propose an effective defense mechanism based on graph embedding perturbation that mitigates the inference attacks without noticeable performance degradation for graph classification tasks.
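To make the first attack concrete, below is a minimal, illustrative sketch of a property inference attack against shared graph embeddings: an adversary with auxiliary graphs trains a small classifier that maps an embedding to a (bucketed) graph property such as the number of nodes. This is not the authors' implementation; the attack model architecture, the synthetic stand-in data, and all names (e.g. AttackMLP, EMB_DIM) are assumptions made for illustration.

```python
# Illustrative sketch of a property inference attack on graph embeddings.
# NOT the paper's implementation: architecture, data, and names are assumed.
import torch
import torch.nn as nn

EMB_DIM = 64      # assumed dimensionality of the shared graph embeddings
N_GRAPHS = 1000   # number of auxiliary graphs the adversary controls
N_CLASSES = 5     # coarse property buckets (e.g. ranges of node counts)

# Stand-ins for embeddings produced by the target GNN on the adversary's
# auxiliary graphs, and for the ground-truth property buckets of those graphs.
embeddings = torch.randn(N_GRAPHS, EMB_DIM)
labels = torch.randint(0, N_CLASSES, (N_GRAPHS,))

class AttackMLP(nn.Module):
    """Simple MLP mapping a graph embedding to a property bucket."""
    def __init__(self, emb_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )
    def forward(self, x):
        return self.net(x)

model = AttackMLP(EMB_DIM, N_CLASSES)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train the attack model on (embedding, property) pairs from auxiliary graphs.
for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(embeddings), labels)
    loss.backward()
    opt.step()

# At attack time, feed the victim's shared embedding into the trained attack
# model to infer the (bucketed) property of the target graph.
victim_embedding = torch.randn(1, EMB_DIM)  # placeholder for a real embedding
print("Inferred property bucket:", model(victim_embedding).argmax(dim=1).item())
```

The defense described in the abstract perturbs the graph embedding before it is shared. As one possible instantiation (an assumption for illustration, not necessarily the paper's exact mechanism), additive Laplace noise trades some downstream utility for reduced leakage:

```python
# Sketch of embedding perturbation as a defense: add noise before sharing.
# The Laplace mechanism and the noise scale beta are assumptions.
beta = 0.1  # larger beta means stronger perturbation, lower utility
noise = torch.distributions.Laplace(0.0, beta).sample(victim_embedding.shape)
noisy_embedding = victim_embedding + noise
```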
