(2022) Are Defenses for Graph Neural Networks Robust?
are_defenses_for_graph_neural_networks_robust.pdf - Published Version (2MB)
Abstract
A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw – virtually all of the defenses are evaluated against non-adaptive attacks leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering – most defenses show no or only marginal improvement compared to an undefended baseline. We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
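The abstract's central recommendation is to evaluate defenses with custom adaptive attacks, i.e., attacks optimized against the defended model itself rather than transferred from an undefended surrogate. As a purely illustrative sketch (not the paper's method), the snippet below shows one common form such an attack can take: a gradient-based search over edge flips using a continuous relaxation of the adjacency matrix. The model signature `model(adj, features)`, the dense adjacency assumption, and all names and hyperparameters are assumptions introduced for this example.

```python
# Illustrative sketch only: `model`, its signature model(adj, features), the
# dense adjacency assumption, and all hyperparameters are assumptions made for
# this example; this is NOT the specific attack implementation from the paper.
import torch
import torch.nn.functional as F

def adaptive_edge_attack(model, adj, features, labels, budget, steps=100, lr=0.1):
    """Gradient-based search over edge flips, run against the defended model."""
    # Continuous relaxation: sigmoid(p)[i, j] is the tendency to flip edge (i, j).
    # Start near zero flips so optimization begins from the clean graph.
    p = torch.full_like(adj, -5.0, requires_grad=True)
    optimizer = torch.optim.Adam([p], lr=lr)
    for _ in range(steps):
        flip = torch.sigmoid(p)
        # Flipping an existing edge removes it; flipping a non-edge adds it.
        perturbed = adj + (1.0 - 2.0 * adj) * flip
        logits = model(perturbed, features)      # the defense sits inside `model`
        loss = -F.cross_entropy(logits, labels)  # maximize classification loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Discretize: keep only the `budget` strongest flips (the budget is not
    # enforced during the relaxed search -- a simplification of this sketch).
    scores = torch.sigmoid(p.detach()).flatten()
    mask = torch.zeros_like(scores)
    mask[scores.topk(budget).indices] = 1.0
    mask = mask.view_as(adj)
    return adj + (1.0 - 2.0 * adj) * mask
```

The essential point, per the abstract, is that `model` is the full defended pipeline (graph cleaning, robust architecture, or robust training), so the perturbation adapts to the defense itself; details such as enforcing symmetry for undirected graphs are omitted here for brevity.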
Item Type: | Conference or Workshop Item (Paper)
---|---
Divisions: | Aleksandar Bojchevski (AB)
Conference: | NeurIPS Conference on Neural Information Processing Systems
Depositing User: | Aleksandar Bojchevski
Date Deposited: | 13 Oct 2022 04:36
Last Modified: | 13 Oct 2022 04:36
Primary Research Area: | NRA1: Trustworthy Information Processing
URI: | https://publications.cispa.saarland/id/eprint/3812