(2018) Killing Three Birds with one Gaussian Process: Analyzing Attack Vectors on Classification.
Abstract
The wide usage of Machine Learning (ML) has led to research on the attack vectors and vulnerabilities of these systems. Defenses in this area, however, remain an open problem and often lead to an arms race. We define a naive classifier that is secure at test time and show that a Gaussian Process (GP) is an instance of this classifier under two assumptions: one concerning the distances in the training data, the other concerning rejection at test time. Using these assumptions, we show that a classifier either is secure or generalizes and thus learns. Our analysis also points to another factor influencing robustness: the curvature of the classifier. This connection is already known for linear models, but GPs offer an ideal framework to study the relationship for nonlinear classifiers. We evaluate on five security and two computer vision datasets, applying test-time and training-time attacks as well as membership inference. We show that we only change which attacks are needed to succeed, rather than alleviating the threat. Only for membership inference is there a setting in which attacks are unsuccessful (<10% increase in accuracy over a random guess). Given these results, we define a classification scheme based on voting, ParGP, which allows us to decide how many points vote and how large the agreement on a class has to be. This ensures a classification output only when there is evidence for a decision, where evidence is parametrized. We evaluate this scheme and obtain promising results.
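To make the voting idea behind ParGP concrete, the following is a minimal sketch of a parametrized voting classifier as described in the abstract: a fixed number of training points "vote" on the label of a test input, and a prediction is returned only if the winning class reaches a required level of agreement; otherwise the classifier rejects. The function name, the use of kernel similarities to pick the voters, and the parameters `n_voters` and `min_agreement` are assumptions made for illustration and are not taken from the paper itself.

```python
import numpy as np

def voting_predict(kernel_row, train_labels, n_voters, min_agreement):
    """Hypothetical sketch of a ParGP-style vote with rejection.

    kernel_row    : similarities (e.g. GP kernel values) between one test
                    input and all training points, shape (n_train,)
    train_labels  : class labels of the training points, shape (n_train,)
    n_voters      : how many of the most similar training points may vote
    min_agreement : fraction of votes the winning class must reach
    Returns the predicted class, or None if the evidence is insufficient.
    """
    # select the n_voters training points most similar to the test input
    voters = np.argsort(kernel_row)[::-1][:n_voters]
    votes = train_labels[voters]

    # count the votes per class and check the agreement threshold
    classes, counts = np.unique(votes, return_counts=True)
    best = np.argmax(counts)
    if counts[best] / n_voters >= min_agreement:
        return classes[best]
    return None  # reject: not enough agreement among the voters
```

With `min_agreement = 1.0` the sketch only answers when all voters agree, which mirrors the abstract's goal of outputting a classification only when there is (parametrized) evidence for the decision.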
| Item Type | Article |
|---|---|
| Divisions | Michael Backes (InfSec) |
| Depositing User | Kathrin Grosse |
| Date Deposited | 02 Jul 2018 09:16 |
| Last Modified | 18 Jul 2019 11:54 |
| Primary Research Area | NRA2: Reliable Security Guarantees |
| URI | https://publications.cispa.saarland/id/eprint/2610 |