Adversarial Initialization - when your network performs the way I want -

Grosse, Kathrin and Trost, Thomas A. and Mosbach, Marius and Backes, Michael
(2019) Adversarial Initialization - when your network performs the way I want -.
ArXiv e-prints.


Abstract

The increase in computational power and available data has fueled the wide deployment of deep learning in production environments. Despite their successes, deep architectures are still poorly understood and costly to train. In this paper, we demonstrate how a simple recipe enables a market player to harm or delay the development of a competing product. Such a threat model is novel and has not been considered before. We derive the corresponding attacks and show their efficacy both formally and empirically. These attacks require access only to the initial, untrained weights of a network; no knowledge of the victim's problem domain or data is needed. A mere permutation of the initial weights is sufficient to limit the achieved accuracy to, for example, 50% on the MNIST dataset, or to double the required training time. While we show straightforward ways to mitigate the attacks, the respective steps are not yet part of developers' standard procedures.
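The abstract's key observation is that a permutation of the initial weights changes nothing a casual statistical check would catch, yet can alter the structure of the initialization. The toy sketch below (a hypothetical NumPy illustration, not necessarily the authors' exact construction) uses one such permutation: sorting each column of a randomly initialized weight matrix, which preserves every column's marginal distribution while making the columns strongly correlated with one another.

```python
import numpy as np

rng = np.random.default_rng(0)

# Benign initialization for a dense layer: small, zero-mean Gaussian weights.
fan_in, fan_out = 784, 128
w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

# Hypothetical adversarial permutation: sort the entries within each column.
# Sorting is a permutation, so the set of weight values per column -- and
# hence the mean, variance, and histogram -- is unchanged.
w_adv = np.sort(w, axis=0)

# Summary statistics cannot distinguish the two initializations...
assert np.allclose(w.mean(axis=0), w_adv.mean(axis=0))
assert np.allclose(w.var(axis=0), w_adv.var(axis=0))

# ...but the structure differs: sorted columns are nearly identical up to
# noise, so the units compute highly correlated pre-activations, which can
# slow or stall training.
corr_benign = np.corrcoef(w[:, 0], w[:, 1])[0, 1]
corr_adv = np.corrcoef(w_adv[:, 0], w_adv[:, 1])[0, 1]
print(f"benign |corr|: {abs(corr_benign):.3f}, permuted |corr|: {abs(corr_adv):.3f}")
```

This also illustrates why the mitigation mentioned in the abstract is not standard practice: checking only the distribution of the initial weights, which is what developers would naturally inspect, is blind to this kind of tampering.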
