Assessing Model-free Anomaly Detection in Industrial Control Systems Against Generic Concealment Attacks

Erba, Alessandro and Tippenhauer, Nils Ole
(2022) Assessing Model-free Anomaly Detection in Industrial Control Systems Against Generic Concealment Attacks.
In: Proceedings of the Annual Computer Security Applications Conference (ACSAC).

Full text: Erba_ACSAC_22.pdf (1MB)
Official URL: https://www.doi.org/10.1145/3564625.3564633

Abstract

In recent years, a number of model-free process-based anomaly detection schemes for Industrial Control Systems (ICS) have been proposed. Model-free anomaly detectors are trained directly on process data and do not require process knowledge. They have been validated on public datasets that contain only a limited set of attacks. As a result, the resilience of these schemes against general concealment attacks is unclear. In addition, there is no structured discussion of the properties verified by the detectors. In this work, we provide the first systematic analysis of such anomaly detection schemes, focusing on six model-free process-based anomaly detectors. We hypothesize that the detectors verify a combination of temporal, spatial, and statistical consistencies. To test this, we systematically analyse their resilience against generic concealment attacks. Each of our generic concealment attacks is designed to violate a specific consistency verified by a detector, and requires no knowledge of the attacked physical process or the detector. In addition, we compare against prior work attacks that were designed to target neural network-based detectors. Our results demonstrate that the evaluated model-free detectors are, in general, susceptible to generic concealment attacks. For each evaluated detector, at least one of our generic concealment attacks performs better than prior work attacks. In particular, the results allow us to show which specific consistencies are verified by each detector. We also find that prior work attacks targeting neural network architectures transfer surprisingly well to other architectures.
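To make the idea of a concealment attack concrete, the following is a minimal sketch (not taken from the paper) of one hypothetical variant: a replay-style attack that substitutes recorded benign sensor readings for the live, manipulated ones, so that the sequence seen by a detector remains temporally plausible. All names, array shapes, and the NumPy-based setup are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of a replay-style concealment attack.
    # Assumption: sensor data is a (T, n_sensors) array; a detector only
    # sees what the attacker forwards on the network.
    import numpy as np

    def replay_concealment(live_readings: np.ndarray,
                           benign_history: np.ndarray) -> np.ndarray:
        """Replace live (attacked) readings with benign readings recorded
        at the same point in the process cycle, hiding the manipulation."""
        # The detector receives a recorded benign trace instead of the
        # true process values, preserving its expected temporal pattern.
        return benign_history[: len(live_readings)].copy()

    # Toy usage: 100 time steps, 3 sensors.
    rng = np.random.default_rng(0)
    benign = rng.normal(0.0, 1.0, size=(100, 3))      # recorded normal data
    attacked = benign + np.array([5.0, 0.0, 0.0])     # sensor 0 manipulated
    concealed = replay_concealment(attacked, benign)  # what the detector sees
    assert np.allclose(concealed, benign[:100])

A detector that verifies only temporal consistency would accept such a replayed trace; per the abstract, the paper's attacks are constructed analogously to violate one targeted consistency (temporal, spatial, or statistical) at a time, without knowledge of the process or the detector.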
