SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders

Cong, Tianshuo and He, Xinlei and Zhang, Yang
(2022) SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders.
In: ACM Conference on Computer and Communications Security (CCS 2022).

Abstract

Self-supervised learning is an emerging machine learning (ML) paradigm. Compared to supervised learning, which leverages high-quality labeled datasets to achieve good performance, self-supervised learning relies on unlabeled datasets to pre-train powerful encoders that can then serve as feature extractors for various downstream tasks. The huge consumption of data and computational resources makes the encoders themselves valuable intellectual property of the model owner. Recent research has shown that an ML model's copyright is threatened by model stealing attacks, which aim to train a surrogate model that mimics the behavior of a given model. We empirically show that pre-trained encoders are highly vulnerable to model stealing attacks. However, most current copyright protection algorithms, such as fingerprinting and watermarking, concentrate on classifiers, and the intrinsic challenges of protecting the copyright of pre-trained encoders remain largely unstudied. We fill this gap by proposing SSLGuard, the first watermarking algorithm for pre-trained encoders. Given a clean pre-trained encoder, SSLGuard embeds a watermark into it and outputs a watermarked version. A shadow training technique is also applied to preserve the watermark under potential model stealing attacks. Our extensive evaluation shows that SSLGuard is effective in watermark injection and verification, and is robust against model stealing and other watermark removal attacks such as pruning and fine-tuning.
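The abstract describes model stealing against encoders as training a surrogate to mimic the victim's behavior, but does not spell out the attack loop. The sketch below illustrates one plausible instantiation, assuming a black-box attacker who queries the victim encoder for embeddings on unlabeled images and trains a surrogate to match them; the architectures, the cosine-similarity objective, and all names here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of a model stealing attack on a pre-trained encoder.
# Assumption: the attacker has black-box query access to the victim's
# embeddings and an unlabeled surrogate dataset. Architectures and the
# loss are placeholders, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

victim = models.resnet50(weights=None)     # stands in for the victim encoder
victim.fc = nn.Identity()                  # expose 2048-d embeddings
victim.eval()

surrogate = models.resnet18(weights=None)  # attacker's surrogate encoder
surrogate.fc = nn.Linear(512, 2048)        # project to the victim's dimension

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

def steal_step(images: torch.Tensor) -> float:
    """One stealing step: align the surrogate's embeddings with the victim's."""
    with torch.no_grad():
        target = victim(images)            # black-box query to the victim
    pred = surrogate(images)
    # Maximize cosine similarity between surrogate and victim embeddings.
    loss = 1.0 - F.cosine_similarity(pred, target, dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: iterate steal_step over batches drawn from any unlabeled dataset.
images = torch.randn(8, 3, 224, 224)       # dummy batch for illustration
print(steal_step(images))
```

A watermark that survives this attack must transfer from the victim to the surrogate, which is why SSLGuard trains against shadow copies of such surrogates during watermark injection.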
