SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models

Zhang, Boyang and Li, Zheng and Yang, Ziqing and He, Xinlei and Backes, Michael and Fritz, Mario and Zhang, Yang
(2024) SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models.
In: 33rd USENIX Security Symposium, Philadelphia, PA, USA.
Conference: USENIX Security Symposium
(In Press)


Abstract

While advanced machine learning (ML) models are deployed in numerous real-world applications, previous works have demonstrated that these models have security and privacy vulnerabilities. Extensive empirical research has been conducted in this field. However, most experiments are performed on target ML models trained by the security researchers themselves. Due to the high computational cost of training advanced models with complex architectures, researchers generally train only a few target models with relatively simple architectures on typical benchmark datasets. We argue that, to understand ML models' vulnerabilities comprehensively, experiments should be performed on a large set of models trained for various purposes (not just the purpose of evaluating ML attacks and defenses). To this end, we propose using publicly available models with weights from the Internet (public models) to evaluate attacks and defenses on ML models. We establish a database, named SecurityNet, containing 910 annotated image classification models. We then analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection, on these public models. Our evaluation empirically shows that the performance of these attacks/defenses can vary significantly on public models compared to self-trained models. We share SecurityNet with the research community and advocate that researchers perform experiments on public models to better demonstrate the effectiveness of their proposed methods in the future.
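To make the evaluation setting concrete, below is a minimal sketch of one of the abstract's attack classes, a loss-threshold membership inference test, run against a publicly available pretrained model. This is not the SecurityNet tooling: the use of torchvision's ResNet-18 as the "public model", the random placeholder data loaders, and the helper names (per_sample_losses, mia_auc, fake_loader) are assumptions chosen purely for illustration.

# A minimal, illustrative sketch (assumptions: torchvision's pretrained
# ResNet-18 as the "public model", random tensors as placeholder data) of a
# loss-threshold membership inference test. It is not the SecurityNet tooling.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

def per_sample_losses(model, loader, device="cpu"):
    # Per-example cross-entropy loss: the signal a simple loss-threshold
    # membership inference attack ranks on (members tend to have lower loss).
    model.eval().to(device)
    losses = []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses)

def mia_auc(member_losses, nonmember_losses):
    # AUC of the rule "lower loss => member", via the Mann-Whitney rank-sum
    # formulation; 0.5 means the attack is no better than random guessing.
    scores = torch.cat([-member_losses, -nonmember_losses])
    labels = torch.cat([torch.ones_like(member_losses), torch.zeros_like(nonmember_losses)])
    labels = labels[scores.argsort()]  # reorder labels by ascending score
    pos, neg = labels.sum(), (1 - labels).sum()
    ranks = torch.arange(1, labels.numel() + 1, dtype=torch.float)
    return float((ranks[labels == 1].sum() - pos * (pos + 1) / 2) / (pos * neg))

if __name__ == "__main__":
    model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)  # a public model
    # Placeholder member/non-member loaders; for a real study these must hold
    # samples known to be inside/outside the model's (often undocumented)
    # training set.
    def fake_loader(n):
        ds = torch.utils.data.TensorDataset(
            torch.randn(n, 3, 224, 224), torch.randint(0, 1000, (n,)))
        return torch.utils.data.DataLoader(ds, batch_size=8)
    auc = mia_auc(per_sample_losses(model, fake_loader(32)),
                  per_sample_losses(model, fake_loader(32)))
    print(f"loss-threshold MIA AUC: {auc:.3f}")  # ~0.5 on random placeholders

On random placeholder data the AUC hovers near 0.5 by construction; the paper's point is precisely that such attack scores, when measured on real public models rather than self-trained targets, can diverge substantially from commonly reported results.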
