Practical defences against model inversion attacks for split neural networks
Authors
Titcombe, Tom; Hall, Adam James; Papadopoulos, Pavlos; Romanini, Daniele
Abstract
We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be successfully performed with limited knowledge of the data distribution by the attacker. We propose a simple additive noise method to defend against model inversion, finding that the method can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data.
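To illustrate the additive noise defence described in the abstract, the sketch below shows a minimal, hypothetical PyTorch client-side segment of a split network that perturbs its intermediate activation (the "smashed data") before sending it to the computation server. The layer sizes, the choice of a Laplace noise distribution, and the noise scale are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class NoisySplitClient(nn.Module):
    """Client-side segment of a split network that adds noise to its
    intermediate activation before it is sent to the computation server.
    Architecture and noise scale are illustrative assumptions."""

    def __init__(self, noise_scale: float = 0.5):
        super().__init__()
        # Hypothetical client-side layers (the first segment of the split model).
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256),
            nn.ReLU(),
        )
        # Additive noise drawn from a Laplace distribution; the scale controls
        # the privacy/accuracy trade-off (value here is assumed, not from the paper).
        self.noise = torch.distributions.Laplace(0.0, noise_scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        # Perturb the intermediate representation before it leaves the client,
        # so the computation server only ever observes the noised activation.
        return h + self.noise.sample(h.shape)


if __name__ == "__main__":
    client = NoisySplitClient(noise_scale=0.5)
    x = torch.rand(4, 1, 28, 28)   # batch of MNIST-sized inputs
    smashed = client(x)            # "smashed data" sent to the server
    print(smashed.shape)           # torch.Size([4, 256])
```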
Citation
Titcombe, T., Hall, A. J., Papadopoulos, P., & Romanini, D. (2021, May). Practical defences against model inversion attacks for split neural networks. Paper presented at the ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021), Online.
| Presentation Conference Type | Conference Paper (unpublished) |
|---|---|
| Conference Name | ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021) |
| Start Date | May 7, 2021 |
| Publication Date | Apr 21, 2021 |
| Deposit Date | Oct 31, 2022 |
| Publicly Available Date | Nov 1, 2022 |
| Public URL | http://researchrepository.napier.ac.uk/Output/2946016 |
| Publisher URL | https://dp-ml.github.io/2021-workshop-ICLR/ |
Files
Practical Defences Against Model Inversion Attacks For Split Neural Networks (PDF, 517 Kb)