
Practical defences against model inversion attacks for split neural networks

Titcombe, Tom; Hall, Adam James; Papadopoulos, Pavlos; Romanini, Daniele


Tom Titcombe

Adam James Hall

Daniele Romanini


We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be performed successfully even when the attacker has limited knowledge of the data distribution. We propose a simple additive noise method to defend against model inversion, finding that it can significantly reduce attack efficacy with an acceptable loss in model accuracy on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data.
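The additive noise defence described in the abstract can be sketched as follows: in split learning, the client computes intermediate ("smashed") activations and perturbs them with noise before sending them to the computational server, degrading the server's ability to invert the activations back to the input. This is a minimal illustrative sketch, not the paper's implementation; the toy one-layer client network, the Laplace noise distribution, and the `noise_scale` parameter are assumptions for illustration.

```python
import numpy as np

def client_forward(x, weights, noise_scale=0.1, rng=None):
    """Client-side half of a split network (toy sketch).

    Computes intermediate activations through a single ReLU layer,
    then adds zero-mean Laplace noise before the activations are
    sent to the (potentially malicious) computational server.
    `noise_scale` controls the privacy/accuracy trade-off.
    """
    if rng is None:
        rng = np.random.default_rng()
    h = np.maximum(0.0, x @ weights)  # toy client sub-network: one ReLU layer
    noise = rng.laplace(loc=0.0, scale=noise_scale, size=h.shape)
    return h + noise  # noisy "smashed" activations leave the client

# Usage sketch: a flattened 28x28 MNIST-style input passed through
# a randomly initialised client layer.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 784))
w = rng.normal(size=(784, 128)) * 0.01
smashed = client_forward(x, w, noise_scale=0.1, rng=rng)
```

Larger `noise_scale` values make inversion harder but degrade the accuracy of the server-side half of the model, which is the trade-off the paper evaluates empirically.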

Presentation Conference Type Conference Paper (unpublished)
Conference Name ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021)
Start Date May 7, 2021
Publication Date Apr 21, 2021
Deposit Date Oct 31, 2022
Publicly Available Date Nov 1, 2022

