Practical defences against model inversion attacks for split neural networks
(2021)
Presentation / Conference Contribution
Titcombe, T., Hall, A. J., Papadopoulos, P., & Romanini, D. (2021, May). Practical defences against model inversion attacks for split neural networks. Paper presented at ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021), Online
We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be successfully performed with limited knowledge...