
Can Federated Models Be Rectified Through Learning Negative Gradients?

Tahir, Ahsen; Tan, Zhiyuan; Babaagba, Kehinde O.



Federated Learning (FL) is a method to train machine learning (ML) models in a decentralised manner while preserving the privacy of data from multiple clients. However, FL is vulnerable to malicious attacks, such as poisoning attacks, and is challenged by the GDPR’s “right to be forgotten”. This paper introduces a negative gradient-based machine unlearning technique to address these issues. Experiments on the MNIST dataset show that subtracting local model parameters can remove the influence of the respective training data on the global model and consequently “unlearn” the model in the FL paradigm. Although the performance of the resulting global model decreases, the proposed technique maintains the validation accuracy of the model above 90%, a cost that is acceptable for an FL model. The experimental work demonstrates that, in application areas where data deletion in ML is a necessity, this approach represents a significant advancement in the development of secure and robust FL systems.
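The core idea of subtracting a client's contribution from the aggregated model can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the client count, parameter shapes, and FedAvg-style mean aggregation are assumptions for the sake of the sketch.

```python
import numpy as np

np.random.seed(0)

# Assumed setup: 5 clients, each holding a locally trained parameter vector.
n_clients = 5
dim = 10
client_params = [np.random.randn(dim) for _ in range(n_clients)]

# FedAvg-style aggregation: the global model is the mean of client parameters.
global_params = np.mean(client_params, axis=0)

# "Unlearn" client 0 by subtracting its contribution from the aggregate
# and renormalising over the remaining clients.
forget = client_params[0]
unlearned = (n_clients * global_params - forget) / (n_clients - 1)

# In this linear-aggregation sketch, the result coincides with
# re-aggregating the model without client 0's parameters.
retrained = np.mean(client_params[1:], axis=0)
assert np.allclose(unlearned, retrained)
```

In a real FL system the subtraction happens over accumulated gradient updates across rounds rather than a single linear average, which is why the paper observes a drop in global-model accuracy after unlearning.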

Presentation Conference Type Conference Paper (Published)
Conference Name 13th EAI International Conference, BDTA 2023
Online Publication Date Jan 31, 2024
Publication Date 2024
Deposit Date Feb 2, 2024
Publicly Available Date Feb 1, 2025
Publisher Springer
Pages 18-32
Series Title Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (LNICST)
Series Number 555
Series ISSN 1867-8211
Book Title Big Data Technologies and Applications
ISBN 978-3-031-52264-2
Keywords Federated Learning, Machine Unlearning, Negative Gradients, Model Rectification