
Evaluation of Federated Machine Unlearning using Membership Inference Attacks

Project Description

Federated Learning (FL) is a distributed Machine Learning (ML) training technique that enables accurate predictions without sharing sensitive data: each participant trains a local model and shares only the updated model parameters, never the underlying data. As a result, FL has seen applications in industries such as healthcare and finance.

FL must nevertheless comply with privacy legislation such as the GDPR, which gives individuals the right to request that data about them be removed. Deleting an individual's records from databases is not sufficient, because trained models retain what they have learned from that data. Like traditional ML models, FL systems are also susceptible to attacks. One example is membership inference, which infers whether particular samples were used to train a model and can therefore breach personal data. Other attacks, such as data poisoning, can degrade the accuracy of FL systems.

For these reasons, methods are needed to remove data from trained models efficiently and accurately, whether at the request of a user or to mitigate the damage of an attack against the FL system. This is known as machine unlearning. Various machine unlearning approaches have been proposed for FL, but there is limited research on verifying that these approaches actually remove the data. Such verification matters: it builds trust among stakeholders that their deletion requests are processed in compliance with legislation, and it gives security teams assurance that unlearning will mitigate the damage of certain attacks.

This proposed PhD project therefore aims to explore the use of membership inference attacks to verify that samples are effectively unlearned within FL systems. Using this type of attack together with several key metrics, the project will develop a framework to evaluate current machine unlearning approaches for FL. Building on this evaluation, a novel machine unlearning approach for FL will be designed, developed, and evaluated.
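As a rough illustration of the verification idea, the sketch below applies a simple loss-based membership test (in the style of threshold attacks from the membership inference literature) to a model before and after an unlearning step. This is a minimal, hypothetical Python sketch, not the project's actual framework: the scikit-learn-style `predict_proba` interface, the data splits, and all names are illustrative assumptions.

```python
import numpy as np

def membership_scores(model, X, y):
    """Per-sample cross-entropy loss as a crude membership signal:
    samples the model was trained on tend to have lower loss."""
    probs = model.predict_proba(X)                       # shape (n, n_classes)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)  # y: integer labels

def unlearning_check(model_before, model_after,
                     X_forget, y_forget, X_holdout, y_holdout):
    """Loss-threshold membership test around an unlearning step.

    If unlearning succeeded, the 'forgotten' samples should look no
    more member-like under model_after than samples the model never
    saw (the holdout set).
    """
    loss_before = membership_scores(model_before, X_forget, y_forget).mean()
    loss_after = membership_scores(model_after, X_forget, y_forget).mean()
    loss_holdout = membership_scores(model_after, X_holdout, y_holdout).mean()

    print(f"forgotten-set loss: {loss_before:.4f} -> {loss_after:.4f}")
    print(f"holdout-set loss under unlearned model: {loss_holdout:.4f}")

    # A residual membership signal shows up as the forgotten samples
    # remaining noticeably 'easier' for the model than unseen samples.
    return loss_after >= loss_holdout
```

A real evaluation would use stronger attacks (for example, shadow-model attacks) and report attack accuracy or AUC rather than a single threshold comparison, but contrasting forgotten samples against never-seen samples is the essence of MIA-based unlearning verification.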

Project Acronym: FedUnlearner
Status: Project Live
Funder(s): Carnegie Trust for the Universities of Scotland
Value: £73,564.00
Project Dates: Oct 1, 2023 - Sep 30, 2026
