Evaluation of Federated Machine Unlearning using Membership Inference Attacks

People Involved

Dr Thomas Tan (Z.Tan@napier.ac.uk), Associate Professor

Project Description
Federated Learning (FL) is a distributed Machine Learning (ML) training technique that enables accurate predictions without sharing sensitive data: each participant trains a local model and shares only the resulting model updates, never the raw data. As a result, FL has seen applications in various industries, including healthcare and finance.

FL must still comply with privacy legislation such as GDPR, which includes the right of an individual to request that any data about them be removed. Removing the individual's data from databases alone is not sufficient, because the trained models have already learned from that data. Moreover, just like traditional ML models, FL is susceptible to attacks. One example is membership inference, in which an attacker determines whether particular data was used to train an ML model, resulting in a potential breach of personal data. Other attacks, such as data poisoning, can degrade the accuracy of FL systems.

Due to these issues, it is crucial that methods exist to remove data efficiently and accurately, whether at the request of a user or to mitigate the damage of an attack against the FL system. This is known as machine unlearning. Various machine unlearning approaches for FL have been proposed; however, there is limited research on methods to verify that such approaches actually remove the data. This verification is important because it builds trust among stakeholders that their deletion requests will be processed in compliance with legislation, and it provides security teams with assurance that their methods will mitigate the damage of certain attacks. This proposed PhD project therefore aims to explore the use of membership inference attacks to verify that samples are being effectively unlearned within FL systems.
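To illustrate the attack the project builds on: in its simplest black-box form, membership inference exploits the tendency of models to be more confident on samples they were trained on. The sketch below is a minimal, hypothetical confidence-threshold variant (the function name, threshold value, and sample confidences are illustrative assumptions, not the project's method):

```python
def infer_membership(confidences, threshold=0.9):
    """Flag queried samples as likely training members.

    `confidences` holds the model's top-class probability for each
    queried sample; samples scoring above `threshold` are flagged as
    likely members of the training set. Real attacks typically train
    a shadow-model-based classifier instead of a fixed threshold.
    """
    return [c > threshold for c in confidences]

# Hypothetical confidences: members often score higher than non-members.
print(infer_membership([0.99, 0.55, 0.97, 0.61]))
# → [True, False, True, False]
```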
Using this type of attack and several key metrics, this project will develop a framework to evaluate current machine unlearning approaches for FL. From this, a novel machine unlearning approach for FL will be designed, developed, and evaluated.
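One way such a framework could quantify unlearning is to run the same membership inference attack on the forgotten samples before and after unlearning and measure the drop in attack success. A minimal sketch, assuming a hypothetical relative-drop metric (not a metric stated by the project):

```python
def unlearning_score(hits_before, hits_after):
    """Relative drop in membership-inference success on the forget set.

    `hits_before` / `hits_after` are booleans: did the attack flag each
    forgotten sample as a training member before / after unlearning?
    A score near 1.0 suggests the samples were effectively unlearned;
    near 0.0 suggests the model still leaks their membership.
    """
    before = sum(hits_before) / len(hits_before)
    after = sum(hits_after) / len(hits_after)
    if before == 0:
        return 0.0
    return (before - after) / before

# Hypothetical attack results on four forgotten samples.
print(unlearning_score([True, True, True, False],
                       [False, True, False, False]))
# → 0.6666666666666666
```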
| Project Acronym | FedUnlearner |
|---|---|
| Status | Project Live |
| Funder(s) | Carnegie Trust for the Universities of Scotland |
| Value | £73,564.00 |
| Project Dates | Oct 1, 2023 - Sep 30, 2026 |