
Research Repository


All Outputs (3)

Evaluating Tooling and Methodology when Analysing Bitcoin Mixing Services After Forensic Seizure (2021)
Presentation / Conference Contribution
Young, E. H., Chrysoulas, C., Pitropakis, N., Papadopoulos, P., & Buchanan, W. J. (2021, October). Evaluating Tooling and Methodology when Analysing Bitcoin Mixing Services After Forensic Seizure. Paper presented at the International Conference on Data Analytics for Business and Industry (ICDABI 2021).

Little or no research has been directed at forensic analysis of the Bitcoin mixing or 'tumbling' services themselves. This work is intended to examine effective tooling and methodology for recovering forensic artifacts from tw...

PyVertical: A Vertical Federated Learning Framework for Multi-headed SplitNN (2021)
Presentation / Conference Contribution
Romanini, D., Hall, A. J., Papadopoulos, P., Titcombe, T., Ismail, A., Cebere, T., …Hoeh, M. A. (2021, May). PyVertical: A Vertical Federated Learning Framework for Multi-headed SplitNN. Poster presented at the ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021), Online.

We introduce PyVertical, a framework supporting vertical federated learning using split neural networks. The proposed framework allows a data scientist to train neural networks on data features vertically partitioned across multiple owners while keep...
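As a rough illustration of the split-neural-network setup the abstract describes (this is not PyVertical's actual API; the feature split, layer sizes, and training loop below are assumptions made for the sketch), two data owners each hold a disjoint subset of the feature columns and run only their lower segment of the network, while a data scientist holds the upper segment:

```python
import torch
import torch.nn as nn

# Hypothetical vertical partition: owner A holds 6 feature columns, owner B holds 4.
segment_a = nn.Sequential(nn.Linear(6, 16), nn.ReLU())   # runs on data owner A
segment_b = nn.Sequential(nn.Linear(4, 16), nn.ReLU())   # runs on data owner B
head = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 2))  # runs on the data scientist's server

params = list(segment_a.parameters()) + list(segment_b.parameters()) + list(head.parameters())
optimiser = torch.optim.SGD(params, lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy batch of 32 ID-aligned examples whose columns are split between the two owners.
features_a, features_b = torch.randn(32, 6), torch.randn(32, 4)
labels = torch.randint(0, 2, (32,))

for _ in range(10):
    optimiser.zero_grad()
    # Each owner computes activations locally; only these, not raw features, cross the boundary.
    intermediate = torch.cat([segment_a(features_a), segment_b(features_b)], dim=1)
    loss = loss_fn(head(intermediate), labels)
    loss.backward()   # gradients flow back through the cut to each owner's segment
    optimiser.step()
```

In the real multi-party setting each segment and its optimiser would live on a different machine, with only the intermediate tensors and their gradients exchanged; the sketch simulates all parties in one process for brevity.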

Practical defences against model inversion attacks for split neural networks (2021)
Presentation / Conference Contribution
Titcombe, T., Hall, A. J., Papadopoulos, P., & Romanini, D. (2021, May). Practical defences against model inversion attacks for split neural networks. Paper presented at the ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021), Online.

We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be successfully performed with limited knowledge...
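One minimal sketch of a practical defence in this setting, assuming the mitigation is additive noise on the intermediate activations before they leave the data owner (the layer sizes and Laplace scale below are illustrative assumptions, not necessarily the paper's exact configuration):

```python
import torch
import torch.nn as nn

# Data-owner segment of a split network; the server only ever sees activations, not raw inputs.
owner_segment = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
server_segment = nn.Sequential(nn.Linear(128, 10))

def protected_forward(x: torch.Tensor, scale: float = 0.1) -> torch.Tensor:
    """Perturb the intermediate activations before they cross the split.

    A malicious computational server could otherwise try to invert the
    activations back to the raw input; adding noise trades some accuracy
    for resistance to that inversion.
    """
    activations = owner_segment(x)
    noise = torch.distributions.Laplace(0.0, scale).sample(activations.shape)
    return activations + noise

batch = torch.randn(16, 784)                      # e.g. flattened 28x28 images
logits = server_segment(protected_forward(batch))
print(logits.shape)                               # torch.Size([16, 10])
```

The noise scale controls the usual privacy-utility trade-off: larger perturbations make reconstruction harder but degrade the accuracy of the jointly trained model.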