Research Repository

All Outputs (5)

Privacy and Trust Redefined in Federated Machine Learning (2021)
Journal Article
Papadopoulos, P., Abramson, W., Hall, A. J., Pitropakis, N., & Buchanan, W. J. (2021). Privacy and Trust Redefined in Federated Machine Learning. Machine Learning and Knowledge Extraction, 3(2), 333-356. https://doi.org/10.3390/make3020017

A common privacy issue in traditional machine learning is that data must be disclosed for training. In situations with highly sensitive data, such as healthcare records, accessing this information is challenging and often prohibited...
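The central mechanism behind federated learning, federated averaging, can be sketched in a few lines: each client trains on data that never leaves its machine, and only the resulting model weights are shared and averaged by a coordinator. Below is a minimal NumPy illustration of that loop; the linear model, synthetic client data, and hyperparameters are assumptions for illustration, not the setup used in the paper.

```python
# Minimal federated-averaging (FedAvg) sketch in plain NumPy. Raw data stays
# with each client; only model weights travel to the server for averaging.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """Run a few gradient-descent steps for least-squares on local data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5/n * ||Xw - y||^2
        w = w - lr * grad
    return w

# Three clients, each holding private samples from the same underlying model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):
    # Each client trains locally; only the updated weights are sent back.
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)  # the server averages the updates

print("recovered weights:", w_global)  # approaches [2, -1]
```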

Privacy-preserving Surveillance Methods using Homomorphic Encryption (2020)
Conference Proceeding
Bowditch, W., Abramson, W., Buchanan, W. J., Pitropakis, N., & Hall, A. J. (2020). Privacy-preserving Surveillance Methods using Homomorphic Encryption. In ICISSP: Proceedings of the 6th International Conference on Information Systems Security and Privacy (pp. 240-248). https://doi.org/10.5220/0008864902400248

Data analysis and machine learning methods often involve processing cleartext data, which can breach the right to privacy. Increasingly, we must use encryption to protect all states of the data: in transit, at rest, and in memory...
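As a concrete illustration of computing on data that stays encrypted, here is a minimal sketch using the Paillier scheme via the python-phe library; the library choice and the sensor-reading example are assumptions for illustration, and the paper may use a different scheme or implementation. Paillier is additively homomorphic, so sums and scalar multiples can be evaluated directly on ciphertexts.

```python
# Additively homomorphic computation with Paillier via python-phe.
# The party holding only the public key can aggregate encrypted values
# without ever seeing the plaintexts.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

readings = [3, 5, 7]                       # hypothetical plaintext values
enc = [public_key.encrypt(x) for x in readings]

# Addition of ciphertexts and multiplication by a plaintext scalar
# both work directly on the encrypted values.
enc_total = enc[0] + enc[1] + enc[2]
enc_scaled = enc_total * 2

print(private_key.decrypt(enc_total))      # 15
print(private_key.decrypt(enc_scaled))     # 30
```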

A Distributed Trust Framework for Privacy-Preserving Machine Learning (2020)
Conference Proceeding
Abramson, W., Hall, A. J., Papadopoulos, P., Pitropakis, N., & Buchanan, W. J. (2020). A Distributed Trust Framework for Privacy-Preserving Machine Learning. In Trust, Privacy and Security in Digital Business (pp. 205-220). https://doi.org/10.1007/978-3-030-58986-8_14

When training a machine learning model, it is standard procedure for the researcher to have full knowledge of both the data and the model. However, this engenders a lack of trust between data owners and data scientists. Data owners are justifiably reluctant...

Insider Threat Detection Using Supervised Machine Learning Algorithms on an Extremely Imbalanced Dataset (2020)
Journal Article
Moradpoor, N., & Hall, A. (2020). Insider Threat Detection Using Supervised Machine Learning Algorithms on an Extremely Imbalanced Dataset. International Journal of Cyber Warfare and Terrorism, 10(2). https://doi.org/10.4018/IJCWT.2020040101

An insider threat can take on many forms and fall under different categories, including the malicious insider, the careless/unaware/uneducated/naïve employee, and the third-party contractor. A malicious insider, which can be a criminal agent recruited as a...
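A minimal sketch of the general technique, supervised classification on an extremely imbalanced dataset, is shown below assuming scikit-learn; the synthetic data with a roughly 1% positive class and the class weighting are illustrative, not the organizational dataset or the exact algorithms evaluated in the paper.

```python
# Supervised classification under extreme class imbalance.
# class_weight="balanced" reweights the rare class during training, and
# classification_report surfaces per-class recall, which matters here:
# plain accuracy looks excellent even for a model that never flags anyone.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# ~1% positive ("insider") class to mimic extreme imbalance.
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```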

Predicting Malicious Insider Threat Scenarios Using Organizational Data and a Heterogeneous Stack-Classifier (2019)
Conference Proceeding
Hall, A. J., Pitropakis, N., Buchanan, W. J., & Moradpoor, N. (2019). Predicting Malicious Insider Threat Scenarios Using Organizational Data and a Heterogeneous Stack-Classifier. In 2018 IEEE International Conference on Big Data (Big Data). https://doi.org/10.1109/BigData.2018.8621922

Insider threats continue to present a major challenge for the information security community. Despite constant research in this area, a substantial gap still exists between the requirements of this community and the solutions that are currently...
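The heterogeneous stacking idea, several different base learners whose predictions are combined by a meta-learner, can be sketched with scikit-learn's StackingClassifier; the specific estimators below are assumptions for illustration rather than the exact ensemble used in the paper.

```python
# Heterogeneous stacking: diverse level-0 models (a random forest and an SVM)
# feed their predictions to a logistic-regression meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```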