
Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects

Sarker, Iqbal H.; Janicke, Helge; Mohsin, Ahmad; Gill, Asif; Maglaras, Leandros

Authors

Iqbal H. Sarker

Helge Janicke

Ahmad Mohsin

Asif Gill

Leandros Maglaras

Abstract

Digital twins (DTs) are an emerging digitalization technology with a significant impact on today's innovations in both industry and research. DTs can greatly enhance our society and quality of life through the virtualization of real-world physical systems, providing deeper insights into their operations and assets and strengthening their resilience through real-time monitoring and proactive maintenance. However, DTs also pose significant security risks: intellectual property is encoded and made more accessible, and their continuous synchronization with physical counterparts broadens the attack surface. The rapid proliferation and dynamism of cyber threats in today's digital environments motivate the development of automated and intelligent cyber solutions. Today's industrial transformation relies heavily on artificial intelligence (AI), including machine learning (ML) and data-driven technologies that allow machines to perform tasks such as self-monitoring, investigation, diagnosis, future prediction, and decision-making intelligently. However, to effectively employ AI-based models in the context of cybersecurity, human-understandable explanations and trustworthiness are significant factors in real-world decision-making. This article provides an extensive study of explainable AI (XAI)-based cybersecurity modeling through a taxonomy of AI and XAI methods that can assist security analysts and professionals in comprehending system functions, identifying potential threats and anomalies, and ultimately addressing them in DT environments in an intelligent manner. We discuss how these methods can play a key role in solving contemporary cybersecurity issues in various real-world applications. We conclude by identifying crucial challenges and avenues for further research, as well as directions on how professionals and researchers might approach and model future-generation cybersecurity in this emerging field.

Citation

Sarker, I. H., Janicke, H., Mohsin, A., Gill, A., & Maglaras, L. (online). Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects. ICT Express, https://doi.org/10.1016/j.icte.2024.05.007

Journal Article Type: Article
Acceptance Date: May 18, 2024
Online Publication Date: May 21, 2024
Deposit Date: May 24, 2024
Publicly Available Date: May 27, 2024
Electronic ISSN: 2405-9595
Publisher: Elsevier
Peer Reviewed: Peer Reviewed
DOI: https://doi.org/10.1016/j.icte.2024.05.007
Keywords: Cybersecurity; Explainable AI; Machine learning; Data-driven; Automation; Intelligent decision-making; Trustworthiness; Digital twin
