Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects
Authors
Sarker, Iqbal H.; Janicke, Helge; Mohsin, Ahmad; Gill, Asif; Maglaras, Leandros
Abstract
Digital twins (DTs) are an emerging digitalization technology with a huge impact on today’s innovations in both industry and research. DTs can significantly enhance our society and quality of life through the virtualization of real-world physical systems, providing greater insight into their operations and assets and enhancing their resilience through real-time monitoring and proactive maintenance. DTs also pose significant security risks, as intellectual property is encoded and made more accessible, and as they are continuously synchronized with their physical counterparts. The rapid proliferation and dynamism of cyber threats in today’s digital environments motivate the development of automated and intelligent cyber solutions. Today’s industrial transformation relies heavily on artificial intelligence (AI), including machine learning (ML) and data-driven technologies that allow machines to perform tasks such as self-monitoring, investigation, diagnosis, future prediction, and decision-making intelligently. However, to effectively employ AI-based models in the context of cybersecurity, human-understandable explanations and their trustworthiness are significant factors when making decisions in real-world scenarios. This article provides an extensive study of explainable AI (XAI)-based cybersecurity modeling through a taxonomy of AI and XAI methods that can assist security analysts and professionals in comprehending system functions, identifying potential threats and anomalies, and ultimately addressing them in DT environments in an intelligent manner. We discuss how these methods can play a key role in solving contemporary cybersecurity issues in various real-world applications. We conclude this paper by identifying crucial challenges and avenues for further research, as well as directions on how professionals and researchers might approach and model future-generation cybersecurity in this emerging field.
Citation
Sarker, I. H., Janicke, H., Mohsin, A., Gill, A., & Maglaras, L. (online). Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects. ICT Express. https://doi.org/10.1016/j.icte.2024.05.007
| Journal Article Type | Article |
| --- | --- |
| Acceptance Date | May 18, 2024 |
| Online Publication Date | May 21, 2024 |
| Deposit Date | May 24, 2024 |
| Publicly Available Date | May 27, 2024 |
| Electronic ISSN | 2405-9595 |
| Publisher | Elsevier |
| Peer Reviewed | Peer Reviewed |
| DOI | https://doi.org/10.1016/j.icte.2024.05.007 |
| Keywords | Cybersecurity; Explainable AI; Machine learning; Data-driven; Automation; Intelligent decision-making; Trustworthiness; Digital twin |
Files
Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects (proof), PDF, 1.9 MB