Over the past few decades, Machine Learning (ML)-based intrusion detection systems (IDS) have become increasingly popular and continue to show remarkable performance in detecting attacks. However, the lack of transparency in their decision-making process and the scarcity of attack data for training pose major challenges for the development of ML-based IDS for the Internet of Things (IoT). Employing anomaly detection methods and interpreting predictions in terms of feature contributions, or performing feature-based impact analysis, can therefore increase stakeholders' confidence. To this end, this paper presents a novel framework for IoT security monitoring that combines deep autoencoder models with Explainable Artificial Intelligence (XAI) to verify the credibility and certainty of attack detection by ML-based IDS. Our approach reduces the black-box nature of the ML decision-making process in IoT security monitoring by explaining why a prediction is made and by quantifying which features influence the prediction and to what extent. These explanations are generated with SHapley Additive exPlanations (SHAP), which links optimal credit allocation to local explanations. The framework was evaluated on the USB-IDS benchmark dataset, achieving a detection accuracy of 84% for benign traffic and 100% for attack traffic. Our experimental results show that integrating XAI with the autoencoder model not only obviates the need for malicious data during training but also provides attack certainty for detected anomalies, demonstrating the validity of the proposed methodology.
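The detection pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a linear autoencoder (fit in closed form via SVD, since a linear autoencoder with a bottleneck is equivalent to PCA), synthetic data standing in for USB-IDS features, a 95th-percentile anomaly threshold, and per-feature reconstruction error as a crude stand-in for the paper's SHAP attributions.

```python
import numpy as np

# Synthetic stand-in for benign USB-IDS traffic features (an assumption:
# benign traffic occupies a low-dimensional subspace of feature space).
rng = np.random.default_rng(42)
latent = rng.normal(size=(500, 3))
mix = np.array([[1., 0., 0., 1., 0., 0.],
                [0., 1., 0., 0., 1., 0.],
                [0., 0., 1., 0., 0., 1.]])
benign = latent @ mix + 0.01 * rng.normal(size=(500, 6))

# A linear autoencoder with a 3-unit bottleneck is equivalent to PCA,
# so we fit it in closed form via SVD instead of gradient descent.
mu = benign.mean(axis=0)
_, _, Vt = np.linalg.svd(benign - mu, full_matrices=False)
W = Vt[:3]                                  # principal axes = decoder rows

def reconstruct(x):
    """Encode to the 3-dim bottleneck, then decode back to feature space."""
    return (np.asarray(x) - mu) @ W.T @ W + mu

def recon_error(x):
    """Squared reconstruction error: the anomaly score."""
    return np.sum((np.asarray(x) - reconstruct(x)) ** 2, axis=-1)

# Threshold set at the 95th percentile of benign scores (an assumption;
# the abstract does not specify the threshold-selection procedure).
threshold = np.percentile(recon_error(benign), 95)

def is_attack(x):
    """Flag any point whose reconstruction error exceeds the threshold.
    Only benign data was used for fitting, mirroring the paper's claim
    that no malicious training data is required."""
    return recon_error(x) > threshold

def feature_contributions(x):
    """Per-feature squared error: a simple attribution used here in place
    of SHAP; the paper computes Shapley values over the anomaly score
    (e.g. via the `shap` library) to explain each detection."""
    return (np.asarray(x) - reconstruct(x)) ** 2

# A point orthogonal to the benign subspace cannot be reconstructed well.
attack_point = np.array([1., 0., 0., -1., 0., 0.])
print(is_attack(attack_point))              # True: large reconstruction error
print(feature_contributions(attack_point).round(2))
```

The attribution step shows which features drive the anomaly score for a flagged point, which is the quantity the framework hands to SHAP to justify each detection.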
Sampath Kalutharage, C., Liu, X., & Chrysoulas, C. (2022). Explainable AI and Deep Autoencoders Based Security Framework for IoT Network Attack Certainty (Extended Abstract). In Attacks and Defenses for the Internet-of-Things: 5th International Workshop, ADIoT 2022 (pp. 41–50). https://doi.org/10.1007/978-3-031-21311-3_8