Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence
Hassija, Vikas; Chamola, Vinay; Mahapatra, Atmesh; Singal, Abhinandan; Goel, Divyansh; Huang, Kaizhu; Scardapane, Simone; Spinelli, Indro; Mahmud, Mufti; Hussain, Amir
Abstract
Recent years have seen tremendous growth in Artificial Intelligence (AI)-based methodological development across a broad range of domains. In this rapidly evolving field, a large number of methods are being reported using machine learning (ML) and Deep Learning (DL) models. The majority of these models are inherently complex and lack explanations of their decision-making process, causing them to be termed 'black-box' models. One of the major bottlenecks to adopting such models in mission-critical application domains, such as banking, e-commerce, healthcare, and public services and safety, is the difficulty in interpreting them. Due to the rapid proliferation of these AI models, explaining their learning and decision-making processes is becoming harder, yet such models require transparency and predictability. Moreover, finding flaws in these black-box models, so as to reduce their false negative and false positive outcomes, remains difficult and inefficient. Aiming to collate the current state-of-the-art in interpreting black-box models, this study provides a comprehensive analysis of explainable AI (XAI) models. The development of XAI is reviewed meticulously through careful selection and analysis of the current state-of-the-art of XAI research. The study also provides a comprehensive and in-depth evaluation of XAI frameworks and their efficacy, to serve as a starting point for applied and theoretical researchers in XAI. Towards the end, it highlights emerging and critical issues in XAI research, showcasing major model-specific trends towards better explanation, enhanced transparency, and improved prediction accuracy.
Citation
Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., Scardapane, S., Spinelli, I., Mahmud, M., & Hussain, A. (2024). Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognitive Computation, 16(1), 45-74. https://doi.org/10.1007/s12559-023-10179-8
| Field | Value |
|---|---|
| Journal Article Type | Article |
| Acceptance Date | Jul 10, 2023 |
| Online Publication Date | Aug 24, 2023 |
| Publication Date | Jan 1, 2024 |
| Deposit Date | Jan 22, 2024 |
| Publicly Available Date | Jan 22, 2024 |
| Journal | Cognitive Computation |
| Print ISSN | 1866-9956 |
| Publisher | Springer |
| Peer Reviewed | Peer Reviewed |
| Volume | 16 |
| Issue | 1 |
| Pages | 45-74 |
| DOI | https://doi.org/10.1007/s12559-023-10179-8 |
| Keywords | Transparency, XAI, Black-box models, Interpretability, Responsible AI, Machine learning |
| Public URL | http://researchrepository.napier.ac.uk/Output/3487848 |
Files
Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence
(2.2 Mb)
PDF
Publisher Licence URL
http://creativecommons.org/licenses/by/4.0/