Rigorous Graphical Explainable AI for Higher-Risk Applications

Project Description

Deep Neural Networks (DNNs) are common in many areas, such as human-robot interaction (HRI), natural language processing, and planning tasks. However, their use in high-risk industrial applications is limited by: 1. their lack of interpretability and explainability; 2. their unquantified robustness; and 3. their requirement for large amounts of training data.

This project will use newly developed combinations of advanced classical robust signal processing and mathematical techniques to provide explainable machine learning alternatives for common DNN tasks. This approach makes modelling assumptions explicit while using a small fraction of the variables required by DNNs, which typically involve tens of millions of parameters. This can greatly increase training efficiency, aid explainability, and improve robustness.
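
The project summary does not name the specific techniques involved, but purely as an illustration of the contrast being drawn, the sketch below fits a classical robust estimator: a Huber-loss linear regression solved by iteratively reweighted least squares. Its modelling assumptions (a linear relationship, bounded influence of outliers) are stated explicitly, and the fitted model has one inspectable weight per input variable rather than millions of opaque parameters. The toy data, variable names, and choice of estimator are illustrative assumptions, not details taken from the project.

```python
# Illustrative only: a classical robust estimator with explicit assumptions
# and a tiny, fully inspectable parameter count. Not the project's method.
import numpy as np

def huber_irls(X, y, delta=1.0, n_iter=50):
    """Fit y ~ X @ beta with the Huber loss via iteratively reweighted least squares."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])   # explicit linear model + intercept
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]    # ordinary least-squares starting point
    for _ in range(n_iter):
        r = y - X1 @ beta                           # residuals under the current fit
        # Huber weights: residuals beyond `delta` get down-weighted (bounded influence)
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X1, sw * y, rcond=None)[0]
    return beta                                     # one weight per variable + intercept

# Toy usage: 3 explanatory variables -> a 4-parameter model, each weight inspectable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)
y[:10] += 20.0                                      # inject gross outliers
print(huber_irls(X, y, delta=1.0))
```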

We will evaluate and demonstrate the approach on a question/answer application using Equinor's Volve Data repository, which contains over 3 TB of oil field reports.
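
For concreteness, here is a minimal, fully inspectable retrieval baseline for this kind of question/answer task, using TF-IDF term weighting and cosine similarity. The passages, the question, and the use of TF-IDF are assumptions made purely for this sketch; they are not the project's actual pipeline, and the text is not taken from the Volve repository.

```python
# Illustrative only: a simple, interpretable retrieval-style Q&A baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in "report" passages; in practice these would be extracted from the corpus.
passages = [
    "Daily drilling report: 12-1/4 inch section drilled to 2,400 m without incident.",
    "Production test summary: well flowed at 3,100 barrels of oil per day on a 32/64 choke.",
    "Completion report: lower zone perforated and gravel pack installed successfully.",
]

vectorizer = TfidfVectorizer(stop_words="english")
passage_vectors = vectorizer.fit_transform(passages)   # sparse TF-IDF term weights

def answer(question, top_k=1):
    """Return the top_k passages ranked by cosine similarity to the question."""
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, passage_vectors).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(passages[i], float(scores[i])) for i in ranked]

print(answer("What was the oil flow rate during the production test?"))
```

Because every term weight is explicit, the ranking of each passage can be traced back to the individual words that produced it; the project aims to demonstrate this kind of transparency at scale.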

The goals of this project are to:
1. Show that the approach achieves comparable results to state-of-the-art DNN implementations, especially with ‘smaller’ data sets.
2. Provide an example data-driven, graphical explanation system that helps implementors understand the operation and limitations of the approach.
3. Show that the above increases developers' and investors' confidence in these techniques.

Type of Project: P03 - Research Councils
Status: Project Complete
Funder(s): Engineering and Physical Sciences Research Council
Value: £49,088.00
Project Dates: Mar 1, 2020 - Aug 13, 2021