FMEA-AI: AI fairness impact assessment using failure mode and effects analysis


Li, Jamy; Chignell, Mark

Abstract

Recently, there has been a growing demand to address failures in the fairness of artificial intelligence (AI) systems. Current techniques for improving fairness in AI systems are focused on broad changes to the norms, procedures and algorithms used by companies that implement those systems. However, some organizations may require detailed methods to identify which user groups are disproportionately impacted by failures in specific components of their systems. Failure mode and effects analysis (FMEA) is a popular safety engineering method and is proposed here as a vehicle to support the conducting of “AI fairness impact assessments” in organizations. An extension to FMEA called “FMEA-AI” is proposed as a modification to a familiar tool for engineers and manufacturers that can integrate moral sensitivity and ethical considerations into a company’s existing design process. Whereas current impact assessments focus on helping regulators identify an aggregate risk level for an entire AI system, FMEA-AI helps companies identify safety and fairness risk in multiple failure modes of an AI system. It also explicitly identifies user groups and considers an objective definition of fairness as proportional satisfaction of claims in calculating likelihood and severity of fairness-related failures. This proposed method can help industry analysts adapt a widely known safety engineering method to incorporate AI fairness considerations, promote moral sensitivity and overcome resistance to change.
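As an illustration only (not the paper's actual procedure), classic FMEA scores each failure mode with a Risk Priority Number, RPN = severity × occurrence × detection, and the abstract describes FMEA-AI as extending such scoring per user group, with fairness understood as proportional satisfaction of claims. The sketch below combines these two standard ideas; the class names, score ranges, and the `proportional_shortfall` helper are assumptions for demonstration, not the authors' definitions.

```python
# Sketch of FMEA-style scoring per user group, plus a simple measure of how
# far each group's satisfaction deviates from the share its claim strength
# would warrant under proportional satisfaction of claims.
from dataclasses import dataclass


@dataclass
class FailureMode:
    component: str
    user_group: str   # FMEA-AI explicitly identifies affected user groups
    severity: int     # 1-10 (assumed classic FMEA scale)
    occurrence: int   # 1-10
    detection: int    # 1-10 (10 = hardest to detect)

    def rpn(self) -> int:
        """Classic FMEA Risk Priority Number."""
        return self.severity * self.occurrence * self.detection


def proportional_shortfall(satisfaction: dict, claims: dict) -> dict:
    """Each group's warranted share (by claim strength) minus its actual
    share of satisfaction; positive values flag underserved groups."""
    total_claims = sum(claims.values())
    total_sat = sum(satisfaction.values())
    return {
        g: claims[g] / total_claims - satisfaction[g] / total_sat
        for g in claims
    }


fm = FailureMode("face matcher", "group B", severity=8, occurrence=6, detection=7)
print(fm.rpn())  # 336

# Equal claims, unequal satisfaction: group B falls short of its share.
print(proportional_shortfall({"A": 0.9, "B": 0.6}, {"A": 1, "B": 1}))
```

A fairness-aware analysis in this spirit would rank failure modes by group-level RPN rather than a single aggregate score, surfacing components whose failures disproportionately affect particular groups.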

Citation

Li, J., & Chignell, M. (2022). FMEA-AI: AI fairness impact assessment using failure mode and effects analysis. AI and Ethics, 2(4), 837-850. https://doi.org/10.1007/s43681-022-00145-9

Journal Article Type: Article
Acceptance Date: Feb 21, 2022
Online Publication Date: Mar 7, 2022
Publication Date: 2022-11
Deposit Date: Oct 22, 2024
Journal: AI and Ethics
Electronic ISSN: 2730-5961
Publisher: Springer
Peer Reviewed: Yes
Volume: 2
Issue: 4
Pages: 837-850
DOI: https://doi.org/10.1007/s43681-022-00145-9
Keywords: Failure mode and effects analysis, Risk analysis, Human rights impact assessments, Fair AI, Proportional satisfaction of claims