FMEA-AI: AI fairness impact assessment using failure mode and effects analysis
Li, Jamy; Chignell, Mark
Authors
Dr Jamy Li J.Li3@napier.ac.uk, Associate Professor
Mark Chignell
Abstract
Recently, there has been a growing demand to address failures in the fairness of artificial intelligence (AI) systems. Current techniques for improving fairness in AI systems focus on broad changes to the norms, procedures and algorithms used by companies that implement those systems. However, some organizations may require detailed methods to identify which user groups are disproportionately impacted by failures in specific components of their systems. Failure mode and effects analysis (FMEA) is a popular safety engineering method and is proposed here as a vehicle to support the conduct of “AI fairness impact assessments” in organizations. An extension to FMEA called “FMEA-AI” is proposed as a modification to a tool familiar to engineers and manufacturers, one that can integrate moral sensitivity and ethical considerations into a company’s existing design process. Whereas current impact assessments focus on helping regulators identify an aggregate risk level for an entire AI system, FMEA-AI helps companies identify safety and fairness risks across multiple failure modes of an AI system. It also explicitly identifies user groups and considers an objective definition of fairness, as proportional satisfaction of claims, in calculating the likelihood and severity of fairness-related failures. This proposed method can help industry analysts adapt a widely known safety engineering method to incorporate AI fairness considerations, promote moral sensitivity and overcome resistance to change.
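To make the idea concrete, the following is a minimal sketch of a classic FMEA risk priority number (RPN) calculation extended with a per-user-group fairness check, in the spirit the abstract describes. All field names, rating scales, group labels, numbers, and the disparity measure below are illustrative assumptions, not the worksheet or fairness formula defined in the paper.

```python
# Sketch: standard FMEA RPN plus a crude per-group fairness disparity.
# Everything here (scales, groups, rates) is hypothetical.
from dataclasses import dataclass, field

@dataclass
class FailureMode:
    component: str
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (certain to detect) .. 10 (undetectable)
    # estimated failure rate experienced by each user group
    group_failure_rates: dict = field(default_factory=dict)

    def rpn(self) -> int:
        # Standard FMEA risk priority number for this failure mode.
        return self.severity * self.occurrence * self.detection

def fairness_disparity(fm: FailureMode) -> float:
    # One crude reading of "proportional satisfaction of claims":
    # the spread between the worst- and best-served groups' failure
    # rates. Zero means every group is affected equally.
    rates = fm.group_failure_rates.values()
    return max(rates) - min(rates)

fm = FailureMode(
    component="speech recognizer",
    description="misrecognizes voice commands",
    severity=6, occurrence=4, detection=3,
    group_failure_rates={"younger adults": 0.05, "older adults": 0.20},
)
print(fm.rpn())                            # 72
print(round(fairness_disparity(fm), 2))    # large gap flags a fairness risk
```

A real FMEA-AI worksheet would, per the abstract, fold such group-level estimates into the likelihood and severity ratings themselves rather than report them as a side metric; this sketch only shows how a per-failure-mode, per-group view differs from a single aggregate risk level.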
Citation
Li, J., & Chignell, M. (2022). FMEA-AI: AI fairness impact assessment using failure mode and effects analysis. AI and Ethics, 2(4), 837-850. https://doi.org/10.1007/s43681-022-00145-9
| Journal Article Type | Article |
|---|---|
| Acceptance Date | Feb 21, 2022 |
| Online Publication Date | Mar 7, 2022 |
| Publication Date | 2022-11 |
| Deposit Date | Oct 22, 2024 |
| Journal | AI and Ethics |
| Electronic ISSN | 2730-5961 |
| Publisher | Springer |
| Peer Reviewed | Peer Reviewed |
| Volume | 2 |
| Issue | 4 |
| Pages | 837-850 |
| DOI | https://doi.org/10.1007/s43681-022-00145-9 |
| Keywords | Failure mode and effects analysis, Risk analysis, Human rights impact assessments, Fair AI, Proportional satisfaction of claims |