It's Common Sense, isn't it? Demystifying Human Evaluations in Commonsense-enhanced NLG systems
Mahamood, Saad; Clinciu, Miruna; Gkatzia, Dimitra
Abstract
Common sense is an integral part of human cognition which allows us to make sound decisions, communicate effectively with others, and interpret situations and utterances. Endowing AI systems with commonsense knowledge capabilities will help us get closer to creating systems that exhibit human intelligence. Recent efforts in Natural Language Generation (NLG) have focused on incorporating commonsense knowledge through large-scale pre-trained language models or by incorporating external knowledge bases. Such systems exhibit reasoning capabilities without common sense being explicitly encoded in the training set. These systems require careful evaluation, as they incorporate additional resources during training, which introduces additional sources of error. Additionally, human evaluation of such systems can vary significantly, making it impossible to compare different systems and define baselines. This paper aims to demystify human evaluations of commonsense-enhanced NLG systems by proposing the Commonsense Evaluation Card (CEC), a set of recommendations for evaluation reporting of commonsense-enhanced NLG systems, underpinned by an extensive analysis of human evaluations reported in the recent literature.
Citation
Mahamood, S., Clinciu, M., & Gkatzia, D. (2021, April). It's Common Sense, isn't it? Demystifying Human Evaluations in Commonsense-enhanced NLG systems. Presented at Workshop on Human Evaluation of NLP Systems (HumEval at EACL 2021), Kyiv, Ukraine (online)
| Field | Value |
| --- | --- |
| Presentation Conference Type | Conference Paper (published) |
| Conference Name | Workshop on Human Evaluation of NLP Systems (HumEval at EACL 2021) |
| Start Date | Apr 19, 2021 |
| End Date | Apr 19, 2021 |
| Acceptance Date | Mar 22, 2021 |
| Publication Date | 2021-04 |
| Deposit Date | Apr 9, 2021 |
| Publicly Available Date | Apr 9, 2021 |
| Book Title | Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval) |
| Public URL | http://researchrepository.napier.ac.uk/Output/2760100 |
| Publisher URL | https://aclanthology.org/2021.humeval-1.1 |
Files
It’s Common Sense, isn’t it? Demystifying Human Evaluations in Commonsense-enhanced NLG systems (accepted version) — PDF, 193 KB