
Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge

Authors

Watson, Lewis; Gkatzia, Dimitra

Abstract

Human evaluation is crucial for NLG systems as it provides a reliable assessment of the quality, effectiveness, and utility of generated language outputs. However, concerns about the reproducibility of such evaluations have emerged, casting doubt on the reliability and generalisability of reported results. In this paper, we present the findings of a reproducibility study on a data-to-text system, conducted under two conditions: (1) replicating the original setup as closely as possible, with evaluators recruited from Amazon Mechanical Turk (AMT), and (2) replicating the original human evaluation, but utilising evaluators with a background in academia. Our experiments show that there is a loss of statistical significance between the original and reproduction studies, i.e. the human evaluation results are not reproducible. In addition, we found that employing local participants led to more robust results. Finally, we discuss lessons learned, addressing the challenges and best practices for ensuring reproducibility in NLG human evaluations.

Conference Type: Conference Paper (Published)
Conference Name: 3rd Workshop on Human Evaluation of NLP Systems (HumEval)
Publication Date: 2023
Deposit Date: Feb 6, 2024
Publicly Available Date: Feb 6, 2024
Publisher: Association for Computational Linguistics (ACL)
Pages: 69-74
Book Title: Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems
Public URL: http://researchrepository.napier.ac.uk/Output/3496961
Publisher URL: https://aclanthology.org/2023.humeval-1.6/
