Research Repository

CiViL: Common-sense- and Visual-enhanced natural Language generation

Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge (2023)
Presentation / Conference Contribution
Watson, L., & Gkatzia, D. (2023, September). Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge. Presented at the 3rd Workshop on Human Evaluation of NLP Systems (HumEval), Varna, Bulgaria.

Human evaluation is crucial for NLG systems, as it provides a reliable assessment of the quality, effectiveness, and utility of generated language outputs. However, concerns about the reproducibility of such evaluations have emerged, casting doubt on...