
Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge (2023)
Conference Proceeding
Watson, L., & Gkatzia, D. (2023). Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge. In Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems (pp. 69-74).

Human evaluation is crucial for NLG systems as it provides a reliable assessment of the quality, effectiveness, and utility of generated language outputs. However, concerns about the reproducibility of such evaluations have emerged, casting doubt on...