Exploring the impact of data representation on neural data-to-text generation
(2024)
Presentation / Conference Contribution
Howcroft, D. M., Watson, L. N., Nedopas, O., & Gkatzia, D. (2024, September). Exploring the impact of data representation on neural data-to-text generation. Poster presented at INLG 2024, Tokyo, Japan.
Reproducing Human Evaluation of Meaning Preservation in Paraphrase Generation (2024)
Presentation / Conference Contribution
Watson, L. N., & Gkatzia, D. (2024, May). Reproducing Human Evaluation of Meaning Preservation in Paraphrase Generation. Presented at HumEval2024 at LREC-COLING 2024, Turin, Italy.

Reproducibility is a cornerstone of scientific research, ensuring the reliability and generalisability of findings. The ReproNLP Shared Task on Reproducibility of Evaluations in NLP aims to assess the reproducibility of human evaluation studies. This...
Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge (2023)
Presentation / Conference Contribution
Watson, L., & Gkatzia, D. (2023). Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge. In Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems (pp. 69-74).

Human evaluation is crucial for NLG systems as it provides a reliable assessment of the quality, effectiveness, and utility of generated language outputs. However, concerns about the reproducibility of such evaluations have emerged, casting doubt on...