
CiViL: Common-sense- and Visual-enhanced natural Language generation

TaskMaster: A Novel Cross-platform Task-based Spoken Dialogue System for Human-Robot Interaction (2023)
Presentation / Conference Contribution
Strathearn, C., Yu, Y., & Gkatzia, D. (2023, March). TaskMaster: A Novel Cross-platform Task-based Spoken Dialogue System for Human-Robot Interaction. Presented at HRCI23, Stockholm, Sweden.

The most effective way of communication between humans and robots is through natural language communication. However, there are many challenges to overcome before robots can effectively converse in order to collaborate and work together with humans...

Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge (2023)
Presentation / Conference Contribution
Watson, L., & Gkatzia, D. (2023, September). Unveiling NLG Human-Evaluation Reproducibility: Lessons Learned and Key Insights from Participating in the ReproNLP Challenge. Presented at the 3rd Workshop on Human Evaluation of NLP Systems (HumEval), Varna, Bulgaria.

Human evaluation is crucial for NLG systems as it provides a reliable assessment of the quality, effectiveness, and utility of generated language outputs. However, concerns about the reproducibility of such evaluations have emerged, casting doubt on...

Barriers and enabling factors for error analysis in NLG research (2023)
Journal Article
Van Miltenburg, E., Clinciu, M., Dušek, O., Gkatzia, D., Inglis, S., Leppänen, L., Mahamood, S., Schoch, S., Thomson, C., & Wen, L. (2023). Barriers and enabling factors for error analysis in NLG research. Northern European Journal of Language Technology, 9(1). https://doi.org/10.3384/nejlt.2000-1533.2023.4529

Earlier research has shown that few studies in Natural Language Generation (NLG) evaluate their system outputs using an error analysis, despite known limitations of automatic evaluation metrics and human ratings. This position paper takes the stance...