Research Repository

Outputs (32)

From documents to dialogue: Context matters in common sense-enhanced task-based dialogue grounded in documents (2025)
Journal Article
Strathearn, C., Gkatzia, D., & Yu, Y. (2025). From documents to dialogue: Context matters in common sense-enhanced task-based dialogue grounded in documents. Expert Systems with Applications, 279, Article 127304. https://doi.org/10.1016/j.eswa.2025.127304

Humans can engage in a conversation to collaborate on multi-step tasks and divert briefly to complete essential sub-tasks, such as asking for confirmation or clarification, before resuming the overall task. This communication is necessary as some kno...

How Much do Robots Understand Rudeness? Challenges in Human-Robot Interaction (2024)
Presentation / Conference Contribution
Orme, M., Yu, Y., & Tan, Z. (2024, May). How Much do Robots Understand Rudeness? Challenges in Human-Robot Interaction. Presented at The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy

This paper concerns the pressing need to understand and manage inappropriate language within the evolving human-robot interaction (HRI) landscape. As intelligent systems and robots transition from controlled laboratory settings to everyday households...

TaskMaster: A Novel Cross-platform Task-based Spoken Dialogue System for Human-Robot Interaction (2023)
Presentation / Conference Contribution
Strathearn, C., Yu, Y., & Gkatzia, D. (2023, March). TaskMaster: A Novel Cross-platform Task-based Spoken Dialogue System for Human-Robot Interaction. Presented at HRCI '23, Stockholm, Sweden

Natural language is the most effective way for humans and robots to communicate. However, many challenges must be overcome before robots can converse effectively in order to collaborate and work together with humans....

MoDEsT: a Modular Dialogue Experiments and Demonstration Toolkit (2023)
Presentation / Conference Contribution
Yu, Y., & Oduronbi, D. (2023, July). MoDEsT: a Modular Dialogue Experiments and Demonstration Toolkit. Presented at CUI '23: ACM conference on Conversational User Interfaces, Eindhoven, Netherlands

We present a modular dialogue experiments and demonstration toolkit (MoDEsT) that assists researchers in planning tailored conversational AI-related studies. The platform can: 1) assist users in picking multiple templates based on specific task needs...

The CRECIL Corpus: a New Dataset for Extraction of Relations between Characters in Chinese Multi-party Dialogues (2022)
Presentation / Conference Contribution
Jiang, Y., Xu, Y., Zhan, Y., He, W., Wang, Y., Xi, Z., Wang, M., Li, X., Li, Y., & Yu, Y. (2022, June). The CRECIL Corpus: a New Dataset for Extraction of Relations between Characters in Chinese Multi-party Dialogues. Presented at Thirteenth Language Resources and Evaluation Conference, Marseille, France

We describe a new freely available Chinese multi-party dialogue dataset for automatic extraction of dialogue-based character relationships. The data has been extracted from the original TV scripts of a Chinese sitcom called “I Love My Home” with comp...

A Visually-Aware Conversational Robot Receptionist (2022)
Presentation / Conference Contribution
Gunson, N., Garcia, D. H., Sieińska, W., Addlesee, A., Dondrup, C., Lemon, O., Part, J. L., & Yu, Y. (2022, September). A Visually-Aware Conversational Robot Receptionist. Presented at 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Edinburgh

Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts including healthcare, but most existing systems have very limited interactive capabilities. We will demonstrate a robot receptionist t...

Combining Visual and Social Dialogue for Human-Robot Interaction (2021)
Presentation / Conference Contribution
Gunson, N., Hernandez Garcia, D., Part, J. L., Yu, Y., Sieińska, W., Dondrup, C., & Lemon, O. (2021, October). Combining Visual and Social Dialogue for Human-Robot Interaction. Presented at 2021 International Conference on Multimodal Interaction, Montréal, QC, Canada

We will demonstrate a prototype multimodal conversational AI system that will act as a receptionist in a hospital waiting room, combining visually-grounded dialogue with social conversation. The system supports visual object conversation in the waiti...

Coronabot: A Conversational AI System for Tackling Misinformation (2021)
Presentation / Conference Contribution
Gunson, N., Sieińska, W., Yu, Y., Hernandez Garcia, D., Part, J. L., Dondrup, C., & Lemon, O. (2021, September). Coronabot: A Conversational AI System for Tackling Misinformation. Presented at Conference on Information Technology for Social Good, Rome, Italy

COVID-19 has brought with it an onslaught of information for the public, some true and some false, across virtually every platform. For an individual, the task of sifting through the deluge for reliable, accurate facts is significant and potentially...

Towards Visual Dialogue for Human-Robot Interaction (2021)
Presentation / Conference Contribution
Part, J. L., Hernández García, D., Yu, Y., Gunson, N., Dondrup, C., & Lemon, O. (2021, March). Towards Visual Dialogue for Human-Robot Interaction. Presented at HRI '21: ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA

The goal of the EU H2020-ICT funded SPRING project is to develop a socially pertinent robot to carry out tasks in a gerontological healthcare unit. In this context, being able to perceive its environment and have coherent and relevant conversations a...

A Comprehensive Evaluation of Incremental Speech Recognition and Diarization for Conversational AI (2020)
Presentation / Conference Contribution
Addlesee, A., Yu, Y., & Eshghi, A. (2020, December). A Comprehensive Evaluation of Incremental Speech Recognition and Diarization for Conversational AI. Presented at 28th International Conference on Computational Linguistics, Barcelona, Spain (Online)

Automatic Speech Recognition (ASR) systems are increasingly powerful and accurate, but also increasingly numerous, with several currently offered as a service (e.g. Google, IBM, and Microsoft). The most stringent standards for such sys...