Research Repository

All Outputs (31)

How Much Do Robots Understand Rudeness? Challenges in Human-Robot Interaction (2024)
Presentation / Conference Contribution
Orme, M., Yu, Y., & Tan, Z. (2024, May). How Much Do Robots Understand Rudeness? Challenges in Human-Robot Interaction. Presented at the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy

This paper concerns the pressing need to understand and manage inappropriate language within the evolving human-robot interaction (HRI) landscape. As intelligent systems and robots transition from controlled laboratory settings to everyday households...

TaskMaster: A Novel Cross-platform Task-based Spoken Dialogue System for Human-Robot Interaction (2023)
Presentation / Conference Contribution
Strathearn, C., Yu, Y., & Gkatzia, D. (2023, March). TaskMaster: A Novel Cross-platform Task-based Spoken Dialogue System for Human-Robot Interaction. Presented at HRCI23, Stockholm, Sweden

Natural language is the most effective way for humans and robots to communicate. However, many challenges remain before robots can converse well enough to collaborate and work with humans...

MoDEsT: a Modular Dialogue Experiments and Demonstration Toolkit (2023)
Presentation / Conference Contribution
Yu, Y., & Oduronbi, D. (2023, July). MoDEsT: a Modular Dialogue Experiments and Demonstration Toolkit. Presented at CUI '23: ACM Conference on Conversational User Interfaces, Eindhoven, Netherlands

We present a modular dialogue experiments and demonstration toolkit (MoDEsT) that assists researchers in planning tailored conversational AI-related studies. The platform can: 1) assist users in picking multiple templates based on specific task needs...

The CRECIL Corpus: a New Dataset for Extraction of Relations between Characters in Chinese Multi-party Dialogues (2022)
Presentation / Conference Contribution
Jiang, Y., Xu, Y., Zhan, Y., He, W., Wang, Y., Xi, Z., Wang, M., Li, X., Li, Y., & Yu, Y. (2022, June). The CRECIL Corpus: a New Dataset for Extraction of Relations between Characters in Chinese Multi-party Dialogues. Presented at Thirteenth Language Resources and Evaluation Conference, Marseille, France

We describe a new freely available Chinese multi-party dialogue dataset for automatic extraction of dialogue-based character relationships. The data has been extracted from the original TV scripts of a Chinese sitcom called “I Love My Home” with comp...

A Visually-Aware Conversational Robot Receptionist (2022)
Presentation / Conference Contribution
Gunson, N., Garcia, D. H., Sieińska, W., Addlesee, A., Dondrup, C., Lemon, O., Part, J. L., & Yu, Y. (2022, September). A Visually-Aware Conversational Robot Receptionist. Presented at 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Edinburgh

Socially Assistive Robots (SARs) have the potential to play an increasingly important role in a variety of contexts including healthcare, but most existing systems have very limited interactive capabilities. We will demonstrate a robot receptionist t...

Combining Visual and Social Dialogue for Human-Robot Interaction (2021)
Presentation / Conference Contribution
Gunson, N., Hernandez Garcia, D., Part, J. L., Yu, Y., Sieińska, W., Dondrup, C., & Lemon, O. (2021, October). Combining Visual and Social Dialogue for Human-Robot Interaction. Presented at 2021 International Conference on Multimodal Interaction, Montréal, QC, Canada

We will demonstrate a prototype multimodal conversational AI system that will act as a receptionist in a hospital waiting room, combining visually-grounded dialogue with social conversation. The system supports visual object conversation in the waiti...

Coronabot: A conversational AI system for tackling misinformation (2021)
Presentation / Conference Contribution
Gunson, N., Sieińska, W., Yu, Y., Hernandez Garcia, D., Part, J. L., Dondrup, C., & Lemon, O. (2021, September). Coronabot: A conversational AI system for tackling misinformation. Presented at Conference on Information Technology for Social Good, Roma, Italy

COVID-19 has brought with it an onslaught of information for the public, some true and some false, across virtually every platform. For an individual, the task of sifting through the deluge for reliable, accurate facts is significant and potentially...

Towards visual dialogue for human-robot interaction (2021)
Presentation / Conference Contribution
Part, J. L., Hernández García, D., Yu, Y., Gunson, N., Dondrup, C., & Lemon, O. (2021, March). Towards visual dialogue for human-robot interaction. Presented at HRI '21: ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA

The goal of the EU H2020-ICT funded SPRING project is to develop a socially pertinent robot to carry out tasks in a gerontological healthcare unit. In this context, being able to perceive its environment and have coherent and relevant conversations a...

A comprehensive evaluation of incremental speech recognition and diarization for conversational AI (2020)
Presentation / Conference Contribution
Addlesee, A., Yu, Y., & Eshghi, A. (2020, December). A comprehensive evaluation of incremental speech recognition and diarization for conversational AI. Presented at 28th International Conference on Computational Linguistics, Barcelona, Spain (Online)

Automatic Speech Recognition (ASR) systems are increasingly powerful and accurate, but also more numerous, with several options currently available as a service (e.g. Google, IBM, and Microsoft). Currently the most stringent standards for such sys...

Optimising strategies for learning visually grounded word meanings through interaction (2018)
Thesis
Yu, Y. (2018). Optimising strategies for learning visually grounded word meanings through interaction. (Thesis)

Language Grounding is a fundamental problem in AI, regarding how symbols in Natural Language (e.g. words and phrases) refer to aspects of the physical environment (e.g. objects and attributes). In this thesis, our ultimate goal is to address an inte...

Incrementally learning semantic attributes through dialogue interaction (2018)
Presentation / Conference Contribution
Vanzo, A., Part, J. L., Yu, Y., Nardi, D., & Lemon, O. (2018, July). Incrementally learning semantic attributes through dialogue interaction. Presented at AAMAS '18: 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden

Enabling a robot to properly interact with users plays a key role in the effective deployment of robotic platforms in domestic environments. Robots must be able to rely on interaction to improve their behaviour and adaptively understand their operati...

VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system) (2017)
Presentation / Conference Contribution
Yu, Y., Eshghi, A., & Lemon, O. (2017, August). VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system). Presented at 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbrücken, Germany

We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user. VOILA is: (1) able to learn new visual categories interactively from users from scratch; (2) trained on real hum...

The BURCHAK corpus: A challenge data set for interactive learning of visually grounded word meanings (2017)
Presentation / Conference Contribution
Yu, Y., Eshghi, A., Mills, G., & Lemon, O. J. (2017, April). The BURCHAK corpus: A challenge data set for interactive learning of visually grounded word meanings. Presented at The Sixth Workshop on Vision and Language, Valencia, Spain

We motivate and describe a new freely available human-human dialogue data set for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-ch...

Alana: Social dialogue using an ensemble model and a ranker trained on user feedback (2017)
Presentation / Conference Contribution
Papaioannou, I., Curry, A. C., Part, J. L., Shalyminov, I., Xu, X., Yu, Y., Dušek, O., Rieser, V., & Lemon, O. (2017, December). Alana: Social dialogue using an ensemble model and a ranker trained on user feedback. Presented at Alexa Prize SocialBot Grand Challenge 1

We describe our Alexa prize system (called ‘Alana’) which consists of an ensemble of bots, combining rule-based and machine learning systems, and using a contextual ranking mechanism to choose system responses. This paper reports on the version of th...

An ensemble model with ranking for social dialogue (2017)
Presentation / Conference Contribution
Papaioannou, I., Curry, A. C., Part, J. L., Shalyminov, I., Xu, X., Yu, Y., Dušek, O., Rieser, V., & Lemon, O. (2017, December). An ensemble model with ranking for social dialogue. Paper presented at NIPS 2017 Conversational AI Workshop, Long Beach, USA

Open-domain social dialogue is one of the long-standing goals of Artificial Intelligence. This year, the Amazon Alexa Prize challenge was announced for the first time, where real customers get to rate systems developed by leading universities worldwi...

Learning how to learn: An adaptive dialogue agent for incrementally learning visually grounded word meanings (2017)
Presentation / Conference Contribution
Yu, Y., Eshghi, A., & Lemon, O. (2017, July). Learning how to learn: An adaptive dialogue agent for incrementally learning visually grounded word meanings. Presented at First Workshop on Language Grounding for Robotics, Vancouver, Canada

We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data. Within a life-long interactive learning period, the agent, trained using Rei...

Training an adaptive dialogue policy for interactive learning of visually grounded word meanings (2016)
Presentation / Conference Contribution
Yu, Y., Eshghi, A., & Lemon, O. (2016, September). Training an adaptive dialogue policy for interactive learning of visually grounded word meanings. Presented at 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Los Angeles, USA

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor. The system integrates an incremental, semantic parsing/generation framework – Dynamic Syntax and Type Theory with Records (DS...

An Incremental Dialogue System for Learning Visually Grounded Language (demonstration system) (2016)
Presentation / Conference Contribution
Yu, Y., Eshghi, A., & Lemon, O. (2016, July). An Incremental Dialogue System for Learning Visually Grounded Language (demonstration system). Presented at 20th Workshop on the Semantics and Pragmatics of Dialogue 2016, New Brunswick, USA

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor. The system integrates an incremental, semantic, and bi-directional grammar framework – Dynamic Syntax and Type Theory with Re...