Research Repository

All Outputs (5)

VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system) (2017)
Conference Proceeding
Yu, Y., Eshghi, A., & Lemon, O. (2017). VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system). In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue (pp. 197-200).

We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user. VOILA is: (1) able to learn new visual categories interactively from users from scratch; (2) trained on real hum...

Alana: Social dialogue using an ensemble model and a ranker trained on user feedback (2017)
Conference Proceeding
Papaioannou, I., Curry, A. C., Part, J. L., Shalyminov, I., Xu, X., Yu, Y., …Lemon, O. (2017). Alana: Social dialogue using an ensemble model and a ranker trained on user feedback. In 1st Proceedings of Alexa Prize.

We describe our Alexa prize system (called ‘Alana’) which consists of an ensemble of bots, combining rule-based and machine learning systems, and using a contextual ranking mechanism to choose system responses. This paper reports on the version of th...
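The ensemble-plus-ranker architecture described in this abstract can be illustrated with a minimal sketch: several candidate bots each propose a response, and a contextual scoring function picks one. The Python below is only an illustrative outline; the names (Candidate, select_response, the toy bots, and the length-based scorer) are hypothetical stand-ins, not Alana's actual implementation, whose ranker was trained on user feedback.

```python
# Minimal sketch of an ensemble-plus-ranker response selection loop.
# All class and function names here are hypothetical illustrations,
# not the actual Alana implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    bot_name: str   # which ensemble member produced the response
    response: str   # the candidate system utterance


def select_response(
    user_utterance: str,
    dialogue_history: List[str],
    bots: List[Callable[[str, List[str]], str]],
    score: Callable[[str, List[str], Candidate], float],
) -> Candidate:
    """Collect one candidate from each bot, then return the highest-scoring one.

    `score` stands in for a contextual ranker; in Alana it was trained on
    user feedback, but here it is just an abstract scoring callable.
    """
    candidates = [
        Candidate(bot_name=bot.__name__, response=bot(user_utterance, dialogue_history))
        for bot in bots
    ]
    return max(candidates, key=lambda c: score(user_utterance, dialogue_history, c))


# Toy ensemble members: one rule-based, one placeholder for a learned bot.
def rule_based_bot(utterance: str, history: List[str]) -> str:
    if "hi" in utterance.lower():
        return "Hello! What would you like to talk about?"
    return "Tell me more."


def news_bot(utterance: str, history: List[str]) -> str:
    return "I read an interesting article about that today."


def length_prior_score(utterance: str, history: List[str], cand: Candidate) -> float:
    # Placeholder ranker: prefer longer, more contentful candidates.
    return float(len(cand.response))


if __name__ == "__main__":
    best = select_response("hi there", [], [rule_based_bot, news_bot], length_prior_score)
    print(best.bot_name, "->", best.response)
```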

The BURCHAK corpus: A challenge data set for interactive learning of visually grounded word meanings (2017)
Conference Proceeding
Yu, Y., Eshghi, A., Mills, G., & Lemon, O. J. (2017). The BURCHAK corpus: A challenge data set for interactive learning of visually grounded word meanings. In Proceedings of the Sixth Workshop on Vision and Language.

We motivate and describe a new freely available human-human dialogue data set for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-ch...

An ensemble model with ranking for social dialogue (2017)
Presentation / Conference
Papaioannou, I., Curry, A. C., Part, J. L., Shalyminov, I., Xu, X., Yu, Y., …Lemon, O. (2017, December). An ensemble model with ranking for social dialogue. Paper presented at the NIPS 2017 Conversational AI Workshop, Long Beach, CA, USA.

Open-domain social dialogue is one of the long-standing goals of Artificial Intelligence. This year, the Amazon Alexa Prize challenge was announced for the first time, where real customers get to rate systems developed by leading universities worldwi...

Learning how to learn: An adaptive dialogue agent for incrementally learning visually grounded word meanings (2017)
Conference Proceeding
Yu, Y., Eshghi, A., & Lemon, O. (2017). Learning how to learn: An adaptive dialogue agent for incrementally learning visually grounded word meanings. In Proceedings of the First Workshop on Language Grounding for Robotics.

We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data. Within a life-long interactive learning period, the agent, trained using Rei...