VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system)

Yu, Yanchao; Eshghi, Arash; Lemon, Oliver

Abstract

We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user. VOILA is: (1) able to learn new visual categories from scratch, interactively with users; (2) trained on real human-human dialogues in the same domain, and so able to conduct natural, spontaneous dialogue; (3) optimised to find the most effective trade-off between the accuracy of the visual categories it learns and the cost it incurs to users. VOILA is deployed on Furhat, a human-like, multi-modal robot head whose face is animated by back-projecting a graphical virtual character.
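The trade-off in (3) can be pictured as a scalar objective that rewards recognition accuracy while penalising the effort the dialogue demands of the user. Below is a minimal sketch of such an objective, assuming a hypothetical reward of the form accuracy minus weighted user cost; the function name, the cost normalisation, and the weight are illustrative assumptions, not the paper's actual optimisation target.

    # Hypothetical sketch of an accuracy/cost trade-off objective.
    # Not the reward function used in the VOILA paper.
    def trade_off_reward(accuracy: float, user_cost: float, weight: float = 0.5) -> float:
        """Score a dialogue policy: higher visual-category accuracy is
        better, higher (normalised) tutoring cost to the user is worse."""
        return accuracy - weight * user_cost

    # Example: 85% category accuracy at a normalised user cost of 0.4.
    print(trade_off_reward(0.85, 0.4))  # 0.65

Under this kind of objective, a policy that asks the user many clarification questions can only be preferred if the resulting accuracy gain outweighs the added cost.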

Citation

Yu, Y., Eshghi, A., & Lemon, O. (2017, August). VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system). Presented at the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbrücken, Germany.

Presentation Conference Type: Conference Paper (published)
Conference Name: 18th Annual SIGdial Meeting on Discourse and Dialogue
Start Date: Aug 15, 2017
Publication Date: 2017
Deposit Date: Jun 28, 2023
Publicly Available Date: Jun 28, 2023
Publisher: Association for Computational Linguistics (ACL)
Pages: 197-200
Book Title: Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue
Related Public URLs: https://aclanthology.org/W17-5524/
