Dr Yanchao Yu, Lecturer, Y.Yu@napier.ac.uk
VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system)
Yu, Yanchao; Eshghi, Arash; Lemon, Oliver
Abstract
We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user. VOILA is: (1) able to learn new visual categories interactively from users from scratch; (2) trained on real human-human dialogues in the same domain, and so is able to conduct natural spontaneous dialogue; (3) optimised to find the most effective trade-off between the accuracy of the visual categories it learns and the cost it incurs to users. VOILA is deployed on Furhat, a human-like, multi-modal robot head with back-projection of the face, and a graphical virtual character.
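The abstract's point (3) describes optimising a trade-off between the accuracy of the learned visual categories and the cost the dialogue imposes on the user. As a minimal illustrative sketch only (the objective, the policy names, and all numbers below are assumptions for illustration, not the paper's actual formulation), such a trade-off can be expressed as a scalar score that rewards accuracy and penalises user effort:

```python
# Hypothetical sketch of an accuracy/cost trade-off objective for
# comparing dialogue policies. All names and numbers are illustrative
# assumptions, not VOILA's actual formulation.

def tradeoff_score(accuracy: float, user_cost: float,
                   cost_weight: float = 0.5) -> float:
    """Score a policy: reward classification accuracy gained,
    penalise the tutoring effort (questions, turns) asked of the user."""
    return accuracy - cost_weight * user_cost


def best_policy(policies: dict, cost_weight: float = 0.5) -> str:
    """Pick the policy name with the highest trade-off score.
    `policies` maps name -> (accuracy, normalised user cost)."""
    return max(policies,
               key=lambda name: tradeoff_score(*policies[name], cost_weight))


# Example: an "always ask" policy learns more but burdens the user more.
policies = {
    "always_ask":   (0.92, 1.0),  # high accuracy, high user effort
    "ask_if_unsure": (0.88, 0.4),  # slightly lower accuracy, much cheaper
    "never_ask":    (0.55, 0.0),  # no user effort, weak categories
}
```

Under this toy objective, `best_policy(policies)` would favour the middle ground (`"ask_if_unsure"`), mirroring the idea that the optimised agent asks for the user's help only when the expected accuracy gain justifies the cost.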
Citation
Yu, Y., Eshghi, A., & Lemon, O. (2017). VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system). In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue (pp. 197-200). Association for Computational Linguistics.
| Conference Name | 18th Annual SIGdial Meeting on Discourse and Dialogue |
|---|---|
| Conference Location | Saarbrücken, Germany |
| Start Date | Aug 15, 2017 |
| Publication Date | 2017 |
| Deposit Date | Jun 28, 2023 |
| Publicly Available Date | Jun 28, 2023 |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 197-200 |
| Book Title | Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue |
| Related Public URLs | https://aclanthology.org/W17-5524/ |
Files
VOILA: An optimised dialogue system for interactively learning visually-grounded word meanings (demonstration system) (PDF, 2 MB)
Publisher Licence URL: http://creativecommons.org/licenses/by/4.0/
You might also like
TaskMaster: A Novel Cross-platform Task-based Spoken Dialogue System for Human-Robot Interaction (2023, Conference Proceeding)
MoDEsT: a Modular Dialogue Experiments and Demonstration Toolkit (2023, Conference Proceeding)
A Visually-Aware Conversational Robot Receptionist (2022, Conference Proceeding)
The CRECIL Corpus: a New Dataset for Extraction of Relations between Characters in Chinese Multi-party Dialogues (2022, Conference Proceeding)
Combining Visual and Social Dialogue for Human-Robot Interaction (2021, Conference Proceeding)