Dr Yanchao Yu Y.Yu@napier.ac.uk
Lecturer
We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor. The system integrates an incremental semantic parsing/generation framework, Dynamic Syntax and Type Theory with Records (DS-TTR), with a set of visual classifiers that are learned throughout the interaction and which ground the meaning representations it produces. We use this system in interaction with a simulated human tutor to study the effects of different dialogue policies and capabilities on the accuracy of learned meanings, on learning rates, and on the effort/cost to the tutor. We show that the overall performance of the learning agent is affected by (1) who takes initiative in the dialogues; (2) the agent's ability to express and use its confidence level about visual attributes; and (3) its ability to process elliptical and incrementally constructed dialogue turns. Ultimately, we train an adaptive dialogue policy which optimises the trade-off between classifier accuracy and tutoring cost.
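The adaptive policy described above optimises a trade-off between classifier accuracy and tutoring cost. As a minimal illustrative sketch only (the function name, weighting scheme, and normalisation are assumptions, not taken from the paper), such an objective can be expressed as a weighted difference of the two quantities:

```python
def policy_objective(accuracy: float, tutor_cost: float, alpha: float = 0.5) -> float:
    """Illustrative scalar objective trading off classifier accuracy
    against cumulative tutoring cost. Both inputs are assumed to be
    normalised to [0, 1]; alpha weights accuracy against cost.
    This is a sketch of the general idea, not the paper's actual reward."""
    return alpha * accuracy - (1 - alpha) * tutor_cost

# A policy reaching 0.9 accuracy at 0.3 normalised tutor cost:
score = policy_objective(0.9, 0.3)  # 0.5*0.9 - 0.5*0.3 = 0.3
```

Varying `alpha` shifts the learned behaviour between asking the tutor more questions (higher accuracy, higher cost) and learning more autonomously (lower cost, possibly lower accuracy).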
Yu, Y., Eshghi, A., & Lemon, O. (2016, September). Training an adaptive dialogue policy for interactive learning of visually grounded word meanings. Presented at 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Los Angeles, US
| Presentation Conference Type | Conference Paper (published) |
| --- | --- |
| Conference Name | 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue |
| Start Date | Sep 13, 2016 |
| End Date | Sep 15, 2016 |
| Publication Date | 2016 |
| Deposit Date | Jun 28, 2023 |
| Publicly Available Date | Jun 28, 2023 |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 339-349 |
| Book Title | Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue |
| DOI | https://doi.org/10.18653/v1/W16-3643 |
| Publisher URL | https://doi.org/10.18653/v1/W16-3643 |
File: Training an adaptive dialogue policy for interactive learning of visually grounded word meanings (PDF, 2.2 MB)
Publisher Licence URL: http://creativecommons.org/licenses/by/4.0/
Related outputs:
- The PARLANCE mobile application for interactive search in English and Mandarin (2014) — Presentation / Conference Contribution
- Comparing attribute classifiers for interactive language grounding (2015) — Presentation / Conference Contribution
- Interactively learning visually grounded word meanings from a human tutor (2016) — Presentation / Conference Contribution
- A comprehensive evaluation of incremental speech recognition and diarization for conversational AI (2020) — Presentation / Conference Contribution
- Incremental Generation of Visually Grounded Language in Situated Dialogue (demonstration system) (2016) — Presentation / Conference Contribution