
Training an adaptive dialogue policy for interactive learning of visually grounded word meanings

Yu, Yanchao; Eshghi, Arash; Lemon, Oliver


Abstract

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor. The system integrates an incremental, semantic parsing/generation framework - Dynamic Syntax and Type Theory with Records (DS-TTR) - with a set of visual classifiers that are learned throughout the interaction and which ground the meaning representations that it produces. We use this system in interaction with a simulated human tutor to study the effects of different dialogue policies and capabilities on accuracy of learned meanings, learning rates, and efforts/costs to the tutor. We show that the overall performance of the learning agent is affected by (1) who takes initiative in the dialogues; (2) the ability to express/use their confidence level about visual attributes; and (3) the ability to process elliptical and incrementally constructed dialogue turns. Ultimately, we train an adaptive dialogue policy which optimises the trade-off between classifier accuracy and tutoring costs.
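The abstract's final point, optimising the trade-off between classifier accuracy and tutoring cost, can be pictured as a scalar reward signal for the dialogue policy. The sketch below is a minimal illustration only, not the paper's method: the function name, the linear combination, and the cost_weight parameter are assumptions made for exposition.

```python
# Minimal sketch (not the authors' implementation): a hypothetical reward
# that trades off gains in visual-classifier accuracy against the effort
# the (simulated) tutor spends in a dialogue turn.

def dialogue_reward(accuracy_gain: float,
                    tutor_cost: float,
                    cost_weight: float = 0.01) -> float:
    """Return a scalar reward: accuracy improvement minus weighted tutor effort.

    accuracy_gain -- change in classifier accuracy after the turn
    tutor_cost    -- tutor effort in the turn, e.g. number of corrections
                     or confirmations (hypothetical measure)
    cost_weight   -- trade-off parameter; value here is illustrative only
    """
    return accuracy_gain - cost_weight * tutor_cost


if __name__ == "__main__":
    # Example with made-up numbers: a turn that improves accuracy by 0.04
    # at the cost of two tutor interventions.
    print(dialogue_reward(accuracy_gain=0.04, tutor_cost=2))
```

A policy trained to maximise such a cumulative reward would learn when asking the tutor is worth the cost, which is the trade-off the abstract refers to; the actual reward formulation used in the paper may differ.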

Presentation Conference Type: Conference Paper (Published)
Conference Name: 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Start Date: Sep 13, 2016
End Date: Sep 15, 2016
Publication Date: 2016
Deposit Date: Jun 28, 2023
Publicly Available Date: Jun 28, 2023
Publisher: Association for Computational Linguistics (ACL)
Pages: 339-349
Book Title: Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue
DOI: https://doi.org/10.18653/v1/w16-3643
Publisher URL: https://doi.org/10.18653/v1/w16-3643
