
Learning how to learn: An adaptive dialogue agent for incrementally learning visually grounded word meanings

Yu, Yanchao; Eshghi, Arash; Lemon, Oliver

Abstract

We present an optimised multi-modal dialogue agent for the interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data. Over a life-long interactive learning period, the agent, trained using Reinforcement Learning (RL), must handle natural conversations with human users and achieve good learning performance (i.e. accuracy) while minimising human effort in the learning process. We train and evaluate this system in interaction with a simulated human tutor built on the BURCHAK corpus, a human-human dialogue dataset for the visual learning task. The results show that: 1) the learned policy can coherently interact with the simulated user to achieve the goal of the task (i.e. learning visual attributes of objects, such as colour and shape); and 2) it finds a better trade-off between classifier accuracy and tutoring costs than hand-crafted rule-based policies, including dynamic ones.
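
To make the accuracy-versus-cost trade-off concrete, the sketch below shows one way an RL reward could combine learning progress with a penalty for tutor effort, using a toy bandit-style update. Everything here is an illustrative assumption, not the authors' implementation: the action set, the per-action costs, the cost_weight parameter, and the simulated accuracy gains are all hypothetical placeholders.

```python
# A minimal, hypothetical sketch of trading classifier accuracy against
# tutoring cost in an interactive learning loop. All names, weights, and
# the toy environment are illustrative assumptions, not the paper's system.
import random

ACTIONS = ["ask_question", "confirm_label", "accept_label"]

# Hypothetical per-action tutoring costs (proxy for human effort).
COSTS = {"ask_question": 2.0, "confirm_label": 1.0, "accept_label": 0.5}


def reward(accuracy_gain: float, action: str, cost_weight: float = 0.5) -> float:
    """Reward learning progress, penalise the effort asked of the tutor."""
    return accuracy_gain - cost_weight * COSTS[action]


def run_episode(q_table, epsilon=0.1, alpha=0.2, turns=10):
    """One toy tutoring dialogue: the agent picks dialogue actions and the
    simulated classifier's accuracy drifts upward with tutor feedback."""
    accuracy = 0.5
    for _ in range(turns):
        # Epsilon-greedy action selection over a single (stateless) state.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=q_table.get)
        # Assume costlier (more informative) actions yield larger gains.
        gain = random.uniform(0.0, 0.1) * COSTS[action]
        accuracy = min(1.0, accuracy + gain)
        # One-step (bandit-style) value update from the combined reward.
        q_table[action] += alpha * (reward(gain, action) - q_table[action])
    return accuracy


if __name__ == "__main__":
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(500):
        run_episode(q)
    print("Learned action values:", q)
```

Under these assumptions, the agent settles on whichever dialogue actions buy the most accuracy per unit of tutor effort, which is the kind of balance the learned policy is evaluated on against the hand-crafted baselines.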

Presentation Conference Type: Conference Paper (Published)
Conference Name: First Workshop on Language Grounding for Robotics
Start Date: Jul 30, 2017
End Date: Aug 4, 2017
Publication Date: 2017-08
Deposit Date: Jun 28, 2023
Publicly Available Date: Jun 28, 2023
Publisher: Association for Computational Linguistics (ACL)
Book Title: Proceedings of the First Workshop on Language Grounding for Robotics
Publisher URL: https://aclanthology.org/W17-2802/
