Comparing attribute classifiers for interactive language grounding

Yu, Yanchao; Eshghi, Arash; Lemon, Oliver

Authors
Dr Yanchao Yu (Lecturer), Y.Yu@napier.ac.uk
Arash Eshghi
Oliver Lemon
Abstract
We address the problem of interactively learning perceptually grounded word meanings in a multimodal dialogue system. We design semantic and visual processing systems to support this and illustrate how they can be integrated. We then focus on comparing the performance (Precision, Recall, F1, AUC) of three state-of-the-art attribute classifiers (MLKNN, DAP, and SVMs) for the purpose of interactive language grounding, on the aPascal-aYahoo datasets. In prior work, results were presented for object classification using these methods for attribute labelling, whereas we focus on their performance for attribute labelling itself. We find that while these methods can perform well for some of the attributes (e.g. head, ears, furry), none of these models has good performance over the whole attribute set, and none supports incremental learning. This leads us to suggest directions for future work.
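The four metrics reported above can be computed per attribute, treating each attribute (e.g. "furry") as a binary labelling task. A minimal sketch in plain Python, using hypothetical gold labels and classifier confidence scores (not data from the paper):

```python
# Illustrative sketch: per-attribute evaluation with the four metrics the
# paper reports. Labels and scores below are invented for demonstration.

def precision_recall_f1(y_true, y_pred):
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def auc(y_true, scores):
    # Rank-based AUC: the probability that a randomly chosen positive
    # example is scored above a randomly chosen negative one.
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical gold labels for one attribute, and classifier scores.
y_true = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.3, 0.8, 0.7, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5

p, r, f = precision_recall_f1(y_true, y_pred)
print(p, r, f, auc(y_true, scores))
```

In a multi-label setting such as attribute labelling, these per-attribute scores would then be aggregated (e.g. macro-averaged) across the attribute set, which is how a model can score well on individual attributes like "head" or "ears" while still performing poorly overall.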
Citation
Yu, Y., Eshghi, A., & Lemon, O. (2015). Comparing attribute classifiers for interactive language grounding. In Proceedings of the Fourth Workshop on Vision and Language (pp. 60-69). Association for Computational Linguistics.
| Field | Value |
| --- | --- |
| Presentation Conference Type | Conference Paper (Published) |
| Conference Name | Fourth Workshop on Vision and Language |
| Start Date | Sep 18, 2015 |
| Publication Date | Sep 2015 |
| Deposit Date | Jun 28, 2023 |
| Publicly Available Date | Jun 28, 2023 |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 60-69 |
| Book Title | Proceedings of the Fourth Workshop on Vision and Language |
| Publisher URL | https://aclanthology.org/W15-2811/ |
Files
Comparing attribute classifiers for interactive language grounding (PDF, 686 Kb)
Publisher Licence URL: https://creativecommons.org/licenses/by-nc-sa/3.0/
You might also like
How Much do Robots Understand Rudeness? Challenges in Human-Robot Interaction
(2024)
Presentation / Conference Contribution
TaskMaster: A Novel Cross-platform Task-based Spoken Dialogue System for Human-Robot Interaction
(2023)
Presentation / Conference Contribution
MoDEsT: a Modular Dialogue Experiments and Demonstration Toolkit
(2023)
Presentation / Conference Contribution
A Visually-Aware Conversational Robot Receptionist
(2022)
Presentation / Conference Contribution
The CRECIL Corpus: a New Dataset for Extraction of Relations between Characters in Chinese Multi-party Dialogues
(2022)
Presentation / Conference Contribution