
Comparing attribute classifiers for interactive language grounding

Yu, Yanchao; Eshghi, Arash; Lemon, Oliver


Abstract

We address the problem of interactively learning perceptually grounded word meanings in a multimodal dialogue system. We design semantic and visual processing components to support this and illustrate how they can be integrated. We then focus on comparing the performance (Precision, Recall, F1, AUC) of three state-of-the-art attribute classifiers (MLKNN, DAP, and SVMs) for the purpose of interactive language grounding, on the aPascal-aYahoo datasets. In prior work, results were presented for object classification using these methods for attribute labelling, whereas we focus on their performance for attribute labelling itself. We find that while these methods can perform well for some attributes (e.g. head, ears, furry), none of the models performs well over the whole attribute set, and none supports incremental learning. This leads us to suggest directions for future work.
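As an illustrative sketch (not code from the paper), the per-attribute Precision, Recall, and F1 used to compare such classifiers can be computed from binary gold and predicted labels as follows; the attribute name and data values here are toy examples.

```python
def prf1(gold, pred):
    """Precision, recall, and F1 for one binary attribute."""
    tp = sum(1 for g, p in zip(gold, pred) if g and p)        # true positives
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)    # false positives
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy gold vs. predicted labels for a single attribute such as "furry"
gold = [1, 1, 0, 0, 1]
pred = [1, 0, 0, 1, 1]
p, r, f = prf1(gold, pred)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

Evaluating each attribute separately in this way, rather than via downstream object classification, is what distinguishes the comparison described in the abstract.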

Citation

Yu, Y., Eshghi, A., & Lemon, O. (2015). Comparing attribute classifiers for interactive language grounding. In Proceedings of the Fourth Workshop on Vision and Language (pp. 60-69).

Presentation Conference Type Conference Paper (Published)
Conference Name Fourth Workshop on Vision and Language
Start Date Sep 18, 2015
Publication Date 2015-09
Deposit Date Jun 28, 2023
Publicly Available Date Jun 28, 2023
Publisher Association for Computational Linguistics (ACL)
Pages 60-69
Book Title Proceedings of the Fourth Workshop on Vision and Language
Publisher URL https://aclanthology.org/W15-2811/
