
Benchmarking multimodal sentiment analysis

Cambria, E.; Hazarika, D.; Poria, S.; Hussain, A.; Subramanyam, R.B.V.

Authors

E. Cambria

D. Hazarika

S. Poria

A. Hussain

R.B.V. Subramanyam



Abstract

We propose a deep-learning-based framework for multimodal sentiment analysis and emotion recognition. In particular, we leverage the power of convolutional neural networks to obtain a performance improvement of 10% over the state of the art by combining visual, textual, and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research, e.g., the role of speaker-independent models, the importance of different modalities, and generalizability. The framework illustrates the different facets of analysis to be considered while performing multimodal sentiment analysis and, hence, serves as a new benchmark for future research in this emerging field.
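
The abstract describes convolutional feature extraction per modality followed by fusion of visual, textual, and audio features. The sketch below is one possible reading of that idea rather than the authors' implementation: a 1D text CNN plus simple audio and visual encoders whose outputs are concatenated and fed to a classifier. All layer sizes, feature dimensions (300-d word embeddings, 74-d audio, 35-d visual vectors), and the PyTorch framing are illustrative assumptions.

```python
# Minimal sketch of feature-level multimodal fusion (not the paper's code),
# assuming pre-extracted utterance-level features per modality.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """1D CNN over a sequence of word embeddings, max-pooled to a fixed vector."""
    def __init__(self, embed_dim=300, out_dim=128, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, out_dim, kernel_size, padding=1)

    def forward(self, x):                  # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)              # -> (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        return x.max(dim=2).values         # -> (batch, out_dim)

class FusionClassifier(nn.Module):
    """Concatenates text, audio, and visual representations and classifies them."""
    def __init__(self, audio_dim=74, visual_dim=35, num_classes=2):
        super().__init__()
        self.text_enc = TextCNN()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 64), nn.ReLU())
        self.classifier = nn.Linear(128 + 64 + 64, num_classes)

    def forward(self, text, audio, visual):
        fused = torch.cat([self.text_enc(text),
                           self.audio_enc(audio),
                           self.visual_enc(visual)], dim=1)
        return self.classifier(fused)

# Example forward pass with random tensors standing in for real features.
model = FusionClassifier()
logits = model(torch.randn(8, 20, 300), torch.randn(8, 74), torch.randn(8, 35))
print(logits.shape)  # torch.Size([8, 2])
```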

Presentation Conference Type Conference Paper (Published)
Conference Name 18th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2017)
Start Date Apr 17, 2017
End Date Apr 23, 2017
Online Publication Date Oct 10, 2018
Publication Date Oct 10, 2018
Deposit Date Sep 23, 2019
Publisher Springer
Pages 166-179
Series Title Lecture Notes in Computer Science
Series Number 10762
Series ISSN 0302-9743
Book Title Computational Linguistics and Intelligent Text Processing
ISBN 978-3-319-77115-1
DOI https://doi.org/10.1007/978-3-319-77116-8_13
Keywords Multimodal sentiment analysis, Emotion detection, Deep learning, Convolutional neural networks
Public URL http://researchrepository.napier.ac.uk/Output/1792202