Prof Amir Hussain A.Hussain@napier.ac.uk
Professor
Erik Cambria
Thomas Mazzocco
Marco Grassi
Qiu-Feng Wang
Tariq Durrani
A key aspect of achieving natural interaction in machines is multimodality. Besides verbal communication, humans interact through many other channels, e.g., facial expressions, gestures, eye contact, posture, and voice tone. These channels convey not only semantics but also emotional cues that are essential for interpreting the transmitted message. The ability to properly manage such affective information is increasingly recognized as fundamental for developing a new generation of emotion-aware applications in scenarios such as e-learning, e-health, and human-computer interaction. To this end, this work investigates the adoption of different paradigms in text, vocal, and video analysis, in order to lay the basis for the development of an intelligent multimodal affective conversational agent.
Hussain, A., Cambria, E., Mazzocco, T., Grassi, M., Wang, Q.-F., & Durrani, T. (2012, November). Towards IMACA: Intelligent multimodal affective conversational agent. Presented at International Conference on Neural Information Processing: ICONIP 2012, Doha, Qatar
| Field | Value |
|---|---|
| Presentation Conference Type | Conference Paper (published) |
| Conference Name | International Conference on Neural Information Processing: ICONIP 2012 |
| Start Date | Nov 12, 2012 |
| End Date | Nov 15, 2012 |
| Publication Date | 2012 |
| Deposit Date | Sep 23, 2019 |
| Publisher | Springer |
| Volume | 7663 LNCS |
| Pages | 656-663 |
| Series Title | Lecture Notes in Computer Science |
| Series Number | 7663 |
| Book Title | Neural Information Processing: 19th International Conference, ICONIP 2012, Doha, Qatar, November 12-15, 2012, Proceedings, Part I |
| ISBN | 9783642344749 |
| DOI | https://doi.org/10.1007/978-3-642-34475-6_79 |
| Keywords | AI, HCI, Multimodal Sentiment Analysis |
| Public URL | http://researchrepository.napier.ac.uk/Output/1793302 |