Dr. Mandar Gogate M.Gogate@napier.ac.uk
Principal Research Fellow
Dr Kia Dashtipour K.Dashtipour@napier.ac.uk
Lecturer
Ahsan Adeel
Prof Amir Hussain A.Hussain@napier.ac.uk
Professor
ASPIRE is a first-of-its-kind audiovisual speech corpus recorded in real noisy environments (such as cafes and restaurants), designed to support reliable evaluation of next-generation multi-modal speech filtering technologies. The dataset follows the same sentence format as the audio-visual Grid corpus.
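A minimal sketch of how an ASPIRE-style utterance pair might be scored when evaluating a speech-enhancement system, using the common STOI and PESQ metrics. The file paths and directory layout shown are hypothetical and not taken from the corpus documentation; consult the dataset record for the actual structure.

```python
# Hypothetical evaluation sketch for an ASPIRE-style clean/enhanced pair.
# Assumes 16 kHz mono WAV files; paths below are illustrative only.
import soundfile as sf
from pystoi import stoi
from pesq import pesq

def score_utterance(clean_path, enhanced_path):
    """Compute STOI and wideband PESQ for one clean/enhanced utterance pair."""
    clean, fs = sf.read(clean_path)
    enhanced, fs_enh = sf.read(enhanced_path)
    assert fs == fs_enh, "sample rates must match"
    # Trim to the shorter signal in case the two files differ by a few samples.
    n = min(len(clean), len(enhanced))
    clean, enhanced = clean[:n], enhanced[:n]
    return {
        "stoi": stoi(clean, enhanced, fs, extended=False),
        "pesq_wb": pesq(fs, clean, enhanced, "wb"),  # wideband PESQ expects fs = 16 kHz
    }

# Hypothetical paths; the real corpus layout may differ.
print(score_utterance("aspire/clean/s1_bbaf2n.wav", "aspire/enhanced/s1_bbaf2n.wav"))
```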
Gogate, M., Dashtipour, K., Adeel, A., & Hussain, A. (2020). ASPIRE - Real noisy audio-visual speech enhancement corpus. [Data]. https://doi.org/10.5281/zenodo.4585619
Online Publication Date | Nov 1, 2020
Publication Date | Nov 1, 2020
Deposit Date | Apr 26, 2022
DOI | https://doi.org/10.5281/zenodo.4585619
Keywords | speech enhancement, speech separation, audio-visual, deep learning
Public URL | http://researchrepository.napier.ac.uk/Output/2866106
Collection Date | Jun 1, 2018
Robust Real-time Audio-Visual Speech Enhancement based on DNN and GAN (2024), Journal Article
Arabic Sentiment Analysis Based on Word Embeddings and Deep Learning (2023), Journal Article
Arabic sentiment analysis using dependency-based rules and deep neural networks (2022), Journal Article