CochleaNet: A robust language-independent audio-visual model for real-time speech enhancement

Gogate, Mandar; Dashtipour, Kia; Adeel, Ahsan; Hussain, Amir

Abstract

Noisy situations cause huge problems for the hearing-impaired, as hearing aids often make speech more audible but do not always restore intelligibility. In noisy settings, humans routinely exploit the audio-visual (AV) nature of speech to selectively suppress background noise and focus on the target speaker. In this paper, we present a novel language-, noise- and speaker-independent AV deep neural network (DNN) architecture, termed CochleaNet, for causal or real-time speech enhancement (SE). The model jointly exploits noisy acoustic cues and noise-robust visual cues to focus on the desired speaker and improve speech intelligibility. The proposed SE framework is evaluated using a first-of-its-kind AV binaural speech corpus, ASPIRE, recorded in real noisy environments, including cafeteria and restaurant settings. We demonstrate superior performance of our approach over state-of-the-art SE approaches, including recent DNN-based SE models, in terms of both objective measures and subjective listening tests. In addition, our work challenges a popular belief that the scarcity of a multi-lingual, large-vocabulary AV corpus and a wide variety of noises is a major bottleneck in building robust language-, speaker- and noise-independent SE systems. We show that a model trained on a synthetic mixture of the benchmark GRID corpus (with 33 speakers and a small English vocabulary) and CHiME 3 noises (comprising bus, pedestrian, cafeteria, and street noises) can generalise well, not only to large-vocabulary corpora with a wide variety of speakers and noises, but also to completely unrelated languages such as Mandarin.
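To make the abstract's idea of jointly exploiting acoustic and visual cues concrete, the sketch below shows a generic audio-visual mask-estimation network for speech enhancement. It is not the published CochleaNet architecture; the layer sizes, concatenation-based fusion, causal LSTM, and the hypothetical class name AVMaskEstimator are illustrative assumptions only.

```python
# Minimal sketch of an audio-visual mask-estimation network for speech
# enhancement. NOT the published CochleaNet architecture; sizes and the
# fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class AVMaskEstimator(nn.Module):
    def __init__(self, n_freq_bins=257, n_visual_feats=128, hidden=256):
        super().__init__()
        # Separate encoders for noisy audio spectrogram frames and
        # lip-region visual features extracted per video frame.
        self.audio_enc = nn.Linear(n_freq_bins, hidden)
        self.visual_enc = nn.Linear(n_visual_feats, hidden)
        # Fuse by concatenation, then model temporal context with a
        # unidirectional LSTM so the sketch remains causal / real-time capable.
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        # Predict a time-frequency mask in [0, 1] to apply to the
        # noisy magnitude spectrogram.
        self.mask_out = nn.Sequential(nn.Linear(hidden, n_freq_bins), nn.Sigmoid())

    def forward(self, noisy_spec, visual_feats):
        # noisy_spec:   (batch, time, n_freq_bins)  noisy magnitude frames
        # visual_feats: (batch, time, n_visual_feats) time-aligned visual features
        a = torch.relu(self.audio_enc(noisy_spec))
        v = torch.relu(self.visual_enc(visual_feats))
        fused, _ = self.lstm(torch.cat([a, v], dim=-1))
        mask = self.mask_out(fused)
        return mask * noisy_spec  # enhanced magnitude estimate

# Example: one utterance of ~200 frames.
model = AVMaskEstimator()
enhanced = model(torch.rand(1, 200, 257), torch.rand(1, 200, 128))
print(enhanced.shape)  # torch.Size([1, 200, 257])
```

Mask-based filtering of the noisy spectrogram is one common way such AV SE systems are set up; the paper's own model, training targets, and evaluation details are described in the full text linked below.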

Journal Article Type: Article
Acceptance Date: Apr 11, 2020
Online Publication Date: Apr 21, 2020
Publication Date: Nov 2020
Deposit Date: Oct 12, 2020
Journal: Information Fusion
Print ISSN: 1566-2535
Publisher: Elsevier
Peer Reviewed: Yes
Volume: 63
Pages: 273-285
DOI: https://doi.org/10.1016/j.inffus.2020.04.001
Keywords: Audio-visual, Speech enhancement, Speech separation, Deep learning, Real noisy audio-visual corpus, Speaker-independent, Noise-independent, Language-independent, Multi-modal hearing aids
Public URL: http://researchrepository.napier.ac.uk/Output/2692701