
Deep and sparse learning in speech and language processing: An overview

Wang, Dong; Zhou, Qiang; Hussain, Amir

Abstract

Large-scale deep neural models, e.g., deep neural networks (DNN) and recurrent neural networks (RNN), have demonstrated significant success in solving various challenging tasks of speech and language processing (SLP), including speech recognition, speech synthesis, document classification and question answering. This growing impact corroborates the neurobiological evidence concerning the presence of layer-wise deep processing in the human brain. On the other hand, sparse coding representation has also gained similar success in SLP, particularly in signal processing, demonstrating sparsity as another important neurobiological characteristic. Recently, research in these two directions has led to increasing cross-fertilisation of ideas, and a unified Sparse Deep or Deep Sparse learning framework therefore warrants much attention. This paper aims to provide an overview of the growing interest in this unified framework, and also outlines future research possibilities in this multi-disciplinary area.
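
For context, the sparse coding representation referred to above is conventionally obtained by solving an L1-regularised reconstruction problem; the formulation below is standard background rather than a result taken from the paper itself:

\[
\min_{D,\,\alpha} \; \tfrac{1}{2}\lVert x - D\alpha \rVert_2^2 + \lambda \lVert \alpha \rVert_1,
\]

where \(x\) is the input signal, \(D\) is a (typically overcomplete) dictionary, \(\alpha\) is the sparse code, and \(\lambda\) controls the degree of sparsity. Deep models such as DNNs and RNNs instead learn layered distributed representations, which is the contrast that motivates the unified Sparse Deep / Deep Sparse framework discussed in the paper.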

Presentation Conference Type Conference Paper (Published)
Conference Name BICS 2016: International Conference on Brain Inspired Cognitive Systems
Start Date Nov 28, 2016
End Date Nov 30, 2016
Online Publication Date Nov 13, 2016
Publication Date 2016
Deposit Date Oct 4, 2019
Publisher Springer
Pages 171-183
Series Title Lecture Notes in Computer Science
Series Number 10023
Series ISSN 0302-9743
Book Title Advances in Brain Inspired Cognitive Systems
ISBN 978-3-319-49684-9
DOI https://doi.org/10.1007/978-3-319-49685-6_16
Keywords Deep learning; Sparse coding; Speech processing; Language processing
Public URL http://researchrepository.napier.ac.uk/Output/1792683