Cognitively inspired feature extraction and speech recognition for automated hearing loss testing

Authors
Shibli Nisar
Muhammad Tariq
Ahsan Adeel
Dr Mandar Gogate, Principal Research Fellow (M.Gogate@napier.ac.uk)
Prof Amir Hussain, Professor (A.Hussain@napier.ac.uk)
Abstract
Hearing loss, a partial or total inability to hear, is one of the most commonly reported disabilities. A hearing test can be carried out by an audiologist to assess a patient's auditory system. However, the procedure requires an appointment, which can result in delays and practitioner fees. In addition, there are often challenges associated with the unavailability of equipment and qualified practitioners, particularly in remote areas. This paper presents a novel approach that automatically identifies hearing impairment based on cognitively inspired feature extraction and speech recognition. The proposed system uses an adaptive filter bank with weighted Mel-frequency cepstral coefficients (MFCCs) for feature extraction. The adaptive filter bank implementation is inspired by the principle of spectrum sensing in cognitive radio, which is aware of its environment and adapts to statistical variations in the input stimuli by learning from the environment. Comparative performance evaluation demonstrates the potential of our automated hearing test method to achieve results comparable to the clinical ground truth established by the expert audiologist's tests. The overall absolute error of the proposed model, compared with the expert audiologist test, is less than 4.9 dB and 4.4 dB for the pure tone and speech audiometry tests, respectively. The overall accuracy achieved is 96.67% with a hidden Markov model (HMM). The proposed method potentially offers a second opinion to audiologists, and serves as a cost-effective pre-screening test to predict hearing loss at an early stage. In future work, the authors intend to explore the application of advanced deep learning and optimization approaches to further enhance the performance of the automated testing prototype, considering imperfect datasets with real-world background noise.
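The paper's adaptive, cognitively inspired weighted filter bank is not reproduced in this record. For context, the standard MFCC front end on which such a weighted variant builds can be sketched in plain NumPy: pre-emphasis, framing, windowing, power spectrum, triangular mel filtering, log compression, and a DCT. All parameter values below (16 kHz sampling, 26 filters, 13 coefficients) are common illustrative defaults, not necessarily the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters evenly spaced on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)  # rising slope
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)  # falling slope
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    """Standard (non-adaptive) MFCC features; illustrative parameters."""
    # Pre-emphasis boosts high frequencies before spectral analysis
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    n_frames = 1 + (len(emph) - frame_len) // hop
    frames = np.stack([emph[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames *= np.hamming(frame_len)
    power = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft
    fb_energy = power @ mel_filterbank(n_filters, n_fft, sr).T
    log_e = np.log(np.maximum(fb_energy, 1e-10))
    # DCT-II decorrelates the log filter-bank energies
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps),
                                    (2 * n + 1) / (2 * n_filters)))
    return log_e @ basis.T  # shape: (n_frames, n_ceps)
```

In the paper's approach, the fixed mel filter bank above would be replaced by an adaptive, weighted filter bank that shifts with the statistics of the input stimuli; the resulting coefficients are then classified with an HMM.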
Citation
Nisar, S., Tariq, M., Adeel, A., Gogate, M., & Hussain, A. (2019). Cognitively inspired feature extraction and speech recognition for automated hearing loss testing. Cognitive Computation, 11(4), 489-502. https://doi.org/10.1007/s12559-018-9607-4
| Field | Value |
|---|---|
| Journal Article Type | Article |
| Acceptance Date | Oct 23, 2018 |
| Online Publication Date | Feb 13, 2019 |
| Publication Date | 2019 |
| Deposit Date | Dec 10, 2019 |
| Print ISSN | 1866-9956 |
| Electronic ISSN | 1866-9964 |
| Publisher | Springer |
| Peer Reviewed | Peer Reviewed |
| Volume | 11 |
| Issue | 4 |
| Pages | 489-502 |
| DOI | https://doi.org/10.1007/s12559-018-9607-4 |
| Keywords | Hearing loss, Speech recognition, Machine learning, Automation, Cognitive radio |
| Public URL | http://researchrepository.napier.ac.uk/Output/2275819 |