Research Repository

All Outputs (182)

In-Depth Evaluation and Analysis of Hyperspectral Unmixing Algorithms with Cognitive Models (2025)
Presentation / Conference Contribution
Deng, S., Ren, J., Chen, R., Zhao, H., & Hussain, A. (2024, December). In-Depth Evaluation and Analysis of Hyperspectral Unmixing Algorithms with Cognitive Models. Presented at 14th International Conference, BICS 2024, Hefei, China

This paper evaluates several representative algorithms on real datasets. It analyzes Vertex Component Analysis (VCA), Total Variation Regularized Reweighted Sparse Nonnegative Matrix Factorization (RSNMF), Sparse Hyperspectral Unmixing (HU) with Mixe...
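For context on the linear mixing model that unmixing algorithms such as VCA and RSNMF build on, a minimal sketch with synthetic data (the variable names, dimensions, and the nonnegative-least-squares inversion step are illustrative assumptions, not the paper's method):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical scene: 50 spectral bands, 3 endmembers, 100 pixels
bands, n_end, pixels = 50, 3, 100
E = rng.random((bands, n_end))                # endmember spectra (columns)
A = rng.dirichlet(np.ones(n_end), pixels).T   # abundances: nonnegative, sum to one
X = E @ A                                     # observed pixels, linear mixing, noise-free

# Given the endmembers, recover per-pixel abundances with nonnegative
# least squares -- the inversion step shared by most unmixing pipelines.
A_hat = np.column_stack([nnls(E, X[:, i])[0] for i in range(pixels)])

err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

In the noise-free case the abundances are recovered essentially exactly; the algorithms compared in the paper differ mainly in how they estimate the endmembers and regularize the inversion under real noise.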

Development of multi-modal hearing aids to enhance speech perception in noise (2024)
Presentation / Conference Contribution
Goman, A., Gogate, M., Hussain, A., Dashtipour, K., Buck, B., Akeroyd, M., Anwar, U., Arslan, T., Hardy, D., & Hussain, A. (2024, September). Development of multi-modal hearing aids to enhance speech perception in noise. Presented at World Congress of Audiology, Paris, France

A Framework for Speech Enhancement based on Audio Signal and Speaker Embeddings (2024)
Presentation / Conference Contribution
Nazemi, A., Sami, A., Sami, M., & Hussain, A. (2024, September). A Framework for Speech Enhancement based on Audio Signal and Speaker Embeddings. Presented at 3rd COG-MHEAR Workshop on Audio-Visual Speech Enhancement (AVSEC), Kos Island, Greece

This study addresses the challenge of speech enhancement within an audio-only context. Our proposed framework extracts speaker embeddings and voice signals, subsequently integrating these components to synthesise a voice based on the extracted data...

Iterative Speech Enhancement with Transformers (2024)
Presentation / Conference Contribution
Nazemi, A., Sami, A., Sami, M., & Hussain, A. (2024, September). Iterative Speech Enhancement with Transformers. Presented at 3rd COG-MHEAR Workshop on Audio-Visual Speech Enhancement (AVSEC), Kos, Greece

Enhancing audio quality in audio-video speech enhancement (AVSE) is a crucial step in improving the performance of speech recognition systems, particularly by integrating visual and auditory data to create more robust and accurate models. This study...

Evaluating the benefits of using cameras with hearing aids to enhance speech understanding (2024)
Presentation / Conference Contribution
Hardy, D., Buck, B., Goman, A., Hussain, A., Kirton-Wingate, J., Gogate, M., Dashtipour, K., Akeroyd, M., & Hussain, A. (2024, August). Evaluating the benefits of using cameras with hearing aids to enhance speech understanding. Poster presented at International Hearing-Aid Research Conference (IHCON 2024), Lake Tahoe, California, USA

Multi-modal hearing aids could make it easier to hear one voice when several people are talking in noisy situations. Including:
● When users do not want to miss or misunderstand information and do not want to have to ask for information to be repeat...

Deep Learning-Based Receiver Design for IoT Multi-User Uplink 5G-NR System (2024)
Presentation / Conference Contribution
Gupta, A., Bishnu, A., Ratnarajah, T., Adeel, A., Hussain, A., & Sellathurai, M. (2023, December). Deep Learning-Based Receiver Design for IoT Multi-User Uplink 5G-NR System. Presented at GLOBECOM 2023 - 2023 IEEE Global Communications Conference, Kuala Lumpur, Malaysia

Designing an efficient receiver for multiple users transmitting orthogonal frequency-division multiplexing signals to the base station remains a challenging interference-limited problem in 5G new radio (5G-NR) systems. This can lead to stagnation of de...

Socioeconomic and Geographic Barriers to Hearing Healthcare: The Patient's Perspective (2024)
Presentation / Conference Contribution
Kirkwood, M., Hussain, A., Porter-Armstrong, A., & Goman, A. (2024, February). Socioeconomic and Geographic Barriers to Hearing Healthcare: The Patient's Perspective. Poster presented at American Auditory Society 52nd Annual Scientific and Technology Meeting, Scottsdale, Arizona, USA

Objectives: Despite the widespread prevalence of hearing loss, many individuals encounter substantial barriers to accessing hearing healthcare and using technology. These challenges include financial limitations, navigating complex healthcare syste...

5G-IoT Cloud based Demonstration of Real-Time Audio-Visual Speech Enhancement for Multimodal Hearing-aids (2023)
Presentation / Conference Contribution
Gupta, A., Bishnu, A., Gogate, M., Dashtipour, K., Arslan, T., Adeel, A., Hussain, A., Ratnarajah, T., & Sellathurai, M. (2023, August). 5G-IoT Cloud based Demonstration of Real-Time Audio-Visual Speech Enhancement for Multimodal Hearing-aids. Presented at Interspeech 2023, Dublin, Ireland

Over twenty percent of the world's population suffers from some form of hearing loss, making it one of the most significant public health challenges. Current hearing aids commonly amplify noises while failing to improve speech comprehension in crowde...

Application for Real-time Audio-Visual Speech Enhancement (2023)
Presentation / Conference Contribution
Gogate, M., Dashtipour, K., & Hussain, A. (2023, August). Application for Real-time Audio-Visual Speech Enhancement. Presented at Interspeech 2023, Dublin, Ireland

This short paper demonstrates a first of its kind audio-visual (AV) speech enhancement (SE) desktop application that isolates, in real-time, the voice of a target speaker from noisy audio input. The deep neural network model integrated in this applic...

Solving the cocktail party problem using Multi-modal Hearing Assistive Technology Prototype (2023)
Presentation / Conference Contribution
Gogate, M., Dashtipour, K., & Hussain, A. (2023, December). Solving the cocktail party problem using Multi-modal Hearing Assistive Technology Prototype. Presented at Acoustics 2023, Sydney, Australia

Hearing loss is a major global health problem, affecting over 1.5 billion people. According to estimations by the World Health Organization, 83% of those who could benefit from hearing assistive devices do not use them. The limited adoption of hearin...

Resolving the Decreased Rank Attack in RPL’s IoT Networks (2023)
Presentation / Conference Contribution
Ghaleb, B., Al-Dubai, A., Hussain, A., Ahmad, J., Romdhani, I., & Jaroucheh, Z. (2023, June). Resolving the Decreased Rank Attack in RPL’s IoT Networks. Presented at 19th Annual International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT 2023), Pafos, Cyprus

The Routing Protocol for Low power and Lossy networks (RPL) has been developed by the Internet Engineering Task Force (IETF) standardization body to serve as a part of the 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks) standard, a core...

Wearing multi-modal hearing aids (2023)
Presentation / Conference Contribution
Hardy, D., Akeroyd, M., & Hussain, A. (2023, September). Wearing multi-modal hearing aids. Presented at Basic Auditory Science, Imperial College London, UK

Hearing aids with audio-visual inputs could select and single out speech in noisy environments through use of cameras. Though tried repeatedly in the past, modern technology could revolutionise them. These multi-modal hearing aids could make it easie...

Towards Pose-Invariant Audio-Visual Speech Enhancement in the Wild for Next-Generation Multi-Modal Hearing Aids (2023)
Presentation / Conference Contribution
Gogate, M., Dashtipour, K., & Hussain, A. (2023, June). Towards Pose-Invariant Audio-Visual Speech Enhancement in the Wild for Next-Generation Multi-Modal Hearing Aids. Presented at 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), Rhodes Island, Greece

Classical audio-visual (AV) speech enhancement (SE) and separation methods have been successful at operating under constrained environments; however, the speech quality and intelligibility improvement is significantly reduced in unconstrained real-wo...

Audio-visual speech enhancement and separation by utilizing multi-modal self-supervised embeddings (2023)
Presentation / Conference Contribution
Chern, I.-C., Hung, K.-H., Chen, Y.-T., Hussain, T., Gogate, M., Hussain, A., Tsao, Y., & Hou, J.-C. (2023, June). Audio-visual speech enhancement and separation by utilizing multi-modal self-supervised embeddings. Presented at 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), Rhodes Island, Greece

AV-HuBERT, a multi-modal self-supervised learning model, has been shown to be effective for categorical problems such as automatic speech recognition and lip-reading. This suggests that useful audio-visual speech representations can be obtained via u...

Frequency-Domain Functional Links For Nonlinear Feedback Cancellation In Hearing Aids (2023)
Presentation / Conference Contribution
Nezamdoust, A., Gogate, M., Dashtipour, K., Hussain, A., & Comminiello, D. (2023, June). Frequency-Domain Functional Links For Nonlinear Feedback Cancellation In Hearing Aids. Presented at 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), Rhodes Island, Greece

The problem of feedback cancellation can be seen as a function approximation task, which often is nonlinear in real-world hearing assistive technologies. Nonlinear methods adopted for this task must exhibit outstanding modeling performance and reduce...
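As an illustration of the functional-link idea the abstract alludes to, here is a minimal time-domain functional-link LMS sketch (the paper itself works in the frequency domain; the trigonometric expansion, the toy memoryless feedback path, and all parameters below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def flink(x):
    # Trigonometric functional-link expansion of one input sample:
    # a fixed nonlinear feature map followed by a linear adaptive filter.
    return np.array([x, np.sin(np.pi * x), np.cos(np.pi * x)])

N = 5000
u = rng.uniform(-1, 1, N)               # loudspeaker (receiver) signal
d = 0.5 * u + 0.2 * np.sin(np.pi * u)   # toy nonlinear feedback path

w = np.zeros(3)                         # functional-link filter weights
mu = 0.05                               # LMS step size
e = np.empty(N)
for n in range(N):
    phi = flink(u[n])
    y = w @ phi                         # feedback estimate
    e[n] = d[n] - y                     # residual after cancellation
    w += mu * e[n] * phi                # LMS weight update
```

Because the toy feedback path lies in the span of the expansion, the residual error power decays toward zero; real feedback paths have memory, which is why practical schemes (including the frequency-domain approach in this paper) use filtered expansions rather than a memoryless map.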

Towards individualised speech enhancement: An SNR preference learning system for multi-modal hearing aids (2023)
Presentation / Conference Contribution
Kirton-Wingate, J., Ahmed, S., Gogate, M., Tsao, Y., & Hussain, A. (2023, June). Towards individualised speech enhancement: An SNR preference learning system for multi-modal hearing aids. Presented at 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), Rhodes Island, Greece

Since the advent of deep learning (DL), speech enhancement (SE) models have performed well under a variety of noise conditions. However, such systems may still introduce sonic artefacts, sound unnatural, and restrict the ability for a user to hear am...

Live Demonstration: Cloud-based Audio-Visual Speech Enhancement in Multimodal Hearing-aids (2023)
Presentation / Conference Contribution
Bishnu, A., Gupta, A., Gogate, M., Dashtipour, K., Arslan, T., Adeel, A., Hussain, A., Sellathurai, M., & Ratnarajah, T. (2023, May). Live Demonstration: Cloud-based Audio-Visual Speech Enhancement in Multimodal Hearing-aids. Presented at 2023 IEEE International Symposium on Circuits and Systems (ISCAS), Monterey, California

Hearing loss is among the most serious public health problems, affecting as much as 20% of the worldwide population. Even cutting-edge multi-channel audio-only speech enhancement (SE) algorithms used in modern hearing aids face significant hurdles si...

Live Demonstration: Real-time Multi-modal Hearing Assistive Technology Prototype (2023)
Presentation / Conference Contribution
Gogate, M., Hussain, A., Dashtipour, K., & Hussain, A. (2023, May). Live Demonstration: Real-time Multi-modal Hearing Assistive Technology Prototype. Presented at 2023 IEEE International Symposium on Circuits and Systems (ISCAS), Monterey, California

Hearing loss affects at least 1.5 billion people globally. The WHO estimates 83% of people who could benefit from hearing aids do not use them. Barriers to HA uptake are multifaceted but include ineffectiveness of current HA technology in noisy envir...