Research Repository

Outputs (12)

Application for Real-time Audio-Visual Speech Enhancement (2023)
Presentation / Conference Contribution
Gogate, M., Dashtipour, K., & Hussain, A. (2023, August). Application for Real-time Audio-Visual Speech Enhancement. Presented at Interspeech 2023, Dublin, Ireland

This short paper demonstrates a first-of-its-kind audio-visual (AV) speech enhancement (SE) desktop application that isolates, in real time, the voice of a target speaker from noisy audio input. The deep neural network model integrated in this applic...
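For intuition, the processing loop such an application implies can be sketched in a few lines: synchronized audio and lip frames go in, a neural ratio mask is estimated, and enhanced audio comes out. This is a minimal illustration under assumed interfaces; the model, mic, camera and output objects are hypothetical stand-ins, not the application's actual code.

    import numpy as np

    FRAME = 512  # samples per processing hop at 16 kHz

    def enhance_stream(model, mic, camera, out):
        """Hypothetical real-time AV enhancement loop (illustrative only)."""
        while True:
            audio = mic.read(FRAME)                   # (FRAME,) float32 samples
            lips = camera.read_lip_roi()              # cropped mouth-region frame
            spec = np.fft.rfft(audio)                 # noisy complex spectrum
            mask = model.predict(np.abs(spec), lips)  # DNN ratio mask in [0, 1]
            out.write(np.fft.irfft(mask * spec).astype(np.float32))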

5G-IoT Cloud based Demonstration of Real-Time Audio-Visual Speech Enhancement for Multimodal Hearing-aids (2023)
Presentation / Conference Contribution
Gupta, A., Bishnu, A., Gogate, M., Dashtipour, K., Arslan, T., Adeel, A., Hussain, A., Ratnarajah, T., & Sellathurai, M. (2023, August). 5G-IoT Cloud based Demonstration of Real-Time Audio-Visual Speech Enhancement for Multimodal Hearing-aids.

Over twenty percent of the world's population suffers from some form of hearing loss, making it one of the most significant public health challenges. Current hearing aids commonly amplify noise while failing to improve speech comprehension in crowde...

Solving the cocktail party problem using Multi-modal Hearing Assistive Technology Prototype (2023)
Presentation / Conference Contribution
Gogate, M., Dashtipour, K., & Hussain, A. (2023, December). Solving the cocktail party problem using Multi-modal Hearing Assistive Technology Prototype. Presented at Acoustics 2023, Sydney, Australia

Hearing loss is a major global health problem, affecting over 1.5 billion people. According to World Health Organization estimates, 83% of those who could benefit from hearing assistive devices do not use them. The limited adoption of hearin...

Resolving the Decreased Rank Attack in RPL’s IoT Networks (2023)
Presentation / Conference Contribution
Ghaleb, B., Al-Dubai, A., Hussain, A., Romdhani, I., & Jaroucheh, Z. (2023). Resolving the Decreased Rank Attack in RPL’s IoT Networks. In 2023 19th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT).

The Routing Protocol for Low power and Lossy networks (RPL) has been developed by the Internet Engineering Task Force (IETF) standardization body to serve as a part of the 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks) standard, a core...
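For context, RPL nodes advertise a rank that must strictly increase away from the root, and a decreased rank attacker advertises a falsely low rank to attract traffic through itself. A toy consistency check along those lines (illustrative only; not the mitigation proposed in the paper):

    MIN_HOP_RANK_INCREASE = 256  # RPL default (RFC 6550)

    def rank_is_consistent(advertised_rank: int, parent_rank: int) -> bool:
        """Accept a child's advertised rank only if it actually increased."""
        return advertised_rank >= parent_rank + MIN_HOP_RANK_INCREASE

    assert rank_is_consistent(512, 256)      # honest node one hop below its parent
    assert not rank_is_consistent(256, 512)  # decreased-rank advertisement rejected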

Towards Pose-Invariant Audio-Visual Speech Enhancement in the Wild for Next-Generation Multi-Modal Hearing Aids (2023)
Presentation / Conference Contribution
Gogate, M., Dashtipour, K., & Hussain, A. (2023). Towards Pose-Invariant Audio-Visual Speech Enhancement in the Wild for Next-Generation Multi-Modal Hearing Aids. In 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW).

Classical audio-visual (AV) speech enhancement (SE) and separation methods have been successful at operating under constrained environments; however, the speech quality and intelligibility improvement is significantly reduced in unconstrained real-wo...

Audio-visual speech enhancement and separation by utilizing multi-modal self-supervised embeddings (2023)
Presentation / Conference Contribution
Chern, I., Hung, K., Chen, Y., Hussain, T., Gogate, M., Hussain, A., Tsao, Y., & Hou, J. (2023, June). Audio-visual speech enhancement and separation by utilizing multi-modal self-supervised embeddings. Presented at 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW).

AV-HuBERT, a multi-modal self-supervised learning model, has been shown to be effective for categorical problems such as automatic speech recognition and lip-reading. This suggests that useful audio-visual speech representations can be obtained via u...
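The general recipe hinted at here, conditioning an enhancement model on frozen self-supervised features, can be sketched as follows. The random tensor stands in for AV-HuBERT embeddings, and the small mask estimator is a generic stand-in rather than the authors' architecture.

    import torch
    import torch.nn as nn

    class MaskEstimator(nn.Module):
        def __init__(self, n_freq=257, emb_dim=768, hidden=256):
            super().__init__()
            self.rnn = nn.GRU(n_freq + emb_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_freq)

        def forward(self, noisy_mag, av_emb):
            # noisy_mag: (B, T, n_freq) STFT magnitudes; av_emb: (B, T, emb_dim)
            h, _ = self.rnn(torch.cat([noisy_mag, av_emb], dim=-1))
            return torch.sigmoid(self.out(h))  # ratio mask in [0, 1]

    mag = torch.rand(1, 100, 257)      # noisy STFT magnitudes
    emb = torch.randn(1, 100, 768)     # stand-in for frozen AV-HuBERT features
    mask = MaskEstimator()(mag, emb)   # applied elementwise to the noisy STFT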

Frequency-Domain Functional Links For Nonlinear Feedback Cancellation In Hearing Aids (2023)
Presentation / Conference Contribution
Nezamdoust, A., Gogate, M., Dashtipour, K., Hussain, A., & Comminiello, D. (2023, June). Frequency-Domain Functional Links For Nonlinear Feedback Cancellation In Hearing Aids. Presented at 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).

The problem of feedback cancellation can be seen as a function approximation task, which is often nonlinear in real-world hearing assistive technologies. Nonlinear methods adopted for this task must exhibit outstanding modeling performance and reduce...
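The functional-link idea itself is compact: expand the input with fixed trigonometric nonlinearities, then adapt a purely linear filter on the expanded features. A time-domain toy with a normalized LMS update is below; the paper's frequency-domain formulation is the efficient variant, so this only illustrates the principle.

    import numpy as np

    def flann_expand(x):
        """Trigonometric functional-link expansion of one input sample."""
        return np.array([x, np.sin(np.pi * x), np.cos(np.pi * x)])

    def flann_nlms(x, d, mu=0.5, eps=1e-8):
        """Adapt linear weights over nonlinear features to track d from x."""
        w = np.zeros(3)
        y = np.zeros_like(d)
        for n in range(len(x)):
            phi = flann_expand(x[n])
            y[n] = w @ phi
            e = d[n] - y[n]
            w += mu * e * phi / (phi @ phi + eps)  # normalized LMS step
        return y

    x = np.random.uniform(-1, 1, 4000)     # loudspeaker signal
    d = 0.5 * x + 0.2 * np.sin(np.pi * x)  # mildly nonlinear feedback path
    y = flann_nlms(x, d)                   # output converges towards d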

Towards individualised speech enhancement: An SNR preference learning system for multi-modal hearing aids (2023)
Presentation / Conference Contribution
Kirton-Wingate, J., Ahmed, S., Gogate, M., Tsao, Y., & Hussain, A. (2023, June). Towards individualised speech enhancement: An SNR preference learning system for multi-modal hearing aids. Presented at 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).

Since the advent of deep learning (DL), speech enhancement (SE) models have performed well under a variety of noise conditions. However, such systems may still introduce sonic artefacts, sound unnatural, and restrict the ability of a user to hear am...
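One way to picture such a preference learning system (a hypothetical sketch, not the paper's method): keep an operating point that blends enhanced and unprocessed audio, and nudge it after each A/B judgement the user makes.

    import numpy as np

    def blend(enhanced, noisy, alpha):
        """alpha=1 gives full enhancement; alpha=0 leaves ambient audio untouched."""
        return alpha * enhanced + (1.0 - alpha) * noisy

    def update_preference(alpha, wants_more_enhancement, step=0.05):
        """Move the operating point after an A/B preference judgement."""
        alpha += step if wants_more_enhancement else -step
        return float(np.clip(alpha, 0.0, 1.0))

    alpha = 0.8                              # initial operating point
    alpha = update_preference(alpha, False)  # user preferred more ambience -> 0.75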

Live Demonstration: Real-time Multi-modal Hearing Assistive Technology Prototype (2023)
Presentation / Conference Contribution
Gogate, M., Hussain, A., Dashtipour, K., & Hussain, A. (2023). Live Demonstration: Real-time Multi-modal Hearing Assistive Technology Prototype. In IEEE ISCAS 2023 Symposium Proceedings. https://doi.org/10.1109/iscas46773.2023.10182070

Hearing loss affects at least 1.5 billion people globally. The WHO estimates that 83% of people who could benefit from hearing aids do not use them. Barriers to hearing aid (HA) uptake are multifaceted but include the ineffectiveness of current HA technology in noisy envir...