Research Repository

DDformer: Dimension decomposition transformer with semi-supervised learning for underwater image enhancement

Gao, Zhi; Yang, Jing; Jiang, Fengling; Jiao, Xixiang; Dashtipour, Kia; Gogate, Mandar; Hussain, Amir

Authors

Zhi Gao

Jing Yang

Fengling Jiang

Xixiang Jiao

Kia Dashtipour

Mandar Gogate

Amir Hussain

Abstract

Vision-guided Autonomous Underwater Vehicles (AUVs) have become important tools for human exploration of the ocean. However, distorted underwater images severely limit their visual perception, making it difficult to meet the demands of sensing complex underwater environments. Recent advances in deep learning have driven rapid progress in underwater image enhancement, and the emergence of the Transformer architecture has further extended deep learning's capabilities. Directly applying Transformers to underwater image enhancement, however, is challenging: computing pixel-level global information is expensive, and Transformers are weak at extracting local features. In this paper, we present a novel approach that combines a dimension decomposition Transformer with semi-supervised learning for underwater image enhancement. First, we propose dimension decomposition attention, which enables the Transformer to compute global dependencies directly at the original scale and to correct color distortions effectively. Concurrently, we employ convolutional neural networks to compensate for the Transformer's limitations in extracting local features, thereby enriching details and textures. We then introduce a multi-stage Transformer strategy that divides the network into high- and low-resolution stages for multi-scale global information extraction; this helps correct color distortions while sharpening the network's focus on severely degraded regions. Moreover, we design a semi-supervised learning framework to reduce reliance on paired datasets, together with a corresponding multi-scale fusion discriminator that increases sensitivity to the input data. Experimental results demonstrate that our method outperforms state-of-the-art approaches in both subjective perception and overall evaluation metrics, showing strong learning and generalization capabilities. It also brings significant improvements to downstream visual engineering applications.
The code of the proposed DDformer is available at https://github.com/ZhiGao-hfuu/DDformer
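The abstract does not reproduce the method details, but one plausible reading of "dimension decomposition attention" is self-attention computed separately along the height and width axes of a feature map, which preserves a global receptive field while avoiding the cost of full (HW)×(HW) pixel-level attention. The NumPy sketch below is an illustrative assumption only: the function names, the sequential height-then-width ordering, and the omission of Q/K/V projections and multi-head splitting are ours, not the authors' design. Consult the linked GitHub repository for the actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axis_attention(q, k, v, axis):
    """Scaled dot-product attention restricted to one spatial axis.

    q, k, v: arrays of shape (H, W, C).
    axis=0 attends along the height dimension, axis=1 along the width.
    """
    if axis == 0:  # move the attended axis into the sequence position
        q, k, v = (np.swapaxes(t, 0, 1) for t in (q, k, v))
    # Batched attention: (rows, len, C) @ (rows, C, len) -> (rows, len, len)
    scores = q @ np.swapaxes(k, 1, 2) / np.sqrt(q.shape[-1])
    out = softmax(scores) @ v
    if axis == 0:
        out = np.swapaxes(out, 0, 1)
    return out

def dimension_decomposed_attention(x):
    """Attend along height, then width (no learned projections, for clarity).

    Cost is O(HW(H+W)) instead of O((HW)^2) for full pixel attention,
    yet every output position can aggregate information from the whole map.
    """
    h = axis_attention(x, x, x, axis=0)
    return axis_attention(h, h, h, axis=1)
```

Chaining the two axis passes is what lets the decomposition stay global: a height pass followed by a width pass connects any pixel to any other through one intermediate position.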

Citation

Gao, Z., Yang, J., Jiang, F., Jiao, X., Dashtipour, K., Gogate, M., & Hussain, A. (2024). DDformer: Dimension decomposition transformer with semi-supervised learning for underwater image enhancement. Knowledge-Based Systems, 297, Article 111977. https://doi.org/10.1016/j.knosys.2024.111977

Journal Article Type Article
Acceptance Date May 22, 2024
Online Publication Date May 23, 2024
Publication Date 2024-08
Deposit Date Aug 9, 2024
Print ISSN 0950-7051
Publisher Elsevier
Peer Reviewed Peer Reviewed
Volume 297
Article Number 111977
DOI https://doi.org/10.1016/j.knosys.2024.111977
Public URL http://researchrepository.napier.ac.uk/Output/3692248