Cross-modality interactive attention network for multispectral pedestrian detection

Zhang, Lu; Liu, Zhiyong; Zhang, Shifeng; Yang, Xu; Qiao, Hong; Huang, Kaizhu; Hussain, Amir

Abstract

Multispectral pedestrian detection is an emerging solution with great promise in many around-the-clock applications, such as autonomous driving and security surveillance. To exploit the complementary nature of the two modalities and remedy their contradictory appearances, in this paper we propose a novel cross-modality interactive attention network that takes full advantage of the interactive properties of the multispectral input sources. Specifically, we first use the color (RGB) and thermal streams to build a detached feature hierarchy for each modality; then, taking the global features as input, the correlations between the two modalities are encoded in an attention module. Next, the channel responses of the halfway feature maps are recalibrated adaptively for the subsequent fusion operation. The architecture is constructed in a multi-scale format to better handle pedestrians at different scales, and the whole network is trained end-to-end. The proposed method is extensively evaluated on the challenging KAIST multispectral pedestrian dataset and achieves state-of-the-art performance with high efficiency.
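
The attention module is specified fully in the article itself; purely as a non-authoritative illustration, the PyTorch sketch below shows one plausible way such a cross-modality channel attention block could be wired: global descriptors from both streams jointly predict channel-wise gates that recalibrate each modality before fusion. The class name, the reduction ratio, and fusion by concatenation are assumptions made for this sketch, not the authors' specification.

import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    # Hypothetical sketch: global features from the RGB and thermal
    # streams jointly predict channel-wise gates that recalibrate the
    # halfway feature maps of each modality before fusion.
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global descriptor per stream
        self.fc = nn.Sequential(             # joint embedding of both modalities
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2 * channels),
            nn.Sigmoid(),                     # channel-wise gates in (0, 1)
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = rgb.shape
        # Concatenate global descriptors so each gate sees both modalities.
        g = torch.cat([self.pool(rgb).view(b, c),
                       self.pool(thermal).view(b, c)], dim=1)
        gates = self.fc(g)
        w_rgb, w_thermal = gates[:, :c], gates[:, c:]
        # Recalibrate channel responses, then fuse by concatenation.
        rgb = rgb * w_rgb.view(b, c, 1, 1)
        thermal = thermal * w_thermal.view(b, c, 1, 1)
        return torch.cat([rgb, thermal], dim=1)

# Example: fuse 256-channel halfway feature maps from the two streams.
fuse = CrossModalityAttention(channels=256)
out = fuse(torch.randn(2, 256, 40, 32), torch.randn(2, 256, 40, 32))  # (2, 512, 40, 32)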

Journal Article Type Article
Acceptance Date Sep 25, 2018
Online Publication Date Sep 26, 2018
Publication Date Oct 2019
Deposit Date Dec 12, 2018
Journal Information Fusion
Print ISSN 1566-2535
Publisher Elsevier
Peer Reviewed Peer Reviewed
Volume 50
Pages 20-29
DOI https://doi.org/10.1016/j.inffus.2018.09.015
Keywords Signal Processing; Hardware and Architecture; Software; Information Systems
Public URL http://researchrepository.napier.ac.uk/Output/1434835