MuTrans: Multiple Transformers for Fusing Feature Pyramid on 2D and 3D Object Detection

Authors: Xie, Bangquan; Yang, Liang*; Wei, Ailin; Weng, Xiaoxiong; Li, Bing
Source: IEEE Transactions on Image Processing, 2023, 32: 4407-4415.
DOI:10.1109/TIP.2023.3299190

Abstract

One of the major components of a neural network, the feature pyramid plays a vital part in perception tasks such as object detection in autonomous driving. However, fusing multi-level and multi-sensor feature pyramids for object detection remains a challenge. This paper proposes a simple yet effective framework named MuTrans (Multiple Transformers) to fuse feature pyramids in a single-stream 2D detector or a two-stream 3D detector. MuTrans, built on an encoder-decoder architecture, focuses on the significant features via multiple Transformers. The MuTrans encoder uses three innovative self-attention mechanisms: Spatial-wise BoxAlign attention (SB) for low-level spatial locations, Context-wise Affinity attention (CA) for high-level context information, and high-level attention for multi-level features. The MuTrans decoder then processes these significant proposals, including the RoI and context affinity. In addition, the Low- and High-level Fusion (LHF) in the encoder reduces the number of computational parameters, and Pre-LN is utilized to accelerate training convergence. LHF and Pre-LN are shown to reduce self-attention's computational complexity and to mitigate slow training convergence. Our results demonstrate that MuTrans achieves higher detection accuracy than the baseline method, particularly in small object detection: 2.1 points higher on the AP(S) index on MS-COCO 2017 with a ResNeXt-101 backbone, 2.18 points higher 3D detection accuracy (moderate difficulty) for the small-object pedestrian class on KITTI, and a 6.85-point higher RC index (Town05 Long) on the CARLA urban driving simulator platform.
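The Pre-LN placement mentioned in the abstract is a well-known Transformer variant: layer normalization is applied before each sublayer rather than after the residual addition, which leaves an identity path through the residual connection and is widely reported to stabilize and speed up training. The following is a minimal NumPy sketch of the two orderings (the function names and the zero-sublayer check are illustrative, not from the paper):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension to zero mean, unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_ln_block(x, sublayer):
    # Post-LN (original Transformer): normalize AFTER the residual add.
    return layer_norm(x + sublayer(x))

def pre_ln_block(x, sublayer):
    # Pre-LN (the ordering the abstract credits with faster convergence):
    # normalize BEFORE the sublayer, so the residual path stays an identity.
    return x + sublayer(layer_norm(x))
```

One way to see the difference: with a sublayer that outputs zero, a Pre-LN block passes its input through unchanged, while a Post-LN block still renormalizes it, so gradients through a stack of Pre-LN blocks always have a direct identity path.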

Full Text