Superdense-scale network for semantic segmentation

Authors: Li, Zhiqiang; Jiang, Jie; Chen, Xi*; Qi, Honggang; Li, Qingli; Liu, Jiapeng; Zheng, Laiwen; Liu, Min; Zhang, Yundong
Source: NEUROCOMPUTING, 2022, 504: 30-41.
DOI: 10.1016/j.neucom.2022.06.103

Summary

Great progress has been made in semantic segmentation based on deep convolutional neural networks. However, semantic segmentation in complex scenes remains challenging due to the problem of large-scale variation. To handle this problem, existing methods usually employ multiple receptive fields to capture multiscale features. Some works have verified that the denser the set of receptive fields (scales), the easier it is to address the large-scale variation problem. To obtain denser scales, we propose a superdense-scale network (SDSNet). Specifically, we design a simple yet effective structure named the parallel-serial structure of atrous convolutions (PSSAC), in which superdense-scale high-level features are captured by explicitly adjusting each neuron's receptive field. The PSSAC improves over ASPP and DenseASPP by employing exponentially increasing scales with serially connected multiple parallel structures. To extract more accurate features, we construct an SDSNet consisting of a modified aligned Xception71 backbone followed by a PSSAC. Extensive semantic segmentation experiments are conducted to evaluate our SDSNet on three datasets, namely, Cityscapes, PASCAL VOC 2012, and ADE20K. Experimental results show that our SDSNet achieves state-of-the-art performance.
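The scale-density argument above can be illustrated with receptive-field arithmetic. For a stack of k×k atrous convolutions with stride 1, the effective receptive field is 1 + Σ(k−1)·d over the stack, so serially connecting stages that each offer several parallel dilation rates yields one distinct scale per branch combination. The sketch below uses hypothetical dilation rates chosen only for illustration; the paper's actual PSSAC configuration may differ.

```python
# Sketch (not the authors' code): how serially stacked parallel atrous
# branches produce a dense set of receptive fields (scales).
from itertools import product

def receptive_field(dilations, k=3):
    # Effective receptive field of stacked k x k atrous convolutions
    # with stride 1: RF = 1 + sum((k - 1) * d) over the stack.
    return 1 + sum((k - 1) * d for d in dilations)

# Hypothetical exponentially increasing dilation rates per serial stage;
# each stage offers two parallel branches (dilation 1 or 2**(stage+1)).
stages = [(1, 2), (1, 4), (1, 8)]

# Every path through the parallel branches yields one receptive field,
# so 6 branches combine into 8 distinct scales.
scales = sorted({receptive_field(path) for path in product(*stages)})
print(scales)  # -> [7, 9, 13, 15, 21, 23, 27, 29]
```

A purely parallel structure such as ASPP with the same three dilation rates would cover only one scale per branch; the serial composition is what multiplies the number of achievable scales.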

  • Institution
Graduate School of the Chinese Academy of Sciences
