Abstract

Although convolutional neural networks (CNNs) show great ability in image classification, improving the performance of shallow networks remains challenging. Adopting more convolution kernels increases the redundancy of the network. To alleviate this defect, we propose two methods, Weight Correlation Reduction (WCR) and Features Normalization (FN), to boost the performance of shallow networks. The former eliminates weight redundancy, while the latter increases the sparsity of the learned deep features. On the CIFAR-10 and STL-10 benchmarks, accuracy improves by 2.29% and 4.79%, respectively, for shallow networks, which demonstrates the effectiveness of the proposed methods.
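To make the idea of reducing weight correlation concrete, below is a minimal sketch of one plausible form of such a penalty: the sum of squared pairwise cosine similarities between flattened convolution kernels. This is an illustrative assumption, not necessarily the paper's exact WCR formulation; the function name `correlation_penalty` and the toy kernels are hypothetical.

```python
import math

def correlation_penalty(kernels):
    """Sum of squared off-diagonal cosine similarities between
    flattened kernels. Highly correlated (redundant) kernels raise
    the penalty; orthogonal kernels contribute nothing.
    (Hypothetical sketch of a WCR-style regularizer.)"""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return num / (na * nb)
    total = 0.0
    n = len(kernels)
    for i in range(n):
        for j in range(i + 1, n):
            total += cos(kernels[i], kernels[j]) ** 2
    return total

# Two identical kernels are fully redundant; orthogonal ones are not.
print(correlation_penalty([[1.0, 0.0], [1.0, 0.0]]))  # 1.0
print(correlation_penalty([[1.0, 0.0], [0.0, 1.0]]))  # 0.0
```

In training, a term like this would be added to the classification loss with a small weighting coefficient, pushing kernels toward decorrelated directions.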

  • Affiliation
    Jiangsu University of Science and Technology