Weight-Dependent Gates for Differentiable Neural Network Pruning

Authors: Weiqun Wu, Yun Li, Baoqun Yin, Chi Zhang, Haotian Yao, Xiangyu Zhang, Zechun Liu
Year of publication: 2020
Source: Computer Vision – ECCV 2020 Workshops (ECCV Workshops (5)), ISBN 9783030682378
DOI: 10.1007/978-3-030-68238-5_3
Description: In this paper, we propose a simple and effective network pruning framework that introduces novel weight-dependent gates to prune filters adaptively. We argue that the pruning decision should depend on the convolutional weights; in other words, it should be a learnable function of the filter weights. We therefore construct weight-dependent gates (W-Gates) to learn information from the filter weights and obtain binary filter gates that prune or keep filters automatically. To prune the network under a hardware constraint, we train a Latency Predict Net (LPNet) to estimate the hardware latency of candidate pruned networks. Based on the proposed LPNet, we can optimize W-Gates and the pruning ratio of each layer under a latency constraint. The whole framework is differentiable and can be optimized by gradient-based methods to achieve a compact network with a better trade-off between accuracy and efficiency. We demonstrate the effectiveness of our method on ResNet34 and ResNet50, achieving up to 1.33/1.28 higher Top-1 accuracy with lower hardware latency on ImageNet. Compared with state-of-the-art pruning methods, our method achieves superior performance. (This work was done while Yun Li, Weiqun Wu, and Zechun Liu were interns at Megvii Inc. (Face++).)
Database: OpenAIRE
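
The abstract does not spell out the exact W-Gates architecture, so the following is only a minimal sketch of the core idea it describes: per-filter binary gates computed as a learnable function of the convolution weights, made differentiable with a straight-through estimator. The module names, layer sizes, thresholding choice, and use of PyTorch are assumptions for illustration; the LPNet latency term and layer-wise pruning-ratio optimization are omitted.

# Hypothetical sketch of weight-dependent gating (not the authors' exact design).
import torch
import torch.nn as nn


class BinaryGateSTE(torch.autograd.Function):
    """Hard 0/1 gate in the forward pass, straight-through gradient in backward."""

    @staticmethod
    def forward(ctx, scores):
        return (scores > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Pass gradients through unchanged so the gate function stays trainable.
        return grad_output


class WGates(nn.Module):
    """Learns binary per-filter gates as a function of the filter weights."""

    def __init__(self, out_channels, in_channels, kernel_size, hidden=16):
        super().__init__()
        filter_dim = in_channels * kernel_size * kernel_size
        # Small score network: one scalar score per filter (architecture assumed).
        self.score_net = nn.Sequential(
            nn.Linear(filter_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, conv_weight):
        # conv_weight: [out_channels, in_channels, k, k]
        flat = conv_weight.flatten(1)               # one row per filter
        scores = self.score_net(flat).squeeze(-1)   # [out_channels]
        return BinaryGateSTE.apply(scores)          # binary keep/prune gates


# Usage: gate the output channels of a convolution with weight-dependent gates.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
wgates = WGates(out_channels=128, in_channels=64, kernel_size=3)
x = torch.randn(2, 64, 32, 32)
gates = wgates(conv.weight)                         # [128] binary gates
y = conv(x) * gates.view(1, -1, 1, 1)               # zero out pruned filters

Because the hard gate is bypassed by the straight-through estimator in the backward pass, the whole pipeline remains end-to-end differentiable, which matches the abstract's claim that the framework can be optimized with gradient-based methods.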