Decomposable-Net: Scalable Low-Rank Compression for Neural Networks
Author: | Yukinobu Sakata, Shuhei Nitta, Akiyuki Tanizawa, Taiji Suzuki, Atsushi Yaguchi |
Language: | English |
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Computer Science - Machine Learning; Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; Machine Learning (stat.ML); Statistics - Machine Learning; Basis (linear algebra); Rank (linear algebra); Range (mathematics); Artificial neural network; Computational complexity theory; Approximation error; Scalability; Singular value decomposition; Algorithm; Computer science |
Source: | IJCAI |
Description: | Compressing DNNs is important for real-world applications operating on resource-constrained devices. However, we typically observe drastic performance deterioration when changing the model size after training is completed. Therefore, retraining is required to recover the performance of compressed models suitable for different devices. In this paper, we propose Decomposable-Net (a network decomposable into any size), which allows flexible changes to model size without retraining. We decompose weight matrices in the DNN via singular value decomposition and adjust the ranks according to the target model size. Unlike existing low-rank compression methods that specialize the model to a fixed size, we propose a novel backpropagation scheme that jointly minimizes the losses of both full- and low-rank networks. This not only maintains the performance of the full-rank network {\it without retraining}, but also improves low-rank networks at multiple sizes. Additionally, we introduce a simple criterion for rank selection that effectively suppresses approximation error. In experiments on the ImageNet classification task, Decomposable-Net yields superior accuracy over a wide range of model sizes. In particular, with ResNet-50, Decomposable-Net achieves a top-1 accuracy of $73.2\%$ at $0.27\times$ MACs, compared to Tucker decomposition ($67.4\% / 0.30\times$), Trained Rank Pruning ($70.6\% / 0.28\times$), and universally slimmable networks ($71.4\% / 0.26\times$). 13 pages, 6 figures, 5 tables. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), pages 3249-3256 |
Database: | OpenAIRE |
External link: |
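The core operation the abstract describes, compressing a weight matrix via truncated SVD and choosing a rank to match a target model size, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which additionally trains full- and low-rank losses jointly); the matrix shapes, the rank `r=32`, and the helper name `low_rank_factors` are illustrative assumptions.

```python
import numpy as np

def low_rank_factors(W, r):
    """Approximate W (m x n) with factors A (m x r) and B (r x n)
    via truncated SVD, keeping the top-r singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]   # fold singular values into the left factor
    B = Vt[:r, :]
    return A, B

# Toy weight matrix standing in for a layer's weights (illustrative).
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
A, B = low_rank_factors(W, r=32)

# Parameter count drops from m*n to r*(m+n) when r is small:
full_params = W.size            # 256 * 512 = 131072
low_params = A.size + B.size    # 32 * (256 + 512) = 24576

# Relative approximation error; a smaller r shrinks the model
# further at the cost of a larger error.
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(full_params, low_params, err)
```

Varying `r` per layer is what lets a single decomposed network be resized to different budgets; the paper's contribution is keeping accuracy high across those sizes without retraining, which this sketch does not attempt.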