ADA-Tucker: Compressing Deep Neural Networks via Adaptive Dimension Adjustment Tucker Decomposition
Author: Chao Zhang, Fangyin Wei, Zhouchen Lin, Zhisheng Zhong
Language: English
Year of publication: 2019
Subject: FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); machine learning; deep learning; convolutional neural networks; network compression; data compression; low-rank models; Tucker decomposition; tensors; transformation matrices; artificial intelligence
Description: Despite the recent success of deep learning models in numerous applications, their widespread use on mobile devices is seriously impeded by storage and computational requirements. In this paper, we propose a novel network compression method called Adaptive Dimension Adjustment Tucker decomposition (ADA-Tucker). With learnable core tensors and transformation matrices, ADA-Tucker performs Tucker decomposition of arbitrary-order tensors. Furthermore, we propose that weight tensors with proper order and balanced dimensions are easier to compress. This high flexibility in decomposition choice distinguishes ADA-Tucker from all previous low-rank models. To compress further, we extend the model to Shared Core ADA-Tucker (SCADA-Tucker) by defining a single core tensor shared across all layers. Our methods require no overhead for recording indices of non-zero elements. Without loss of accuracy, our methods reduce the storage of LeNet-5 and LeNet-300 by ratios of 691 and 233, respectively, significantly outperforming the state of the art. The effectiveness of our methods is also evaluated on three other benchmarks (CIFAR-10, SVHN, ILSVRC12) and on modern deep networks (ResNet, Wide-ResNet). 25 pages, 12 figures.
Database: OpenAIRE
External link:
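The description above rests on the Tucker reconstruction step: a full weight tensor W is approximated by a small core tensor G multiplied along each mode by a factor (transformation) matrix, W ≈ G ×₁ U⁽¹⁾ ×₂ ⋯ ×_d U⁽ᵈ⁾. The NumPy sketch below illustrates only this generic step; it is not the authors' implementation, and the reshape, shapes, and ranks are made up for illustration.

```python
import numpy as np

def tucker_reconstruct(core, factors):
    """Rebuild a full tensor from a Tucker core and one factor matrix per mode.

    core    : ndarray with shape (r_1, ..., r_d)
    factors : list of d matrices; factors[k] has shape (n_k, r_k)
    returns : ndarray with shape (n_1, ..., n_d)
    """
    out = core
    for mode, U in enumerate(factors):
        # Mode-k product: contract axis `mode` of `out` with the columns of U,
        # then move the new n_k-sized axis back into position `mode`.
        out = np.tensordot(U, out, axes=(1, mode))
        out = np.moveaxis(out, 0, mode)
    return out

# Hypothetical example: a 4-D conv kernel of shape (64, 32, 3, 3) is reshaped
# into a more balanced 3-D tensor before decomposition (the "dimension
# adjustment" idea in the description); the ranks (10, 10, 4) are illustrative.
w = np.random.randn(64, 32, 3, 3)
w3 = w.reshape(48, 48, 8)                # 48 * 48 * 8 == 64 * 32 * 3 * 3

core = np.random.randn(10, 10, 4)        # core tensor (learnable in ADA-Tucker)
factors = [np.random.randn(48, 10),      # transformation matrices (learnable)
           np.random.randn(48, 10),
           np.random.randn(8, 4)]

approx = tucker_reconstruct(core, factors).reshape(64, 32, 3, 3)

# Storage: 10*10*4 + 2*48*10 + 8*4 = 1392 parameters vs. 18432 originally,
# i.e. roughly a 13x reduction for this (made-up) choice of ranks.
```

Per the description, ADA-Tucker treats the core tensor and transformation matrices as learnable parameters rather than computing them by a fixed factorization, and SCADA-Tucker shares one core tensor across all layers.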