Pushing the limits of RNN Compression
| Author | Ganesh Dasika, Jesse Beu, Matthew Mattina, Chu Zhou, Dibakar Gope, Urmish Thakker, Igor Fedorov |
|---|---|
| Year of publication | 2019 |
| Subject | FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Kronecker product; recurrent neural network; compression; resource-constrained inference; task analysis; algorithm; energy-efficient computing |
| Source | EMC2@NeurIPS |
| DOI | 10.1109/emc2-nips53020.2019.00012 |
| Description | Recurrent Neural Networks (RNNs) can be difficult to deploy on resource-constrained devices due to their size. As a result, there is a need for compression techniques that can significantly compress RNNs without negatively impacting task accuracy. This paper introduces a method to compress RNNs for resource-constrained environments using Kronecker products (KP). KPs can compress RNN layers by 16x to 38x with minimal accuracy loss. We show that KP can beat the task accuracy achieved by other state-of-the-art compression techniques (pruning and low-rank matrix factorization) across 4 benchmarks spanning 3 different applications, while simultaneously improving inference run-time. Comment: 6 pages. arXiv admin note: substantial text overlap with arXiv:1906.02876. (An illustrative sketch of KP compression follows the record below.) |
| Database | OpenAIRE |
| External link | |
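
As a rough illustration of the Kronecker-product idea described in the abstract, the sketch below shows how a single RNN weight matrix can be represented by two much smaller KP factors. The matrix size, the 16x16 factor shapes, and the use of NumPy are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of Kronecker-product (KP) weight compression.
# Assumptions (not from the paper): a 256x256 weight matrix split into two 16x16 factors.
import numpy as np

m, n, p, q = 16, 16, 16, 16            # W = A kron B has shape (m*p) x (n*q) = 256 x 256
A = np.random.randn(m, n)              # small KP factor (learned during training in practice)
B = np.random.randn(p, q)              # small KP factor

W = np.kron(A, B)                      # the full-size matrix the two factors represent

print(W.size, A.size + B.size)         # 65536 vs. 512 parameters -> 128x fewer in this toy case

# The product W @ x can be computed without ever materializing W,
# using the identity (A kron B) vec(X) = vec(B @ X @ A.T) with X of shape q x n.
x = np.random.randn(n * q)
X = x.reshape((n, q)).T                # "un-vec" x into a q x n matrix (column-major vec)
y = (B @ X @ A.T).T.reshape(-1)        # vec(B X A^T), equal to W @ x
assert np.allclose(W @ x, y)
```

Storing the two factors needs m*n + p*q values instead of m*p*n*q, which is presumably the source of the order-of-magnitude layer compression the abstract reports; the actual factor shapes used per benchmark in the paper may differ.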