No Fine-Tuning, No Cry: Robust SVD for Compressing Deep Networks
Author: | Matan Weksler, Alaa Maalouf, Murad Tukan, Dan Feldman |
---|---|
Language: | English |
Year published: | 2021 |
Subjects: | Computer science; Matrix factorization; Matrix decomposition; Neural networks compression; Singular value decomposition; Artificial neural network; Neural networks; Neurons; Approximation algorithm; Computational geometry; Data compression; Robust low-rank approximation; Löwner ellipsoid; Embedding; Benchmark (computing); Algorithms; Chemical technology; Biochemistry; Analytical chemistry; Instrumentation; Electrical and electronic engineering; Atomic and molecular physics and optics |
Source: | Sensors (Basel, Switzerland), Vol. 21, Issue 16, Article 5599 (2021) |
ISSN: | 1424-8220 |
DOI: | 10.3390/s21165599 |
Description: | A common technique for compressing a neural network is to compute, via SVD, the k-rank ℓ2 approximation A_k of the matrix A ∈ R^(n×d) that corresponds to a fully connected layer (or embedding layer). Here, d is the number of input neurons in the layer, n is the number of neurons in the next layer, and A_k is stored in O((n+d)k) memory instead of O(nd). A fine-tuning step is then used to improve this initial compression. However, end users may not have the computational resources, time, or budget to run this fine-tuning stage. Furthermore, the original training set may not be available. In this paper, we provide an algorithm for compressing neural networks with a similar initial compression time to common techniques, but without the fine-tuning step. The main idea is to replace the k-rank ℓ2 approximation with an ℓp approximation, for p ∈ [1,2], which is known to be less sensitive to outliers but much harder to compute. Our main technical result is a practical and provable approximation algorithm that computes it for any p ≥ 1, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing the networks BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage. |
Database: | OpenAIRE |
External link: |
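
The description above refers to the standard rank-k ℓ2 (SVD) compression of a fully connected layer's weight matrix. The snippet below is a minimal NumPy sketch of that baseline only, not of the paper's robust ℓp algorithm; the layer sizes n, d, k and the function name `compress_layer_svd` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed NumPy-based) of the standard truncated-SVD baseline:
# a layer's weight matrix A (n x d) is replaced by two factors whose product
# is the best rank-k approximation of A under the l2 error.
import numpy as np

def compress_layer_svd(A: np.ndarray, k: int):
    """Return factors (L, R) with shapes (n, k) and (k, d),
    so that L @ R equals the rank-k l2-optimal approximation A_k of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U @ diag(s) @ Vt
    L = U[:, :k] * s[:k]                              # absorb singular values into the left factor
    R = Vt[:k, :]
    return L, R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, k = 512, 768, 64                            # hypothetical layer sizes
    A = rng.standard_normal((n, d))
    L, R = compress_layer_svd(A, k)
    print("original parameters:", A.size)             # n * d
    print("compressed parameters:", L.size + R.size)  # (n + d) * k
    print("relative l2 error:", np.linalg.norm(A - L @ R) / np.linalg.norm(A))
```

Storing the two factors takes (n + d)·k numbers instead of n·d, matching the O((n+d)k) memory bound in the description; the paper's contribution is to replace this ℓ2-optimal truncation with an outlier-robust ℓp approximation so that the usual fine-tuning step can be skipped.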