Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks
Author: | Ioan Andrei Barsan, Raquel Urtasun, Jashan Shewakramani, Julieta Martinez, Ting Wei Liu, Wenyuan Zeng |
---|---|
Language: | English |
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences
Artificial neural network business.industry Computer science Computer Vision and Pattern Recognition (cs.CV) Code word Vector quantization Computer Science - Computer Vision and Pattern Recognition Machine Learning (stat.ML) Filter (signal processing) Data_CODINGANDINFORMATIONTHEORY Object detection Dimension (vector space) Convolutional code Statistics - Machine Learning Artificial intelligence business Quantization (image processing) Algorithm |
Source: | CVPR |
Description: | Compressing large neural networks is an important step for their deployment on resource-constrained computational platforms. In this context, vector quantization is an appealing framework that expresses multiple parameters using a single code, and has recently achieved state-of-the-art network compression on a range of core vision and natural language processing tasks. Key to the success of vector quantization is deciding which parameter groups should be compressed together. Previous work has relied on heuristics that group the spatial dimension of individual convolutional filters, but a general solution remains unaddressed. This is desirable for pointwise convolutions (which dominate modern architectures), linear layers (which have no notion of spatial dimension), and convolutions (when more than one filter is compressed to the same codeword). In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function. We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress. Finally, we rely on an annealed quantization algorithm to better compress the network and achieve higher final accuracy. We show results on image classification, object detection, and segmentation, reducing the gap with the uncompressed model by 40 to 70% with respect to the current state of the art. CVPR 21 Oral |
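The key observation in the abstract — that the weights of two adjacent layers can be permuted while the network still expresses the same function — can be checked directly. The following is a minimal sketch of that invariance for two fully connected layers with a ReLU between them (a simplified illustration, not the authors' implementation or their permutation-search procedure): permuting the rows of the first weight matrix and the corresponding columns of the second leaves the output unchanged, because ReLU acts elementwise.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, o = 4, 6, 3                   # input, hidden, and output sizes (arbitrary)
W1 = rng.standard_normal((h, d))    # first layer: maps d -> h
W2 = rng.standard_normal((o, h))    # second layer: maps h -> o
x = rng.standard_normal(d)

relu = lambda z: np.maximum(z, 0.0)
y = W2 @ relu(W1 @ x)               # original network output

perm = rng.permutation(h)           # any permutation of the hidden units
W1p = W1[perm]                      # permute rows of W1 (reorders hidden units)
W2p = W2[:, perm]                   # permute columns of W2 to undo the reorder
y_perm = W2p @ relu(W1p @ x)

# Same function, different weight layout: the outputs match exactly.
assert np.allclose(y, y_perm)
```

This degree of freedom is what the paper exploits: since any permutation yields a functionally identical network, one can search for the permutation whose weight groups are easiest to vector-quantize.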
Database: | OpenAIRE |
External link: |