Compressing Gradients by Exploiting Temporal Correlation in Momentum-SGD
Author: Stark C. Draper, Tharindu Adikari
Year: 2021
Subject: FOS: Computer and information sciences; Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)
Source: IEEE Journal on Selected Areas in Information Theory 2:970-986
ISSN: 2641-8770
DOI: 10.1109/jsait.2021.3103494
Description: An increasing bottleneck in decentralized optimization is communication. Larger models and growing datasets mean that decentralizing computation is important and that the amount of information exchanged is growing quickly. While compression techniques have been introduced to cope with the latter, none has considered leveraging the temporal correlations that exist in consecutive vector updates. An important example is distributed momentum-SGD, where temporal correlation is enhanced by the low-pass-filtering effect of applying momentum. In this paper we design and analyze compression methods that exploit temporal correlation in systems both with and without error-feedback. Experiments with the ImageNet dataset demonstrate that our proposed methods offer a significant reduction in communication rate at only a negligible increase in computational complexity. We further analyze the convergence of SGD when compression is applied with error-feedback. In the literature, convergence guarantees are developed only for compressors that provide error bounds point-wise, i.e., for each input to the compressor. In contrast, many important codes (e.g., rate-distortion codes) provide error bounds only in expectation, which is a weaker and therefore more general assumption. In this paper we prove the convergence of SGD under this expected-error assumption by establishing a bound on the minimum gradient norm.
Comment: Presented in part at the 11th International Symposium on Topics in Coding (ISTC), Montreal, QC, Canada, August 2021; accepted for publication in the IEEE Journal on Selected Areas in Information Theory (JSAIT), Volume 2, Issue 3 (2021), https://ieeexplore.ieee.org/document/9511618
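To make the setup in the description concrete, below is a minimal single-worker sketch of momentum-SGD with compression and error-feedback. The `top_k` compressor, the function names, and all hyperparameters are illustrative assumptions, not the paper's method; the point is only to show where the momentum buffer (whose temporal correlation the paper exploits) and the error-feedback memory enter the update.

```python
import numpy as np

def top_k(v, k):
    # Generic point-wise compressor used only as a stand-in:
    # keep the k largest-magnitude entries of v, zero the rest.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def compressed_momentum_sgd(grad_fn, x0, lr=0.1, beta=0.9, k=10, steps=200):
    # Sketch of momentum-SGD with compression and error-feedback.
    # The momentum buffer m is a low-pass-filtered version of the gradient
    # sequence, so consecutive values of m are temporally correlated --
    # the property the paper's compressors exploit.
    x = x0.astype(float).copy()
    m = np.zeros_like(x)  # momentum buffer
    e = np.zeros_like(x)  # error-feedback memory (compression residual)
    for _ in range(steps):
        g = grad_fn(x)
        m = beta * m + g      # momentum update (low-pass filter)
        c = top_k(m + e, k)   # compress momentum plus carried-over error
        e = (m + e) - c       # remember what compression discarded
        x = x - lr * c        # apply only the transmitted update
    return x

# Usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x.
x_final = compressed_momentum_sgd(lambda x: x, np.ones(100), k=10)
print(np.linalg.norm(x_final))  # should shrink toward 0 given enough steps
```

In a distributed setting, each worker would transmit only the compressed vector `c`, while `e` stays local; error-feedback ensures that information discarded by the compressor is reapplied in later rounds rather than lost.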
Database: OpenAIRE
External link: