A Hitchhiker’s Guide On Distributed Training Of Deep Neural Networks
Authors: Kuntal Dey, Rajiv Ratn Shah, Manraj Singh Grover, Karanbir Chahal
Year of publication: 2020
Subjects: Computer Networks and Communications; Theoretical Computer Science; Hardware and Architecture; Software; Artificial Intelligence; Computer science; Machine learning; Deep learning; Deep neural networks; Stochastic gradient descent; Networking & telecommunications; Artificial intelligence & image processing
Source: Journal of Parallel and Distributed Computing, 137:65-76
ISSN: 0743-7315
DOI: 10.1016/j.jpdc.2019.10.004
Description: Deep learning has led to tremendous advancements in the field of Artificial Intelligence. One caveat, however, is the substantial amount of compute needed to train these deep learning models. Training a model on a benchmark dataset like ImageNet on a single machine with a modern GPU can take up to a week, while distributing the training across multiple machines has been observed to bring this time down drastically. Recent work has reduced ImageNet training time to as little as 4 minutes by using a cluster of 2048 GPUs. This paper surveys the various algorithms and techniques used in distributed training and presents the current state of the art for a modern distributed training framework. More specifically, we explore the synchronous and asynchronous variants of distributed Stochastic Gradient Descent, various all-reduce gradient aggregation strategies, and best practices for obtaining higher throughput and lower latency over a cluster, such as mixed precision training, large batch training, and gradient compression.
Database: OpenAIRE
External link:
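
The description above mentions synchronous distributed Stochastic Gradient Descent with all-reduce gradient aggregation. As a rough illustration of that pattern (a minimal sketch, not the paper's own implementation), the following uses PyTorch's torch.distributed API; the model, loss function, batch, and optimizer are placeholders, and the process group is assumed to be initialised elsewhere.

```python
# Minimal sketch of synchronous data-parallel SGD with all-reduce gradient
# averaging. Assumes the default process group has already been initialised
# on every worker, e.g.:
#   dist.init_process_group(backend="nccl", init_method="env://")
# The model, loss function, batch, and optimizer below are placeholders.
import torch
import torch.distributed as dist


def synchronous_sgd_step(model, loss_fn, batch, optimizer):
    inputs, targets = batch

    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()  # each worker computes gradients on its local shard of the batch

    # Synchronous gradient aggregation: sum each gradient tensor across all
    # workers with all-reduce, then divide by the world size to average.
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

    # Every worker now holds identical averaged gradients and applies the
    # same update, keeping the model replicas in sync.
    optimizer.step()
    return loss.item()
```

In practice, wrappers such as PyTorch's DistributedDataParallel fuse this averaging into the backward pass for efficiency; the explicit loop here is only meant to make the synchronous all-reduce step visible.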