Scaling Distributed Deep Learning Workloads beyond the Memory Capacity with KARMA
Author: | Aleksandr Drozd, Jens Domke, Truong Thao Nguyen, Lingqi Zhang, Ryousei Takano, Haoyu Zhang, Mohamed Wahib, Satoshi Matsuoka |
Language: | English |
Year of publication: | 2020 |
Subject: | FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Distributed, Parallel, and Cluster Computing (cs.DC); Deep learning; Data parallelism; Out-of-core algorithm; Memory management; Concurrency; Concurrency control; Pipeline (computing); Parallel computing; Supercomputer; Data modeling; Speedup; Artificial intelligence |
Source: | SC |
Description: | The dedicated memory of hardware accelerators can be insufficient to store all weights and/or intermediate states of large deep learning models. Although model parallelism is a viable approach to reducing memory pressure, it requires significant source-code modification and algorithm-specific considerations. An alternative is to use out-of-core methods instead of, or in addition to, data parallelism. We propose a performance model based on a concurrency analysis of out-of-core training behavior, and derive a strategy that combines layer swapping and redundant recomputing (see the sketch below the record). We achieve an average 1.52x speedup over state-of-the-art out-of-core methods across six different models. We also introduce the first method for the challenging problem of out-of-core multi-node training, by carefully pipelining gradient exchanges and performing the parameter updates on the host. Our data-parallel out-of-core solution can outperform complex hybrid model parallelism in training large models, e.g., Megatron-LM and Turing-NLG. Published in the ACM/IEEE Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '20). |
Database: | OpenAIRE |
External link: |
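The description combines two memory-reduction mechanisms: swapping layer state out to host memory and redundantly recomputing activations in the backward pass. The following is a minimal sketch of those two ideas, assuming PyTorch; it is not the authors' KARMA implementation, and the names `RecomputedBlock` and `make_block` are illustrative. `torch.autograd.graph.save_on_cpu` and `torch.utils.checkpoint.checkpoint` are standard PyTorch utilities that approximate, respectively, layer swapping and redundant recomputation.

```python
# Minimal sketch (NOT the authors' KARMA code) of the two mechanisms
# the description combines, assuming PyTorch:
#   - "layer swapping": tensors saved for the backward pass are offloaded
#     to host memory (torch.autograd.graph.save_on_cpu) and copied back
#     on demand during backward;
#   - "redundant recomputing": wrapped blocks discard their intermediate
#     activations and recompute them during backward
#     (torch.utils.checkpoint.checkpoint).
# KARMA's performance-model-driven schedule for deciding which layers to
# swap vs. recompute is not reproduced here.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class RecomputedBlock(nn.Module):
    """Wraps a sub-network so its activations are recomputed on backward."""

    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frees intermediate activations after forward; recomputes them
        # when gradients are needed.
        return checkpoint(self.block, x, use_reentrant=False)


def make_block() -> nn.Module:
    return nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())


# Illustrative split: recompute the first half of the stack, swap the rest.
model = nn.Sequential(
    *[RecomputedBlock(make_block()) for _ in range(4)],
    *[make_block() for _ in range(4)],
)

x = torch.randn(32, 1024)  # in practice, model and data live on the accelerator

# Activations saved for backward inside this context are kept in host
# memory rather than accelerator memory (pinning is skipped without CUDA).
with torch.autograd.graph.save_on_cpu(pin_memory=True):
    loss = model(x).sum()

loss.backward()  # swapped tensors copied back; checkpointed ones recomputed
```

The paper's multi-node contribution, pipelining gradient exchanges and performing parameter updates on the host, concerns the overlap of communication with computation and is not captured by this sketch.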