Domino: Eliminating Communication in LLM Training via Generic Tensor Slicing and Overlapping

Author: Wang, Guanhua; Zhang, Chengming; Shen, Zheyu; Li, Ang; Ruwase, Olatunji
Year of publication: 2024
Subject:
Document type: Working Paper
Description: Given the popularity of generative AI, Large Language Models (LLMs) often consume hundreds or thousands of GPUs to parallelize and accelerate training. Communication overhead becomes more pronounced when training LLMs at scale. To eliminate communication overhead in distributed LLM training, we propose Domino, which provides a generic scheme for hiding communication behind computation. By breaking the data dependencies of a single training batch into smaller independent pieces, Domino pipelines the training of these pieces and provides a generic strategy for fine-grained overlapping of communication and computation. Extensive results show that, compared with Megatron-LM, Domino achieves up to 1.3x speedup for LLM training on Nvidia DGX-H100 GPUs.
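The sketch below illustrates the general idea the abstract describes, hiding a tensor-parallel all-reduce behind the computation of the next independent chunk of a batch. It is a minimal, hypothetical example, not Domino's actual implementation: the function name `overlapped_forward`, the use of a plain `torch.nn.Linear` as a stand-in for a tensor-parallel layer, and the two-chunk split are all assumptions made for illustration. It assumes `torch.distributed` has already been initialized (e.g. via `torchrun`) with an NCCL backend.

```python
import torch
import torch.distributed as dist

def overlapped_forward(linear: torch.nn.Linear, batch: torch.Tensor, num_chunks: int = 2):
    """Compute a layer chunk-by-chunk, overlapping each chunk's all-reduce
    with the next chunk's computation (illustrative sketch only)."""
    chunks = batch.chunk(num_chunks, dim=0)   # break one batch into independent pieces
    outputs, handles = [], []
    for chunk in chunks:
        partial = linear(chunk)               # compute this chunk's partial result
        # Launch the all-reduce asynchronously so the next chunk's compute
        # can proceed while this chunk's communication is in flight.
        handle = dist.all_reduce(partial, op=dist.ReduceOp.SUM, async_op=True)
        outputs.append(partial)
        handles.append(handle)
    for handle in handles:                    # synchronize only at the end
        handle.wait()
    return torch.cat(outputs, dim=0)
```

The key design point mirrored here is that splitting the batch removes the data dependency between chunks, so communication for one chunk can run concurrently with computation for another instead of serializing after the full-batch forward pass.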
Comment: 12 pages
Database: arXiv