Rina: Enhancing Ring-AllReduce with In-network Aggregation in Distributed Model Training

Author: Chen, Zixuan; Liu, Xuandong; Li, Minglin; Hu, Yinfan; Mei, Hao; Xing, Huifeng; Wang, Hao; Shi, Wanxin; Liu, Sen; Xu, Yang
Year of publication: 2024
Document type: Working Paper
Description: Parameter Server (PS) and Ring-AllReduce (RAR) are two widely used synchronization architectures in multi-worker Deep Learning (DL), also referred to as Distributed Deep Learning (DDL). However, PS suffers from the "incast" issue, while RAR struggles with problems caused by its long dependency chain. The emerging In-network Aggregation (INA) has been proposed to integrate with PS to mitigate its incast issue. However, such PS-based INA deploys poorly in an incremental fashion: it requires replacing all the switches to achieve significant performance improvement, which is not cost-effective. In this study, we incorporate INA capabilities into RAR, called RAR with In-Network Aggregation (Rina), to tackle both of the problems above. Rina features an agent-worker mechanism: when an INA-capable ToR switch is deployed, all workers in that rack run as one abstracted worker with the help of the agent, resulting in both excellent incremental deployment capability and better throughput. We conducted extensive testbed and simulation evaluations to substantiate the throughput advantages of Rina over existing DDL synchronization architectures. Compared with ATP, the state-of-the-art PS-based INA method, Rina achieves more than 50% higher throughput at the same hardware cost.
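To make the "long dependency chain" concrete: Ring-AllReduce runs 2*(N-1) strictly ordered communication steps over a ring of N workers (a reduce-scatter phase followed by an all-gather phase), and each step can begin only after the previous one has completed on every worker. The Python sketch below is a minimal single-process simulation of this standard algorithm; it is illustrative only, not code from the paper, and the names (ring_allreduce, worker_data) are hypothetical.

```python
# Illustrative single-process simulation of Ring-AllReduce (the standard
# algorithm, not the paper's Rina code). N workers each hold a vector;
# after 2*(N-1) dependent steps, every worker holds the element-wise sum.

def ring_allreduce(worker_data):
    """worker_data: list of N equal-length lists, one vector per worker.
    Returns N copies of the element-wise sum, one per worker."""
    n = len(worker_data)
    size = len(worker_data[0])
    assert size % n == 0, "vector length must divide evenly into N chunks"
    c = size // n
    # Split every worker's vector into N chunks.
    chunks = [[vec[j * c:(j + 1) * c] for j in range(n)] for vec in worker_data]

    # Phase 1: reduce-scatter. In step s, worker i sends chunk (i - s) % n to
    # its right neighbour, which adds it into its own copy. Every step waits
    # for the previous one on all workers -- this is the dependency chain.
    for s in range(n - 1):
        sends = [(i, (i - s) % n, list(chunks[i][(i - s) % n])) for i in range(n)]
        for i, j, payload in sends:  # apply all transfers "simultaneously"
            dst = (i + 1) % n
            chunks[dst][j] = [a + b for a, b in zip(chunks[dst][j], payload)]

    # Phase 2: all-gather. Worker i now owns the fully reduced chunk
    # (i + 1) % n and circulates it around the ring for another N-1 steps.
    for s in range(n - 1):
        sends = [(i, (i + 1 - s) % n, list(chunks[i][(i + 1 - s) % n])) for i in range(n)]
        for i, j, payload in sends:
            chunks[(i + 1) % n][j] = payload

    return [sum(w, []) for w in chunks]  # re-assemble each worker's vector

# Demo: 3 workers, 6-element gradients; every worker ends with the sum.
print(ring_allreduce([[1] * 6, [2] * 6, [3] * 6]))  # 3 x [6, 6, 6, 6, 6, 6]
```

Under the agent-worker mechanism described above, an entire INA-equipped rack would occupy a single position i in this ring rather than one position per machine, shortening the dependency chain.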
Comment: To appear in ICNP 2024. Preview version only
Database: arXiv