Characterizing and Understanding Distributed GNN Training on GPUs

Author: Haiyang Lin, Mingyu Yan, Xiaocheng Yang, Mo Zou, Wenming Li, Xiaochun Ye, Dongrui Fan
Publication year: 2022
Subject:
DOI: 10.48550/arxiv.2204.08150
Description: Graph neural networks (GNNs) have been demonstrated to be powerful models in many domains thanks to their effectiveness in learning over graphs. To scale GNN training to large graphs, a widely adopted approach is distributed training, which accelerates training by using multiple computing nodes. Maximizing performance is essential, yet the execution of distributed GNN training remains only preliminarily understood. In this work, we provide an in-depth analysis of distributed GNN training on GPUs, revealing several significant observations and offering useful guidelines for both software and hardware optimization.
Comment: To Appear in IEEE Computer Architecture Letters (CAL) 2022
Database: OpenAIRE
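
To make the distributed-training setting described in the abstract concrete, below is a minimal sketch of data-parallel GNN training across multiple GPUs using PyTorch's DistributedDataParallel. The two-layer GCN (`TwoLayerGCN`), the synthetic random graph, and the per-rank node split are illustrative placeholders of my own, not the systems or partitioning schemes studied in the paper.

```python
# Illustrative sketch only: data-parallel GNN training with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


class SimpleGCNLayer(nn.Module):
    """One GCN-style layer: aggregate neighbor features, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: dense adjacency (N x N), x: node features (N x in_dim)
        return torch.relu(self.linear(adj @ x))


class TwoLayerGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.layer1 = SimpleGCNLayer(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, out_dim)

    def forward(self, adj, x):
        h = self.layer1(adj, x)
        return self.out(adj @ h)


def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE; one process per GPU.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}")
    rank, world = dist.get_rank(), dist.get_world_size()

    # Synthetic graph replicated on every worker; a real system would partition it.
    num_nodes, feat_dim, num_classes = 1024, 64, 8
    torch.manual_seed(0)
    adj = (torch.rand(num_nodes, num_nodes) < 0.01).float().to(device)
    feats = torch.randn(num_nodes, feat_dim, device=device)
    labels = torch.randint(0, num_classes, (num_nodes,), device=device)

    # Each rank computes the loss on a disjoint slice of the training nodes.
    local_idx = torch.arange(rank, num_nodes, world, device=device)

    model = DDP(TwoLayerGCN(feat_dim, 128, num_classes).to(device),
                device_ids=[local_rank])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):
        opt.zero_grad()
        logits = model(adj, feats)                            # full-graph forward on each GPU
        loss = loss_fn(logits[local_idx], labels[local_idx])  # loss on local nodes only
        loss.backward()                                       # DDP all-reduces gradients here
        opt.step()
        if rank == 0:
            print(f"epoch {epoch}: loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

In this sketch, gradient synchronization via all-reduce inside `loss.backward()` is the main inter-GPU communication; the paper's analysis concerns how such communication and the graph-data movement behave in real distributed GNN training workloads.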