Efficient and Systematic Partitioning of Large and Deep Neural Networks for Parallelization
| Author: | Haoran Wang, Chong Li, Hongxing Wang, Sheng Yang, Sébastien Limet, Thibaut Tachon, Sophie Robert |
|---|---|
| Year: | 2021 |
| Source: | Euro-Par 2021: Parallel Processing (ISBN: 9783030856649) |
| DOI: | 10.1007/978-3-030-85665-6_13 |
| Description: | Deep neural networks (DNNs) play an increasingly important role in our daily lives. Since the size of DNNs continues to grow, it is increasingly important to train them efficiently by distributing computation across multiple connected devices. Training efficiency depends on the quality of the chosen parallelization strategy, and finding a good strategy for a DNN in a reasonable amount of time is not trivial. Previous research demonstrated that good parallelization strategies can be generated systematically; however, systematic partitioning still suffers from either heavy preprocessing or poor parallelization quality. In this paper, we take a purely symbolic analysis approach that leverages features of DNNs such as dense tensors and balanced computation. We propose the Flex-Edge Recursive Graph and the Double Recursive Algorithm, which limit parallelization strategy generation to linear complexity while preserving strategy quality. Experiments show that our solution reduces parallelization strategy generation time from hours to seconds while maintaining parallelization quality. (An illustrative sketch of the recursive idea appears after this record.) |
| Database: | OpenAIRE |
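The abstract names the Flex-Edge Recursive Graph and the Double Recursive Algorithm but, as an abstract, gives no construction; the paper (DOI above) defines both. Purely as a loose illustration of recursing on a device set with one local choice per operator per level, here is a minimal Python sketch. It is not the authors' algorithm: the `Op` type, the toy `redistribution_cost` model, and the axis-halving scheme are all assumptions made for this example.

```python
# Illustrative sketch only: recursively halve a device set over a list of
# DNN operators, picking a split axis per operator with a toy cost model.
# This is NOT the paper's Double Recursive Algorithm; every name and the
# cost model below are hypothetical.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Op:
    name: str
    dims: Tuple[int, ...]  # symbolic shape of the operator's main tensor

def redistribution_cost(op: Op, axis: int) -> int:
    # Toy cost model (assumption): splitting along `axis` keeps that axis
    # local, so potential redistribution scales with the other dimensions.
    cost = 1
    for i, d in enumerate(op.dims):
        if i != axis:
            cost *= d
    return cost

def partition(ops: List[Op], n_devices: int) -> Dict[str, List[int]]:
    """Recurse on the device set: each level halves it and records, per
    operator, the cheapest split axis. Work per level is linear in the
    number of operators, with log2(n_devices) levels overall."""
    if n_devices <= 1:
        return {op.name: [] for op in ops}
    level_choice: Dict[str, int] = {}
    halved_ops: List[Op] = []
    for op in ops:
        axis = min(range(len(op.dims)),
                   key=lambda a: redistribution_cost(op, a))
        level_choice[op.name] = axis
        halved_ops.append(Op(op.name, tuple(
            d // 2 if i == axis else d for i, d in enumerate(op.dims))))
    deeper = partition(halved_ops, n_devices // 2)
    return {n: [level_choice[n]] + deeper[n] for n in level_choice}

if __name__ == "__main__":
    ops = [Op("matmul1", (1024, 4096)), Op("matmul2", (4096, 1024))]
    # One chosen split axis per halving step, e.g.
    # {'matmul1': [1, 1], 'matmul2': [0, 0]}
    print(partition(ops, 4))
```

Because each halving step makes one local choice per operator, the search touches each operator only O(log n_devices) times; this mirrors the linear-complexity flavor of strategy generation claimed in the abstract, while the actual quality guarantees come from the paper's symbolic analysis, not from a toy cost model like this one.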