ParaX: Bandwidth-Efficient Instance Assignment for DL on Multi-NUMA Many-Core CPUs

Author: Zhang, Yiming; Yin, Lujia; Li, Dongsheng; Peng, Yuxing; Lu, Kai
Source: IEEE Transactions on Computers; Nov 2022, Vol. 71, Issue 11, p. 3032-3046, 15p
Abstract: Commercial clouds now heavily use CPUs for DL (deep learning) because large numbers of CPUs would otherwise sit idle during off-peak periods. Following this trend, CPU vendors have not only released high-performance many-core CPUs but also developed efficient math kernel libraries. However, current DL platforms cannot scale well to a large number of CPU cores, making many-core CPUs inefficient for DL computation. We analyze the memory access patterns of various layers and identify the root cause of the low scalability: the per-layer barriers implicitly imposed by current platforms, which assign one single instance (i.e., one batch of input data) to an entire CPU. These barriers cause severe memory bandwidth contention and CPU starvation in access-intensive layers (such as activation and BN). This paper presents a novel approach called ParaX, which boosts the performance of DL on multi-NUMA (non-uniform memory access) many-core CPUs by effectively alleviating bandwidth contention and CPU starvation. Our key idea is to assign one instance to each CPU core instead of to the entire CPU, which removes the per-layer barriers on the executions of the many cores. ParaX designs an ultralight scheduling policy that sufficiently overlaps the access-intensive layers with the compute-intensive ones to avoid contention, and proposes a NUMA-aware gradient server mechanism for training that leverages shared memory to substantially reduce the overhead of per-iteration parameter synchronization. We have implemented ParaX on MXNet. Extensive evaluation on a two-NUMA Intel 8280 CPU shows that ParaX significantly improves the training/inference throughput for all tested models (for image recognition and natural language processing) by $1.73\times \sim 2.93\times$. [ABSTRACT FROM AUTHOR]
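
To make the instance-assignment idea concrete, the following is a minimal sketch, not the authors' implementation (which is built on MXNet): it shows one-instance-per-core execution with a single shared-memory gradient buffer, written in plain Python/NumPy. The names NUM_CORES, GRAD_DIM, and worker() are hypothetical, os.sched_setaffinity (Linux-only) stands in for whatever core pinning ParaX actually uses, and a random vector stands in for a real backward pass.

# Minimal sketch (not the authors' code): one-instance-per-core assignment
# with a shared-memory gradient buffer. NUM_CORES, GRAD_DIM and worker()
# are hypothetical names; plain NumPy stands in for MXNet kernels, and
# os.sched_setaffinity (Linux-only) stands in for ParaX's core pinning.
import os
import numpy as np
from multiprocessing import Process, Barrier
from multiprocessing.shared_memory import SharedMemory

NUM_CORES = 4    # cores on one NUMA node (assumption)
GRAD_DIM = 1024  # flattened gradient size (assumption)

def worker(core_id, shm_name, barrier):
    os.sched_setaffinity(0, {core_id})  # pin this process to a single core
    shm = SharedMemory(name=shm_name)
    grads = np.ndarray((NUM_CORES, GRAD_DIM), dtype=np.float64, buffer=shm.buf)
    # Each core runs its own instance end to end: access-intensive layers on
    # one core can overlap compute-intensive layers on another, with no
    # per-layer barrier across cores.
    local_grad = np.random.rand(GRAD_DIM)  # stand-in for a real backward pass
    grads[core_id, :] = local_grad         # publish gradient via shared memory
    barrier.wait()                         # synchronize once per iteration only
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=NUM_CORES * GRAD_DIM * 8)
    barrier = Barrier(NUM_CORES + 1)  # all workers plus the aggregator
    procs = [Process(target=worker, args=(c, shm.name, barrier))
             for c in range(NUM_CORES)]
    for p in procs:
        p.start()
    barrier.wait()  # wait until every per-core gradient has been written
    grads = np.ndarray((NUM_CORES, GRAD_DIM), dtype=np.float64, buffer=shm.buf)
    avg_grad = grads.mean(axis=0)  # gradient-server-style aggregation
    for p in procs:
        p.join()
    shm.close()
    shm.unlink()

The single end-of-iteration barrier is the contrast the abstract draws: workers synchronize once per iteration through shared memory, loosely mirroring the NUMA-aware gradient server, rather than after every layer.
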
Database: Complementary Index