Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-Enabled Systems
Author: | Hari Subramoni, Dip Sankar Banerjee, Khaled Hamidouche, Dhabaleswar K. Panda, Akshay Venkatesh, C-H. Chu |
Year of publication: | 2016 |
Subject: |
Unpacking, distributed computing, computer science, energy, engineering and technology, GPU cluster, parallel computing, execution time, bottleneck, CUDA, electrical/electronic/information engineering, callback, general-purpose computing on graphics processing units, massively parallel |
Source: | IPDPS |
Description: | GPU accelerators are widely used in HPC clusters due to their massive parallelism and high throughput-per-watt. Data movement remains the major bottleneck on GPU clusters, even more so when the data is non-contiguous, which is common in scientific applications. CUDA-Aware MPI libraries optimize non-contiguous data movement using latency-oriented techniques, such as GPU kernels that accelerate the packing/unpacking operations. Although these designs optimize the latency of a single operation, their inherent restrictions limit their efficiency for throughput-oriented patterns. Indeed, none of the existing designs fully exploits the massive parallelism of GPUs to provide high throughput and efficient resource utilization by enabling maximal overlap. In this paper, we propose novel designs for CUDA-Aware MPI libraries that achieve efficient GPU resource utilization and maximal overlap between CPUs and GPUs for non-contiguous data processing and movement. The proposed designs take advantage of several CUDA features, such as Hyper-Q/multi-streams and callback functions, to deliver high performance and efficiency. To the best of our knowledge, this is the first study to provide high throughput and efficient resource utilization for non-contiguous MPI data processing and movement to and from GPUs. A performance evaluation of the proposed designs using DDTBench shows up to 54%, 67%, and 61% performance improvements on the SPECFEM3D_oc, SPECFEM3D_cm, and WRF_y_sa benchmarks, respectively, for intra-node inter-GPU ping-pong experiments. The proposed designs also deliver up to 33% improvement in total execution time over the existing designs for a HaloExchange-based application kernel that models the communication pattern of the MeteoSwiss weather forecasting model, run on 32 GPU nodes of the Wilkes GPU cluster. |
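The overlap idea described above (multi-stream packing kernels combined with host callbacks) can be illustrated with a minimal CUDA sketch. This is not the paper's implementation: the kernel `pack_strided`, the callback `chunk_ready`, and the driver `pack_and_move` are hypothetical names, and a simple strided-vector datatype is assumed. The sketch pipelines a non-contiguous buffer in chunks over several streams so that packing of one chunk overlaps the device-to-host copy of another, with a callback signalling when each chunk is ready to hand to the network engine.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical pack kernel: gathers `count` elements with stride `stride`
// from `src` into the contiguous staging buffer `dst`.
__global__ void pack_strided(const double *src, double *dst,
                             size_t count, size_t stride) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        dst[i] = src[i * stride];
}

// Host callback fired once a chunk's pack + copy have completed on its
// stream; a CUDA-Aware MPI library would enqueue the chunk for sending here.
void CUDART_CB chunk_ready(void *arg) {
    printf("chunk %zu ready for send\n", (size_t)arg);
}

// Pipeline the datatype over NSTREAMS streams (Hyper-Q lets the packing
// kernel of chunk k+1 run while chunk k's copy is still in flight).
void pack_and_move(const double *d_src, double *d_stage, double *h_stage,
                   size_t count, size_t stride, int nchunks) {
    const int NSTREAMS = 4;
    cudaStream_t s[NSTREAMS];
    for (int i = 0; i < NSTREAMS; ++i)
        cudaStreamCreate(&s[i]);

    size_t chunk = (count + nchunks - 1) / nchunks;
    for (int k = 0; k < nchunks; ++k) {
        cudaStream_t st = s[k % NSTREAMS];
        size_t off = (size_t)k * chunk;
        if (off >= count) break;
        size_t n = (off + chunk > count) ? count - off : chunk;

        pack_strided<<<(unsigned)((n + 255) / 256), 256, 0, st>>>(
            d_src + off * stride, d_stage + off, n, stride);
        cudaMemcpyAsync(h_stage + off, d_stage + off, n * sizeof(double),
                        cudaMemcpyDeviceToHost, st);
        cudaLaunchHostFunc(st, chunk_ready, (void *)(size_t)k);
    }
    for (int i = 0; i < NSTREAMS; ++i) {
        cudaStreamSynchronize(s[i]);
        cudaStreamDestroy(s[i]);
    }
}
```

A single-stream design would serialize pack, copy, and send; distributing chunks round-robin over streams is what allows the GPU's SMs and copy engines to stay busy simultaneously, which is the source of the throughput gains the abstract reports.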
Database: | OpenAIRE |
External link: |