Memory Reduction using a Ring Abstraction over GPU RDMA for Distributed Quantum Monte Carlo Solver
Author: Eduardo D'Azevedo, Kevin Huck, Oscar Hernandez, Hartmut Kaiser, Weile Wei, Arghya Chatterjee
Language: English
Year of publication: 2021
Subject: FOS: Computer and information sciences; FOS: Physical sciences; Condensed Matter - Strongly Correlated Electrons (cond-mat.str-el); Condensed Matter - Materials Science (cond-mat.mtrl-sci); Condensed Matter - Superconductivity (cond-mat.supr-con); Computer Science - Distributed, Parallel and Cluster Computing (cs.DC); Ring (mathematics); Remote direct memory access; Computer science; Computation; Concurrency; Network interface; Parallel computing; Solver; Data structure; Overhead (computing)
Source: PASC
Description: Scientific applications that run on leadership computing facilities often cannot fit their leading science cases onto accelerator devices because of memory constraints (memory-bound applications). This work studies one such mission-critical US Department of Energy condensed matter physics application, the Dynamical Cluster Approximation (DCA++), and shows how its device-memory bottleneck was reduced by an effective "all-to-all" communication method: a ring communication algorithm (a minimal sketch of this exchange pattern follows the record). The implementation takes advantage of GPU acceleration and of remote direct memory access (RDMA) for fast data exchange between GPUs. The ring algorithm was further optimized with sub-ring communicators, which reduce communication overhead, and with multi-threaded support, which exposes more concurrency. Computation and communication were analyzed with the Autonomic Performance Environment for Exascale (APEX) profiling tool, and the paper discusses the resulting performance trade-offs of the ring implementation. The memory analysis shows that the allocation size of the most memory-intensive data structure per GPU is reduced to 1/p of its original size, where p is the number of GPUs in the ring communicator. The communication analysis suggests that the distributed Quantum Monte Carlo execution time grows linearly with sub-ring size, and that the cost of messages passing through the network interface connector can become a limiting factor.
Database: OpenAIRE
External link:
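The description above names the ring algorithm but not its mechanics. Below is a minimal, hypothetical MPI sketch of the exchange pattern it describes, not the DCA++ implementation: each rank keeps only a 1/p slice of the large data structure and forwards its current slice around the ring, so every rank eventually visits all p slices while never holding more than two at a time. With a CUDA-aware MPI the buffers could live in GPU memory and travel over RDMA, as in the paper; `SLICE_LEN` and `compute_on_slice` are illustrative assumptions, not names from the paper.

```cpp
// Sketch of a ring exchange over p ranks (SLICE_LEN and
// compute_on_slice are assumed names for illustration).
#include <mpi.h>
#include <cstdio>
#include <vector>

constexpr int SLICE_LEN = 1 << 20;  // elements per 1/p slice (assumed size)

// Placeholder for the QMC work that consumes one slice.
void compute_on_slice(const std::vector<double>& slice, int owner) {
    (void)slice;
    (void)owner;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, p = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    const int next = (rank + 1) % p;      // downstream neighbor in the ring
    const int prev = (rank - 1 + p) % p;  // upstream neighbor in the ring

    // Each rank allocates only its own slice plus one receive buffer,
    // instead of the full p-slice structure: the 1/p memory reduction.
    std::vector<double> current(SLICE_LEN, static_cast<double>(rank));
    std::vector<double> incoming(SLICE_LEN);

    int owner = rank;  // which rank's slice `current` currently holds
    for (int step = 0; step < p; ++step) {
        compute_on_slice(current, owner);
        if (step == p - 1) break;  // every slice has now been visited
        // Forward the slice downstream while receiving from upstream.
        // With a CUDA-aware MPI these buffers could be device pointers
        // and the transfer would go over GPU RDMA.
        MPI_Sendrecv(current.data(), SLICE_LEN, MPI_DOUBLE, next, 0,
                     incoming.data(), SLICE_LEN, MPI_DOUBLE, prev, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        current.swap(incoming);
        owner = (owner - 1 + p) % p;  // we now hold the upstream slice
    }

    if (rank == 0) {
        std::printf("ring exchange over %d ranks complete\n", p);
    }
    MPI_Finalize();
    return 0;
}
```

Double buffering keeps per-rank memory at two slices rather than p. The sub-ring communicators mentioned in the description could be formed by splitting this communicator (e.g., with MPI_Comm_split) into shorter independent rings, trading some data replication for fewer hops per exchange.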