Strong scaling of general-purpose molecular dynamics simulations on GPUs
Author: | Jens Glaser, Trung Dac Nguyen, Joshua A. Anderson, Pak Lui, Filippo Spiga, Jaime A. Millan, David C. Morse, Sharon C. Glotzer |
---|---|
Language: | English |
Year of publication: | 2014 |
Subject: | Remote direct memory access; Computer science; Dissipative particle dynamics; FOS: Physical sciences; General Physics and Astronomy; Domain decomposition methods; Double-precision floating-point format; 02 engineering and technology; Parallel computing; Computational Physics (physics.comp-ph); Physics and Astronomy (all); 010402 general chemistry; 021001 nanoscience & nanotechnology; 01 natural sciences; 0104 chemical sciences; Computational science; CUDA; Hardware and Architecture; Code (cryptography); Node (circuits); 0210 nano-technology; Scaling; Physics - Computational Physics |
Description: | We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, arXiv:1308.5587). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, J. Comp. Phys. 117, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., J. Comp. Phys. 227, 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We demonstrate equivalent or superior scaling on up to 3,375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double-precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5x. (A usage sketch illustrating the domain-decomposition interface follows the record below.) |
Comment: | 30 pages, 14 figures |
Database: | OpenAIRE |
External link: |
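
The description names HOOMD-blue's MPI domain decomposition and its short-ranged pair forces. In HOOMD-blue 1.0 this is exposed through the ordinary Python job script: the same script runs on one GPU or many, and the library partitions the simulation box across MPI ranks internally. Below is a minimal sketch in the style of the `hoomd_script` API of that era; the system size and parameter values are illustrative assumptions, not the benchmark settings from the paper.

```python
# Minimal Lennard-Jones run in the style of the HOOMD-blue 1.0
# hoomd_script API. Parameter values are illustrative only, not
# the paper's benchmark configuration.
from hoomd_script import *

# Create a random initial configuration. Under MPI, HOOMD-blue
# decomposes the simulation box into one spatial domain per rank.
init.create_random(N=64000, phi_p=0.2)

# Short-ranged Lennard-Jones pair force with cutoff r_cut.
lj = pair.lj(r_cut=3.0)
lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)

# Standard NVT integration of all particles.
integrate.mode_standard(dt=0.005)
integrate.nvt(group=group.all(), T=1.2, tau=0.5)

run(10000)
```

Scaling out is then a matter of the launch command, along the lines of `mpirun -n 64 hoomd lj.py` with one GPU assigned per MPI rank; particle migration and ghost-particle exchange between domains happen inside the library, which is the machinery the paper benchmarks at up to 3,375 GPUs.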