Multi-GPU Immersed Boundary Method Hemodynamics Simulations.

Author: Ames J; Department of Computer Science, Duke University, Durham, NC, USA., Puleri DF; Department of Biomedical Engineering, Duke University, Durham, NC, USA., Balogh P; Department of Biomedical Engineering, Duke University, Durham, NC, USA., Gounley J; Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA., Draeger EW; Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA, USA., Randles A; Department of Biomedical Engineering, Duke University, Durham, NC, USA.
Language: English
Source: Journal of computational science [J Comput Sci] 2020 Jul; Vol. 44. Date of Electronic Publication: 2020 Jun 14.
DOI: 10.1016/j.jocs.2020.101153
Abstract: Large-scale simulations of blood flow that resolve the 3D deformation of each constituent cell are increasingly popular, owing to algorithmic developments in conjunction with advances in compute capability. Among the different approaches for modeling cell-resolved hemodynamics, fluid-structure interaction (FSI) algorithms based on the immersed boundary method are frequently employed to couple separate solvers for the background fluid and the cells within one framework. GPUs can accelerate these simulations; however, both current pre-exascale and future exascale CPU-GPU heterogeneous systems face communication challenges critical to performance and scalability. We describe, to our knowledge, the largest distributed GPU-accelerated FSI simulations of high-hematocrit cell-resolved flows, with over 17 million red blood cells. We compare scaling on a fat-node system with six GPUs per node and on a system with a single GPU per node. By comparing the CPU- and GPU-based implementations, we identify the costs of data movement in multiscale, multi-grid FSI simulations on heterogeneous systems and show data movement to be the greatest performance bottleneck on the GPU.
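For readers unfamiliar with the coupling step the abstract refers to, the following is a minimal illustrative sketch (not the authors' implementation) of the immersed boundary method's core operations: spreading Lagrangian membrane forces onto the Eulerian fluid grid and interpolating grid velocities back to the membrane markers via Peskin's 4-point discrete delta function. The grid size, spacing, marker positions, and periodic boundary handling are assumptions chosen for brevity.

```python
"""Sketch of immersed boundary (IB) fluid-structure coupling in NumPy.
Illustrative only; grid, spacing, and marker data are assumed values."""
import math
import numpy as np


def delta_4pt(r):
    """Peskin's 4-point discrete delta kernel along one coordinate."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0


def ib_weights(X, h):
    """Yield (grid index, weight) pairs over the 4x4x4 support of marker X."""
    base = np.floor(X / h).astype(int)
    for i in range(base[0] - 1, base[0] + 3):
        for j in range(base[1] - 1, base[1] + 3):
            for k in range(base[2] - 1, base[2] + 3):
                w = (delta_4pt((X[0] - i * h) / h) *
                     delta_4pt((X[1] - j * h) / h) *
                     delta_4pt((X[2] - k * h) / h))
                yield (i, j, k), w


def spread_forces(markers, forces, shape, h):
    """Spread marker forces onto the fluid grid (periodic boundaries)."""
    f_grid = np.zeros((*shape, 3))
    for X, F in zip(markers, forces):
        for (i, j, k), w in ib_weights(X, h):
            f_grid[i % shape[0], j % shape[1], k % shape[2]] += w * F / h ** 3
    return f_grid


def interpolate_velocity(markers, u_grid, h):
    """Interpolate fluid velocities back to the markers (periodic boundaries)."""
    shape = u_grid.shape[:3]
    U = np.zeros((len(markers), 3))
    for m, X in enumerate(markers):
        for (i, j, k), w in ib_weights(X, h):
            U[m] += w * u_grid[i % shape[0], j % shape[1], k % shape[2]]
    return U


if __name__ == "__main__":
    h, shape = 1.0, (16, 16, 16)            # illustrative fluid grid
    markers = np.array([[8.3, 7.9, 8.1]])   # one membrane vertex
    forces = np.array([[0.0, 0.0, 1.0]])    # its elastic force
    f = spread_forces(markers, forces, shape, h)
    u = np.random.rand(*shape, 3)           # stand-in velocity field
    print(f.sum(axis=(0, 1, 2)), interpolate_velocity(markers, u, h))
```

In a distributed GPU setting of the kind the paper studies, each spread and interpolation touches fluid grid points that may live on other devices, which is why the host-device and inter-node data movement highlighted in the abstract dominates the cost.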
Database: MEDLINE