Showing 1 - 4 of 4 for search: '"Dmitry Durnov"'
Published in:
2018 IEEE 4th International Conference on Computer and Communications (ICCC).
In this paper, we examine the use of the Process Management Interface (PMI) during MPI_Init, specifically how PMI is used to exchange address information between peer processes in an MPI job. As node and core counts continue to increase in HPC systems…
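The PMI exchange this abstract describes happens inside MPI_Init, invisible to the application. For illustration only, here is a minimal sketch of the classic PMI-1 put/commit/barrier/get pattern an MPI library might use to publish and fetch peer addresses; the key naming (addr-<rank>), buffer sizes, and payload are assumptions, not taken from the paper:

```c
/* Sketch of a PMI-1 address exchange, as an MPI library might do it
 * inside MPI_Init. Compile against a PMI-1 implementation (e.g. the
 * one shipped with MPICH or Slurm). Key names and buffer sizes are
 * illustrative, not taken from the paper. */
#include <stdio.h>
#include <pmi.h>

int main(void)
{
    int spawned, rank, size;
    char kvsname[256], key[64], value[256];

    PMI_Init(&spawned);
    PMI_Get_rank(&rank);
    PMI_Get_size(&size);
    PMI_KVS_Get_my_name(kvsname, sizeof(kvsname));

    /* Publish this process's (placeholder) address under a per-rank key. */
    snprintf(key, sizeof(key), "addr-%d", rank);
    snprintf(value, sizeof(value), "fabric-endpoint-of-rank-%d", rank);
    PMI_KVS_Put(kvsname, key, value);
    PMI_KVS_Commit(kvsname);

    /* Everyone must commit before anyone reads. */
    PMI_Barrier();

    /* Fetch every peer's address; this O(P) lookup per process is the
     * startup cost that grows with node and core counts. */
    for (int peer = 0; peer < size; peer++) {
        snprintf(key, sizeof(key), "addr-%d", peer);
        PMI_KVS_Get(kvsname, key, value, sizeof(value));
    }

    PMI_Finalize();
    return 0;
}
```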
Author:
Marc Gamell Balmana, Rashid Kaleem, Alexander Sannikov, María Jesús Garzarán, Dmitry Durnov, Akhil Langer, Surabhi Jain
Published in:
SC
Collective operations are used in MPI programs to express common communication patterns, collective computations, or synchronization. In many collectives, such as MPI_Allreduce, the intra-node component of the collective lies on the critical path, as…
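For context on the intra-node component mentioned above, here is a minimal sketch of the standard hierarchical MPI_Allreduce decomposition (node-local reduce, inter-node allreduce among node leaders, node-local broadcast). This is the textbook pattern such work optimizes, not the paper's own shared-memory framework:

```c
/* Hierarchical allreduce sketch: reduce within each node, allreduce
 * across node leaders, then broadcast back within the node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split into per-node communicators by shared-memory locality. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* One leader per node joins the inter-node communicator. */
    MPI_Comm leader_comm;
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   rank, &leader_comm);

    double local = (double)rank, node_sum = 0.0, global_sum = 0.0;

    /* Intra-node step: the component on the critical path. */
    MPI_Reduce(&local, &node_sum, 1, MPI_DOUBLE, MPI_SUM, 0, node_comm);

    if (node_rank == 0) {
        MPI_Allreduce(&node_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                      leader_comm);
        MPI_Comm_free(&leader_comm);
    }
    MPI_Bcast(&global_sum, 1, MPI_DOUBLE, 0, node_comm);

    if (rank == 0)
        printf("allreduce sum = %f\n", global_sum);

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```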
Author:
Tom Elken, Taisuke Boku, Pradeep Sivakumar, Larry Meadows, Alexander Sannikov, Dmitry Durnov, Toshihiro Hanawa, Masashi Horikoshi, Edward Mascarenhas, James P. Erwin
Published in:
HPC Asia Workshops
This paper provides results on scaling Barrier and Allreduce to 8192 nodes on a cluster of Intel® Xeon Phi™ processors installed at the University of Tokyo and the University of Tsukuba. We will describe the effects of OS and platform noise on the…
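To make the noise sensitivity concrete, here is a minimal timing loop of the kind commonly used to measure collective latency at scale; the iteration counts and the reported statistic are arbitrary choices, not the paper's methodology:

```c
/* Minimal MPI_Allreduce latency microbenchmark. Reporting the slowest
 * rank exposes OS/platform noise: a delay on any core delays the
 * whole collective. Parameters are illustrative. */
#include <mpi.h>
#include <stdio.h>

#define WARMUP 100
#define ITERS  1000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double in = 1.0, out;
    for (int i = 0; i < WARMUP; i++)
        MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++)
        MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double local = (MPI_Wtime() - t0) / ITERS;

    double worst;
    MPI_Reduce(&local, &worst, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("max avg MPI_Allreduce latency: %.3f us\n", worst * 1e6);

    MPI_Finalize();
    return 0;
}
```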
Author:
Alexander Sannikov, Sangmin Seo, Yanfei Guo, Ken Raffenetti, Paul Fischer, Tomislav Janjusic, Thilina Rathnayake, Michael Alan Blocksome, Jithin Jose, Matthew Otten, Hajime Fujita, Sergey Oblomov, Sayantan Sur, Masamichi Takagi, Pavan Balaji, Masayuki Hatanaka, Misun Min, Abdelhalim Amer, Paul Coffman, Wesley Bland, Akhil Langer, Michael Chuvelev, Dmitry Durnov, Charles J. Archer, Min Si, Lena Oden, Gengbin Zheng, Xin Zhao
Published in:
SC
This paper provides an in-depth analysis of the software overheads in the MPI performance-critical path and exposes mandatory performance overheads that are unavoidable based on the MPI-3.1 specification. We first present a highly optimized implementation…
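The performance-critical path here is the software instruction sequence between an application call such as MPI_Send and the network. A small-message ping-pong, as in the hedged sketch below, is the usual way to isolate that per-call overhead; the message size and iteration count are illustrative:

```c
/* Two-rank ping-pong: at small message sizes the measured one-way
 * latency is dominated by the MPI software critical path rather than
 * the wire. Sizes and iterations are illustrative. */
#include <mpi.h>
#include <stdio.h>

#define ITERS 10000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2)
        MPI_Abort(MPI_COMM_WORLD, 1);

    char buf[8] = {0};              /* small message: overhead-bound */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("one-way latency: %.3f us\n",
               (MPI_Wtime() - t0) / (2.0 * ITERS) * 1e6);

    MPI_Finalize();
    return 0;
}
```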