Efficient Communication/Computation Overlap with MPI+OpenMP Runtimes Collaboration
Author: | Marc Pérache, Guillaume Papauré, Marc Sergent, Mario Dagrada, Patrick Carribault, Julien Jaeger |
---|---|
Year: | 2018 |
Subject: |
Distributed computing, information systems, asynchronous communication, computation, MPI, OpenMP, scalability, thread (computing), parallel computing, implementation |
Source: | Euro-Par 2018: Parallel Processing, ISBN 9783319969824, Euro-Par |
DOI: | 10.1007/978-3-319-96983-1_40 |
Description: | Overlapping network communications with computations is a major requirement for ensuring the scalability of HPC applications on future exascale machines. To this end, the de-facto MPI standard provides non-blocking routines for asynchronous communication progress. In various implementations, a dedicated progress thread (PT) is deployed on the host CPU to actually achieve this overlap. However, current PT solutions struggle to balance efficient detection of network events against minimal impact on the application's computations. In this paper we propose a solution, inspired by the PT approach, that exploits the idle time of compute threads to make MPI communications progress in the background. We implement our idea in the context of MPI+OpenMP collaboration using the OpenMP Tools (OMPT) interface, which will be part of the OpenMP 5.0 standard. Our solution shows an overall performance gain on unbalanced workloads such as the AMG CORAL benchmark. |
Database: | OpenAIRE |
External link: |