Controlling the Memory Subscription of Distributed Applications with a Task-Based Runtime System

Author: Olivier Aumage, Samuel Thibault, Marc Sergent, David Goudin
Contributors: STatic Optimizations, Runtime Methods (STORM), Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université de Bordeaux (UB), École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB), Centre National de la Recherche Scientifique (CNRS), Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria), Centre d'études scientifiques et techniques d'Aquitaine (CESTA), Direction des Applications Militaires (DAM), Commissariat à l'énergie atomique et aux énergies alternatives (CEA), PlaFRIM, ANR-13-MONU-0007 SOLHAR: Solveurs pour architectures hétérogènes utilisant des supports d'exécution (2013), Sergent, Marc
Language: English
Year of publication: 2016
Subjects:
Flat memory model
Computer science
Distributed computing
010103 numerical & computational mathematics
02 engineering and technology
Overlay
Static memory allocation
[INFO] Computer Science [cs]
01 natural sciences
Runtime system
distributed computing
Memory ordering
0202 electrical engineering, electronic engineering, information engineering
Interleaved memory
[INFO.INFO-DC] Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC]
0101 mathematics
Distributed shared memory
compressed linear algebra
Supercomputer
Memory map
memory control
Extended memory
Memory management
Shared memory
task-based run-time systems
Memory footprint
020201 artificial intelligence & image processing
Distributed memory
Source: 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
21st International Workshop on High-Level Parallel Programming Models and Supportive Environments
21st International Workshop on High-Level Parallel Programming Models and Supportive Environments, May 2016, Chicago, United States. ⟨10.1109/IPDPSW.2016.105⟩
SIAM Conference on Parallel Processing for Scientific Computing (SIAM PP 2016)
SIAM Conference on Parallel Processing for Scientific Computing (SIAM PP 2016), Apr 2016, Paris, France. pp.318-327
IPDPS Workshops
DOI: 10.1109/IPDPSW.2016.105
Description: International audience; The ever-increasing architectural complexity of supercomputers emphasizes the need for high-level parallel programming paradigms. Among such paradigms, task-based programming abstracts away much of the architecture complexity while efficiently meeting the performance challenge, even at large scale. Task-based applications are typically executed by dynamic run-time systems, which schedule the use of computation resources and memory allocations. While computation scheduling has been well studied, the dynamic management of memory resource subscription inside such run-times has so far received little attention. This paper studies the cooperation between a task-based distributed application code and a run-time system engine to control memory subscription levels throughout the execution. We show that the task paradigm makes it possible to control the memory footprint of the application by throttling the task submission flow rate, striking a compromise between the performance benefits of anticipative task submission and the resulting memory consumption. We illustrate the benefits of our contribution on a compressed dense linear algebra distributed application.
Database: OpenAIRE
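
Note: the description above centers on one mechanism, throttling the task submission flow so that the memory subscribed by submitted-but-not-yet-completed tasks stays under a budget. The following is a minimal, self-contained C sketch of that idea, assuming a simple byte counter guarded by a mutex and condition variable; it is an illustration under stated assumptions, not the interface of the runtime system used in the paper, and all identifiers (mem_throttle_t, throttle_acquire, throttle_release, run_task) are hypothetical.

/*
 * Illustrative sketch (not the paper's implementation): block task
 * submission whenever the memory subscribed by in-flight tasks would
 * exceed a fixed budget, and resume when completed tasks release memory.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  freed;      /* signaled when memory is released   */
    size_t          limit;      /* memory subscription budget (bytes) */
    size_t          subscribed; /* memory currently subscribed        */
} mem_throttle_t;

static void throttle_init(mem_throttle_t *t, size_t limit)
{
    pthread_mutex_init(&t->lock, NULL);
    pthread_cond_init(&t->freed, NULL);
    t->limit = limit;
    t->subscribed = 0;
}

/* Called by the submitting thread before a task is submitted:
 * blocks until the task's memory footprint fits under the budget. */
static void throttle_acquire(mem_throttle_t *t, size_t bytes)
{
    pthread_mutex_lock(&t->lock);
    while (t->subscribed + bytes > t->limit)
        pthread_cond_wait(&t->freed, &t->lock);
    t->subscribed += bytes;
    pthread_mutex_unlock(&t->lock);
}

/* Called when a task completes and its buffers are freed. */
static void throttle_release(mem_throttle_t *t, size_t bytes)
{
    pthread_mutex_lock(&t->lock);
    t->subscribed -= bytes;
    pthread_cond_broadcast(&t->freed);
    pthread_mutex_unlock(&t->lock);
}

/* Toy "task": sleeps briefly, standing in for a kernel execution. */
typedef struct {
    mem_throttle_t *throttle;
    size_t          bytes;
    int             id;
} task_t;

static void *run_task(void *arg)
{
    task_t *task = arg;
    usleep(10000);                       /* pretend to compute          */
    throttle_release(task->throttle, task->bytes);
    printf("task %d done, released %zu bytes\n", task->id, task->bytes);
    free(task);
    return NULL;
}

int main(void)
{
    enum { NTASKS = 32 };
    mem_throttle_t throttle;
    pthread_t workers[NTASKS];

    throttle_init(&throttle, 4 << 20);   /* 4 MiB subscription budget   */

    /* Submission loop: stalls whenever anticipative submission would
     * push the memory subscription past the budget. */
    for (int i = 0; i < NTASKS; i++) {
        task_t *task = malloc(sizeof *task);
        task->throttle = &throttle;
        task->bytes = 1 << 20;           /* each task pins 1 MiB        */
        task->id = i;
        throttle_acquire(&throttle, task->bytes);
        pthread_create(&workers[i], NULL, run_task, task);
    }
    for (int i = 0; i < NTASKS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}

In a real task-based run-time the release would be triggered by task-completion callbacks and the budget would account for per-node memory, but the blocking submission loop captures the compromise between anticipative task submission and memory consumption described in the abstract.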