CIMAR, NIMAR, and LMMA: Novel algorithms for thread and memory migrations in user space on NUMA systems using hardware counters
Author: Juan Ángel Lorenzo, Francisco F. Rivera, Oscar G. Lorenzo, José C. Cabaleiro, Tomás F. Pena, Ruben Laso
Contributors: Universidade de Santiago de Compostela. Centro de Investigación en Tecnoloxías da Información; Universidade de Santiago de Compostela. Departamento de Electrónica e Computación
Year: 2022
Subject: Scheduling; Computer Networks and Communications; Computer science; Memory migration; Context (computing); Process (computing); Software engineering; Linux kernel; Thread (computing); Task (computing); NUMA; Thread migration; Hardware and Architecture; Information systems; Electrical engineering, electronic engineering, information engineering; Benchmark (computing); User space; Hardware counters; Algorithm; Queue; Software; Computer hardware
Source: Minerva. Repositorio Institucional de la Universidad de Santiago de Compostela
ISSN: 0167-739X
Description: This paper introduces two novel algorithms for thread migration, named CIMAR (Core-aware Interchange and Migration Algorithm with performance Record, IMAR) and NIMAR (Node-aware IMAR), and a new algorithm for the migration of memory pages, LMMA (Latency-based Memory pages Migration Algorithm), in the context of Non-Uniform Memory Access (NUMA) systems. Such systems have complex memory hierarchies that make extracting the best possible performance a challenging problem, in which thread and memory mapping play a critical role. The presented algorithms gather and process the information provided by hardware counters to decide which migrations to perform, trying to find the optimal mapping. They have been implemented as a user-space tool that seeks to improve system performance, particularly in, but not restricted to, scenarios where multiple programs with different characteristics are running. This approach has the advantage of not requiring any modification to the target programs or the Linux kernel, while keeping a low overhead. Two benchmark suites have been used to validate the algorithms: the NAS parallel benchmarks, mainly devoted to computational routines, and the LevelDB database benchmark, focused on read–write operations. These benchmarks illustrate the influence of the proposal on these two important types of codes. Note that those codes are state-of-the-art implementations of the routines, so little improvement could be expected a priori. Experiments were designed and conducted to emulate three different scenarios: a single program running in the system with full resources, an interactive server where multiple programs run concurrently with varying availability of resources, and a queue of tasks where the granted resources are limited. The proposed algorithms produce significant benefits, especially in systems with higher latency penalties for remote accesses. When more than one benchmark is executed simultaneously, performance improvements are obtained, reducing execution times by up to 60%. In this kind of situation the behaviour of the system is more critical, and the NUMA topology plays a more relevant role. Even in the worst case, when isolated benchmarks are executed using the whole system (that is, just one task at a time), performance is not degraded.
Funding: This research work has received financial support from the Ministerio de Ciencia e Innovación, Spain, within the project PID2019-104834GB-I00. It was also funded by the Consellería de Cultura, Educación e Ordenación Universitaria of Xunta de Galicia (accr. 2019–2022, ED431G 2019/04, and reference competitive group 2019–2021, ED431C 2018/19).
Database: OpenAIRE
External link: