SEDAR: Soft Error Detection and Automatic Recovery in High Performance Computing Systems
Author: | Diego Montezanti |
Language: | English |
Year of publication: | 2020 |
Subject: |
transient faults; soft errors detection; process replication; automatic recovery; silent data corruption; HPC applications; multicore clusters; fault injection; system-level checkpoint; user-level checkpoint; Computer engineering. Computer hardware (TK7885-7895); Electronic computers. Computer science (QA75.5-76.95) |
Source: | Journal of Computer Science and Technology, Vol 20, Iss 2, Pp e14-e14 (2020) |
Document type: | article |
ISSN: | 1666-6046; 1666-6038 |
DOI: | 10.24215/16666038.20.e14 |
Description: |
Reliability and fault tolerance have become increasingly relevant in the field of HPC, due to the growing probability that faults of different kinds will occur in these systems. This is fundamentally due to the increasing complexity of processors in the search for higher performance, which leads to a rise in the scale of integration and in the number of components that work near their technological limits, making them increasingly prone to failures. Another contributing factor is the growth in the size of parallel systems, in terms of the number of cores and processing nodes, to obtain greater computational power. As applications demand longer uninterrupted computation times, the impact of faults grows, owing to the cost of relaunching an execution that was aborted by a fault or that concluded with erroneous results. Consequently, these applications need to run on highly available and reliable systems, which requires strategies capable of providing detection, protection, and recovery against faults.

In the coming years, Exascale systems are expected to arrive: supercomputers with millions of processing cores, capable of performing on the order of 10^18 operations per second. This is a great window of opportunity for HPC applications, but it also increases the risk that they will not complete their executions. Recent studies show that, as systems include more processors, the Mean Time Between Errors decreases, resulting in higher failure rates and a greater risk of corrupted results; large parallel applications are expected to face errors every few minutes, requiring external help to progress efficiently. Silent Data Corruptions are the most dangerous errors that can occur, since they can produce incorrect results in programs that appear to execute correctly. Scientific applications and large-scale simulations are the most affected, making silent error handling the main challenge on the road to resilience in HPC. In message-passing applications, a silent error affecting a single task can produce a pattern of corruption that spreads to all communicating processes; in the worst case, the erroneous final results cannot be detected at the end of the execution and will be taken as correct.

Since scientific applications have execution times on the order of hours or even days, it is essential to find strategies that allow applications to reach correct solutions in a bounded time, despite the underlying failures. Such strategies also prevent energy consumption from skyrocketing, since without them the executions would have to be relaunched from the beginning. However, the most popular parallel programming models used in supercomputers lack support for fault tolerance. |
Database: | Directory of Open Access Journals |
External link: |
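To make the detection idea concrete: the subject keywords above mention process replication, and a common pattern behind replication-based detection is to run a computation twice and compare the replica outputs before results are communicated, so a silent data corruption is caught locally instead of spreading through message passing as the abstract describes. The following C sketch is a minimal, hypothetical illustration of that pattern only; it is not SEDAR's actual mechanism, and the function names and simulated workload are invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for real application work; deterministic, so two
 * fault-free executions produce bitwise-identical results. */
static void compute(double *out, const double *in, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * in[i] + 1.0;
}

int main(void) {
    enum { N = 1024 };
    double in[N], primary[N], replica[N];

    for (size_t i = 0; i < N; i++)
        in[i] = (double)i;

    compute(primary, in, N);  /* primary execution */
    compute(replica, in, N);  /* redundant execution */

    /* Compare replicas before the results would be sent to other
     * processes. A mismatch indicates a likely transient fault;
     * a real system would roll back to the last checkpoint and
     * re-execute rather than simply aborting. */
    if (memcmp(primary, replica, sizeof primary) != 0) {
        fprintf(stderr, "silent data corruption detected\n");
        return EXIT_FAILURE;
    }

    puts("replicas agree; results are safe to communicate");
    return EXIT_SUCCESS;
}
```

The bitwise comparison assumes deterministic replicas; in a real message-passing setting, comparisons would typically be placed at communication points so that corruption is contained before it propagates, and detection would be combined with system-level or user-level checkpoints (as the keywords suggest) to recover without restarting from the beginning.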