PACEMAKER: Avoiding HeART attacks in storage clusters with disk-adaptive redundancy
Author: | Kadekodi, Saurabh, Maturana, Francisco, Subramanya, Suhas Jayaram, Yang, Juncheng, Rashmi, K. V., Ganger, Gregory R. |
Year of publication: | 2021 |
Source: | 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2020, pp. 369-385 |
Document type: | Working Paper |
Description: | Data redundancy provides resilience in large-scale storage clusters, but imposes significant cost overhead. Substantial space-savings can be realized by tuning redundancy schemes to observed disk failure rates. However, prior design proposals for such tuning are unusable in real-world clusters, because the IO load of transitions between schemes overwhelms the storage infrastructure (termed transition overload). This paper analyzes traces for millions of disks from production systems at Google, NetApp, and Backblaze to expose and understand transition overload as a roadblock to disk-adaptive redundancy: transition IO under existing approaches can consume 100% of cluster IO continuously for several weeks. Building on the insights drawn, we present PACEMAKER, a low-overhead disk-adaptive redundancy orchestrator. PACEMAKER mitigates transition overload by (1) proactively organizing data layouts to make future transitions efficient, and (2) initiating transitions proactively in a manner that avoids urgency while not compromising on space-savings. Evaluation of PACEMAKER with traces from four large (110K-450K disks) production clusters shows that transition IO requirements decrease to at most 5% of cluster IO bandwidth (0.2-0.4% on average). PACEMAKER achieves this while providing overall space-savings of 14-20% and never leaving data under-protected. We also describe and experiment with an integration of PACEMAKER into HDFS. Comment: Published in USENIX Symposium on Operating Systems Design and Implementation (OSDI) 2020 |
Database: | arXiv |
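To make the transition-overload idea in the abstract concrete, below is a minimal illustrative sketch (not PACEMAKER's actual design) of a scheduler that picks a redundancy scheme per disk group from its observed failure rate and rate-limits scheme-transition IO to a fixed fraction of cluster bandwidth. All names, thresholds, and the cost model (`TransitionScheduler`, `io_budget_fraction`, re-encoding every stored byte once) are hypothetical assumptions for illustration only.

```python
"""Toy disk-adaptive redundancy scheduler with a transition-IO budget.

Illustrative sketch only; thresholds and policies are made up and are
NOT the scheme-selection or transition logic used by PACEMAKER.
"""
from dataclasses import dataclass, field
from collections import deque


@dataclass
class DiskGroup:
    """Disks with a similar failure profile (e.g., same make/model and age)."""
    name: str
    observed_afr: float          # annualized failure rate observed so far
    bytes_stored: int            # user data stored on this group
    scheme: tuple = (6, 3)       # current (data, parity) erasure-code scheme


@dataclass
class TransitionScheduler:
    """Schedules redundancy transitions under a cluster-wide IO budget."""
    cluster_io_bytes_per_day: int
    io_budget_fraction: float = 0.05      # cap transition IO at 5% of cluster IO
    pending: deque = field(default_factory=deque)

    def choose_scheme(self, group: DiskGroup) -> tuple:
        # Hypothetical policy: lower observed AFR permits a wider, cheaper scheme.
        if group.observed_afr < 0.02:
            return (24, 3)
        if group.observed_afr < 0.05:
            return (12, 3)
        return (6, 3)

    def enqueue_if_needed(self, group: DiskGroup) -> None:
        target = self.choose_scheme(group)
        if target != group.scheme:
            # Crude cost model: a transition re-encodes all stored bytes once.
            self.pending.append((group, target, group.bytes_stored))

    def run_day(self) -> None:
        """Spend at most the daily IO budget on pending transitions."""
        budget = int(self.cluster_io_bytes_per_day * self.io_budget_fraction)
        while budget > 0 and self.pending:
            group, target, remaining = self.pending.popleft()
            spent = min(budget, remaining)
            budget -= spent
            remaining -= spent
            if remaining > 0:
                # Transition not finished; resume on a later day.
                self.pending.appendleft((group, target, remaining))
            else:
                group.scheme = target


if __name__ == "__main__":
    sched = TransitionScheduler(cluster_io_bytes_per_day=10 * 2**40)  # 10 TiB/day
    young = DiskGroup("new-disks", observed_afr=0.01, bytes_stored=3 * 2**40)
    sched.enqueue_if_needed(young)
    for _ in range(10):
        sched.run_day()
    print(young.scheme)  # the group has transitioned to the wider scheme
```

The sketch only captures the budgeting aspect: because the daily budget is a small fraction of cluster IO, a large transition is spread across many days instead of saturating the cluster, which is the failure mode the abstract calls transition overload.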