Author: |
Guney, I.A., Yildiz, A., Bayindir, I.U., Serdaroglu, K.C., Bayik, U., Kucuk, G. |
Contributors: |
Guney, I.A., Yildiz, A., Bayindir, I.U., Serdaroglu, K.C., Bayik, U., Kucuk, G., Yeditepe Üniversitesi |
Language: |
English |
Year of publication: |
2015 |
Subject: |
|
Description: |
In multi- and many-core processors, a shared Last Level Cache (LLC) is utilized to alleviate the performance problems resulting from long-latency memory instructions. However, an unmanaged LLC may become quite useless when the running threads have conflicting interests. At one extreme, a thread can benefit from every portion of the cache, whereas at the other, a thread may simply thrash the whole LLC. Recently, a variety of way-partitioning mechanisms have been introduced to improve cache performance. Today, almost all of these studies use the Utility-based Cache Partitioning (UCP) algorithm as their allocation policy. However, the UCP look-ahead algorithm, although it provides a better utility measure than its greedy counterpart, requires very complex hardware circuitry and dissipates a considerable amount of energy at the end of each decision period. In this study, we propose an offline supervised machine learning algorithm that replaces the UCP look-ahead circuitry with circuitry of almost negligible hardware and energy cost. Depending on the cache and processor configuration, our thorough analysis and simulation results show that the proposed mechanism reduces the overall transistor count by up to 5% and the overall processor energy by up to 5% without introducing any performance penalty. © Springer International Publishing Switzerland 2015. 30th International Conference on High Performance Computing, ISC 2015, 12 July 2015 through 16 July 2015. 158159 |
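For context, the UCP look-ahead allocation step that the abstract refers to can be summarized with a short sketch. The Python fragment below is an illustrative reconstruction of the standard look-ahead policy (repeatedly giving LLC ways to the core with the highest marginal miss reduction per way), not code from the paper; the miss_curves input, which stands in for per-core utility-monitor counters, and all identifiers are assumptions made for illustration.

def lookahead_partition(miss_curves, total_ways):
    # miss_curves[c][w]: misses core c would incur with w ways (assumed input).
    n_cores = len(miss_curves)
    alloc = [1] * n_cores                  # every core keeps at least one way
    remaining = total_ways - n_cores

    def max_marginal_utility(c):
        # Best miss reduction per extra way over any feasible block size.
        best_mu, best_blk = 0.0, 0
        for blk in range(1, remaining + 1):
            if alloc[c] + blk >= len(miss_curves[c]):
                break
            gain = miss_curves[c][alloc[c]] - miss_curves[c][alloc[c] + blk]
            mu = gain / blk
            if mu > best_mu:
                best_mu, best_blk = mu, blk
        return best_mu, best_blk

    while remaining > 0:
        best = None                        # (marginal utility, core, block size)
        for c in range(n_cores):
            mu, blk = max_marginal_utility(c)
            if blk and (best is None or mu > best[0]):
                best = (mu, c, blk)
        if best is None:                   # no core benefits from more ways
            break
        _, winner, blk = best
        alloc[winner] += blk
        remaining -= blk
    return alloc

# Example: two cores sharing an 8-way LLC; core 0 has high utility, core 1 barely benefits.
curves = [
    [100, 60, 40, 30, 25, 22, 20, 19, 18],   # misses vs. allocated ways for core 0
    [100, 98, 97, 96, 96, 96, 96, 96, 96],   # a thrashing-style core 1
]
print(lookahead_partition(curves, 8))        # -> [7, 1] for these made-up curves

The abstract's argument is that evaluating such marginal utilities in hardware at the end of every decision period is costly; the proposed offline-trained predictor replaces that search with a far cheaper circuit.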
Database: |
OpenAIRE |
External link: |
|