Die-stacked DRAM caches for servers
Author: | Stavros Volos, Babak Falsafi, Djordje Jevdjic |
Year of publication: | 2013 |
Subject: | Cache coloring; Computer science; Parallel computing; Pipeline burst cache; Cache-oblivious algorithm; Cache pollution; DRAM cache; CAS latency; Die (integrated circuit); Cache invalidation; Die stacking; Scale-out processors; Scale-out workloads; Server; Block (telecommunications); Locality of reference; Locality; Memory bandwidth; Bandwidth efficiency; Cache algorithms; Snoopy cache; Bus sniffing; Smart Cache; Operating system; Page cache |
Source: | ISCA |
ISSN: | 0163-5964 |
DOI: | 10.1145/2508148.2485957 |
Description: | Recent research advocates using large die-stacked DRAM caches to break the memory bandwidth wall. Existing DRAM cache designs fall into one of two categories --- block-based and page-based. The former organize data in conventional blocks (e.g., 64B), ensuring low off-chip bandwidth utilization, but co-locate tags and data in the stacked DRAM, incurring high lookup latency. Furthermore, such designs suffer from low hit ratios due to poor temporal locality. In contrast, page-based caches, which manage data at larger granularity (e.g., 4KB pages), allow for reduced tag array overhead and fast lookup, and leverage high spatial locality at the cost of moving large amounts of data on and off the chip. This paper introduces Footprint Cache, an efficient die-stacked DRAM cache design for server processors. Footprint Cache allocates data at the granularity of pages, but identifies and fetches only those blocks within a page that will be touched during the page's residency in the cache --- i.e., the page's footprint. In doing so, Footprint Cache eliminates the excessive off-chip traffic associated with page-based designs, while preserving their high hit ratio, small tag array overhead, and low lookup latency. Cycle-accurate simulation results of a 16-core server with up to 512MB Footprint Cache indicate a 57% performance improvement over a baseline chip without a die-stacked cache. Compared to a state-of-the-art block-based design, our design improves performance by 13% while reducing dynamic energy of stacked DRAM by 24%. |
Database: | OpenAIRE |
External link: |