Showing 1 - 10 of 77
for search: '"Madhu Mutyam"'
Author:
Sumitha George, Vijaykrishnan Narayanan, Hariram Thirucherai Govindarajan, John Sampson, Jagadish B. Kotra, Madhu Mutyam, S. R. Swamy Saranam Chongala, Mahmut Kandemir
Published in:
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 39:3881-3892
So-called “tagless” caches have become common as a means to deal with the vast L4 last-level caches (LLCs) enabled by increasing device density, emerging memory technologies, and advanced integration capabilities (e.g., 3-D). Tagless schemes often …
Published in:
IEEE Transactions on Sustainable Computing. 5:468-484
In the recent past, devising algorithms for concurrent data structures has been driven by the need for scalability. Further, there is increased traction across the industry towards power-efficient concurrent data structure designs. In this context …
Published in:
New Generation Computing. 38:187-212
We propose algorithms to perform operations concurrently on treaps in a shared-memory multi-core environment. Concurrent treaps have the advantage of using nodes’ priorities to maintain the height of the treap. To achieve synchronization, concur…
Published in:
ACM Transactions on Design Automation of Electronic Systems. 24:1-23
The emerging die-stacking technology enables DRAM to be used as a cache to break the “memory wall” problem. Recent studies have proposed using DRAM as a victim cache in both CPU and GPU memory hierarchies to improve performance. DRAM caches are …
Published in:
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 37:2485-2496
Die-stacking technology enables the use of high-density DRAM as a cache. Major processor vendors have recently started using these stacked DRAM modules as the last-level cache of their products. These stacked DRAM modules provide high bandwidth with …
Published in:
ICCD
Multiple cores in a tiled multi-core processor are connected using a network-on-chip mechanism. All these cores share the last-level cache (LLC). For large LLCs, a non-uniform cache architecture design is generally considered, in which the LLC is s…
Published in:
ICS
Modern NVMe SSDs are widely deployed in diverse domains due to characteristics such as high performance, robustness, and energy efficiency. It has been observed that the impact of interference among concurrently running workloads on their overall re…
Published in:
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. :1-1
Published in:
ICCD
Formal modeling and analysis of a victim DRAM cache has already been discussed in the existing literature. These works use interacting state machines to model the states and transitions of a victim DRAM cache. In this work, we address model-code conformance …
Published in:
IEEE Computer Architecture Letters. 17:213-216
Hardware-based DRAM cache techniques for GPGPUs propose using GPU DRAM as a cache of the host (system) memory. However, these approaches do not exploit the opportunity of allocating store-before-load data (data that is written before being read by G…