22.1 A 1.1V 16GB 640GB/s HBM2E DRAM with a Data-Bus Window-Extension Technique and a Synergetic On-Die ECC Scheme
Author: Jin-Guk Kim, Kijun Lee, Junyong Noh, Seungseob Lee, Jung-Bae Lee, Seung-Duk Baek, Jungyu Lee, Sin-Ho Kim, Soo-Young Kim, Hye-In Choi, Ki-Chul Chun, Beomyong Kil, Sanguhn Cha, So-Young Kim, Jae-Won Park, Ryu Ye-Sin, Jun Jin Kong, Dong-Hak Shin, Byung-Kyu Ho, Hyuk-Jun Kwon, Baekmin Lim, Park Yong-Sik, Seouk-Kyu Choi, Chi-Sung Oh, Kyomin Sohn, Myungkyu Lee, Kwang-Il Park, Young-Yong Byun, Jae-Wook Lee, Bo-Tak Lim, Seong-Jin Cho, Jong-Pil Son, Yong-ki Kim, Nam Sung Kim, S.J. Ahn
Year of publication: 2020
Subject: Computer hardware & architecture; Embedded systems; DRAM; Random access memory; Bandwidth; Throughput; System bus
Source: ISSCC
DOI: 10.1109/isscc19947.2020.9063110
Description: Rapidly evolving artificial intelligence (AI) technology, such as deep learning, has been successfully deployed in various applications, such as image recognition, health care, and autonomous driving. Such rapid evolution and successful deployment of AI technology have been possible owing to the emergence of accelerators, such as GPUs and TPUs, that have a higher data throughput. This, in turn, requires an enhanced memory system with large capacity and high bandwidth [1]. HBM has been the most preferred high-bandwidth memory technology due to its high-speed and low-power characteristics, its 1024 IOs facilitated by 2.5D silicon interposer technology, and the large capacity realized by through-silicon via (TSV) stack technology [2]. Previous-generation HBM2 supports 8GB capacity with a stack of 8 DRAM dies (i.e., an 8-high stack) and 341GB/s (2.7Gb/s/pin) bandwidth [3]. The HBM industry trend has been a speed improvement of 15~20% every year, while capacity increases by 1.5-2x every two years. In this paper, we present a 16GB HBM2E with circuit and design techniques to increase its bandwidth up to 640GB/s (5Gb/s/pin), while providing stable bit-cell operation in the 2nd generation of a 10nm DRAM process, featuring (1) a data-bus window-extension technique to cope with reduced $t_{CCD}$, (2) a power delivery network (PDN) designed for stable operation at a high speed, (3) a synergetic on-die ECC scheme to reliably provide large capacity, and (4) an MBIST solution to efficiently test large-capacity memory at a high speed.
Database: OpenAIRE
External link:
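The headline bandwidth figures in the description follow directly from the IO width and per-pin data rate: peak bandwidth (GB/s) = IO width (bits) × per-pin rate (Gb/s) ÷ 8. A minimal sketch of that arithmetic (the function name is illustrative, not from the paper):

```python
def hbm_bandwidth_gbps(io_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: total bits per transfer cycle divided by 8 bits/byte."""
    return io_width_bits * pin_rate_gbps / 8

# HBM2E figure from the abstract: 1024 IOs at 5 Gb/s/pin = 640 GB/s
hbm2e = hbm_bandwidth_gbps(1024, 5.0)

# HBM2 figure: 1024 IOs at 2.7 Gb/s/pin gives 345.6 GB/s,
# close to the 341 GB/s quoted (the quoted number implies ~2.66 Gb/s/pin)
hbm2 = hbm_bandwidth_gbps(1024, 2.7)
```

The same relation explains why pushing the per-pin rate from 2.7 to 5 Gb/s nearly doubles bandwidth without widening the 1024-bit interface.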