Showing 1 - 9 of 9 for search: '"Pentecost, Lillian"'
Author:
Pentecost, Lillian, Hankin, Alexander, Donato, Marco, Hempstead, Mark, Wei, Gu-Yeon, Brooks, David
Repeated off-chip memory accesses to DRAM drive up operating power for data-intensive applications, and SRAM technology scaling and leakage power limit the efficiency of embedded memories. Future on-chip storage will need higher density and energy efficiency…
External link:
http://arxiv.org/abs/2109.01188
Author:
Sharifi, Mohammad Mehdi, Pentecost, Lillian, Rajaei, Ramin, Kazemi, Arman, Lou, Qiuwen, Wei, Gu-Yeon, Brooks, David, Ni, Kai, Hu, X. Sharon, Niemier, Michael, Donato, Marco
The memory wall bottleneck is a key challenge across many data-intensive applications. Multi-level FeFET-based embedded non-volatile memories are a promising solution for denser and more energy-efficient on-chip memory. However, reliable multi-level…
External link:
http://arxiv.org/abs/2106.11757
Author:
Dutta, Sourav, Ye, Huacheng, Khanna, Abhishek, Luo, Yuan-Chun, Pentecost, Lillian, Khandker, Akif A., Chakraborty, Wriddhi, Wei, Gu-Yeon, Brooks, David, Niemier, Michael, Hu, Xiaobo Sharon, Yu, Shimeng, Ni, Kai, Datta, Suman
Silicon ferroelectric field-effect transistors (FeFETs) with a low-k interfacial layer (IL) between the ferroelectric gate stack and the silicon channel suffer from high write voltage, limited write endurance, and large read-after-write latency due to early IL…
External link:
http://arxiv.org/abs/2105.11078
Author:
Tambe, Thierry, Hooper, Coleman, Pentecost, Lillian, Jia, Tianyu, Yang, En-Yu, Donato, Marco, Sanh, Victor, Whatmough, Paul N., Rush, Alexander M., Brooks, David, Wei, Gu-Yeon
Transformer-based language models such as BERT provide significant accuracy improvement for a multitude of natural language processing (NLP) tasks. However, their hefty computational and memory demands make them challenging to deploy to resource-constrained…
External link:
http://arxiv.org/abs/2011.14203
Author:
Mattson, Peter, Cheng, Christine, Coleman, Cody, Diamos, Greg, Micikevicius, Paulius, Patterson, David, Tang, Hanlin, Wei, Gu-Yeon, Bailis, Peter, Bittorf, Victor, Brooks, David, Chen, Dehao, Dutta, Debojyoti, Gupta, Udit, Hazelwood, Kim, Hock, Andrew, Huang, Xinyuan, Ike, Atsushi, Jia, Bill, Kang, Daniel, Kanter, David, Kumar, Naveen, Liao, Jeffery, Ma, Guokai, Narayanan, Deepak, Oguntebi, Tayo, Pekhimenko, Gennady, Pentecost, Lillian, Reddi, Vijay Janapa, Robie, Taylor, St. John, Tom, Tabaru, Tsuguchika, Wu, Carole-Jean, Xu, Lingjie, Yamazaki, Masafumi, Young, Cliff, Zaharia, Matei
Machine learning (ML) needs industry-standard performance benchmarks to support design and competitive evaluation of the many emerging software and hardware solutions for ML. But ML training presents three unique benchmarking challenges absent from other domains…
External link:
http://arxiv.org/abs/1910.01500
Author:
Gupta, Udit, Reagen, Brandon, Pentecost, Lillian, Donato, Marco, Tambe, Thierry, Rush, Alexander M., Wei, Gu-Yeon, Brooks, David
Recurrent neural networks (RNNs) are becoming the de facto solution for speech recognition. RNNs exploit long-term temporal relationships in data by applying repeated, learned transformations. Unlike fully-connected (FC) layers with single vector matrix…
External link:
http://arxiv.org/abs/1908.08976
Academic article
Published in:
DAC: Annual ACM/IEEE Design Automation Conference; 2018, Issue 55, p667-672, 6p
Author:
Reagen, Brandon, Gupta, Udit, Pentecost, Lillian, Whatmough, Paul, Lee, Sae Kyu, Mulholland, Niamh, Brooks, David, Wei, Gu-Yeon
Published in:
DAC: Annual ACM/IEEE Design Automation Conference; 2018, Issue 55, p151-156, 6p