Showing 1 - 10 of 189 for search: '"Joel Emer"'
Published in:
Scientific Reports, Vol 11, Iss 1, Pp 1-12 (2021)
Abstract: As deep neural network (DNN) models grow ever larger, they can achieve higher accuracy and solve more complex problems. This trend has been enabled by an increase in available compute power; however, efforts to continue to scale electronic p…
External link:
https://doaj.org/article/0d72cdbc9c1e4e1d9de42249a489e054
Published in:
2022 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS).
Published in:
Synthesis Lectures on Computer Architecture. 15:1-341
This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vis…
Author:
Joel Emer, Matthew Fojtik, C. Thomas Gray, Ben Keller, Stephen G. Tell, Priyanka Raina, Stephen W. Keckler, Alicia Klinefelter, William J. Dally, Brucek Khailany, Brian Zimmer, Jason Clemons, Rangharajan Venkatesan, Nan Jiang, Yanqing Zhang, Nathaniel Pinckney, Yakun Sophia Shao
Published in:
IEEE Journal of Solid-State Circuits. 55:920-932
Custom accelerators improve the energy efficiency, area efficiency, and performance of deep neural network (DNN) inference. This article presents a scalable DNN accelerator consisting of 36 chips connected in a mesh network on a multi-chip-module (MC…
Author:
Elba Garza, Gururaj Saileshwar, Udit Gupta, Tianyi Liu, Abdulrahman Mahmoud, Saugata Ghose, Joel Emer
Published in:
2021 ACM/IEEE Workshop on Computer Architecture Education (WCAE).
Published in:
ISCA
Irregular applications, such as graph analytics and sparse linear algebra, exhibit frequent indirect, data-dependent accesses to single or short sequences of elements that cause high main memory traffic and limit performance. Data compression is a pr…
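The indirect, data-dependent access pattern this abstract refers to can be illustrated with a minimal sketch (generic, not taken from the paper): each read depends on an index discovered at runtime, so the hardware cannot predict which memory locations come next.

```python
# Illustrative sketch of an indirect, data-dependent gather, as performed
# by graph analytics and sparse linear algebra. Each access x[i] depends
# on runtime data (idx), producing irregular main-memory traffic that is
# hard for caches and prefetchers to exploit.
x = list(range(100))          # e.g. a per-vertex property array
idx = [42, 7, 7, 99, 3]       # neighbor IDs discovered while traversing
y = [x[i] for i in idx]       # indirect reads: addresses unknown in advance
print(y)  # -> [42, 7, 7, 99, 3]
```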
Published in:
ASPLOS
Sparse matrix-sparse matrix multiplication (spMspM) is at the heart of a wide range of scientific and machine learning applications. spMspM is inefficient on general-purpose architectures, making accelerators attractive. However, prior spMspM acceler…
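For readers unfamiliar with spMspM, a common row-by-row formulation (Gustavson's algorithm) can be sketched as follows. This is a generic illustration of the kernel, not a description of any particular accelerator's dataflow; the dict-of-dicts storage is an assumption made for brevity.

```python
# Gustavson's row-by-row sparse-matrix * sparse-matrix product (spMspM).
# Matrices are stored sparsely as {row: {col: value}}, nonzeros only.
def spmspm(A, B):
    C = {}
    for i, a_row in A.items():
        acc = {}                      # sparse accumulator for row i of C
        for k, a_ik in a_row.items():             # each nonzero A[i][k]...
            for j, b_kj in B.get(k, {}).items():  # ...scales row k of B
                acc[j] = acc.get(j, 0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C

A = {0: {0: 2, 2: 1}, 1: {1: 3}}
B = {0: {1: 4}, 1: {0: 5}, 2: {1: 1}}
print(spmspm(A, B))  # -> {0: {1: 9}, 1: {0: 15}}
```

Only nonzero products are ever computed, which is what makes the kernel attractive to accelerate but also irregular: the inner loop's work depends entirely on where the nonzeros happen to fall.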
Author:
Dirk Englund, Joel Emer, Liane Bernstein, Marin Soljacic, Alexander Sludds, Vivienne Sze, Ryan Hamerly
Published in:
Physics and Simulation of Optoelectronic Devices XXIX.
Optical approaches to machine learning rely heavily on programmable linear photonic circuits. Since the performance and energy efficiency scale with size, a major challenge is overcoming scaling roadblocks to the photonic technology. Recently, we pro…
Published in:
ISPASS
This paper presents Sparseloop, the first infrastructure that implements an analytical design space exploration methodology for sparse tensor accelerators. Sparseloop comprehends a wide set of architecture specifications including various sparse opti…
Published in:
ISPASS
Due to the data- and computation-intensive nature of many popular data processing applications, e.g., deep neural networks (DNNs), a variety of accelerators have been proposed to improve performance and energy efficiency. As a result, computing system…