Showing 1 - 10 of 35 for search: '"Angshuman Parashar"'
Author:
Prasanth Chatarasi, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Tushar Krishna, Vivek Sarkar
Published in:
ACM Transactions on Architecture and Code Optimization. 19:1-26
A spatial accelerator’s efficiency depends heavily on both its mapper and cost models to generate optimized mappings for various operators of DNN models. However, existing cost models lack a formal boundary over their input programs (operators) for…
Map Space Exploration is the problem of finding optimized mappings of a Deep Neural Network (DNN) model on an accelerator. It is known to be extremely computationally expensive, and there has been active research looking at both heuristics and learning…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::50af22efb3a3e84ba2a8c4cfb814fc21
http://arxiv.org/abs/2210.03731
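For readers skimming these entries, the map space exploration problem described above can be made concrete with a minimal sketch: enumerate tile sizes for a single GEMM-like layer and score each candidate with a toy cost proxy. The layer shape, buffer size, and cost formula below are illustrative assumptions, not the papers' actual mappers or cost models (e.g., Timeloop or MAESTRO).

# Minimal sketch of map space exploration for one GEMM-like layer.
# The cost proxy is an assumed stand-in for a real analytical model;
# it only demonstrates the structure of exhaustive mapping search.
from itertools import product

M, N, K = 256, 256, 256          # layer dimensions (assumed)
BUFFER_WORDS = 16 * 1024         # on-chip buffer capacity (assumed)

def divisors(x):
    return [d for d in range(1, x + 1) if x % d == 0]

def cost(tm, tn, tk):
    """Toy proxy: off-chip traffic for a tiled matmul with full reuse inside a tile."""
    tiles = (M // tm) * (N // tn) * (K // tk)
    traffic_per_tile = tm * tk + tk * tn + tm * tn   # A, B, and C tiles
    return tiles * traffic_per_tile

best = None
for tm, tn, tk in product(divisors(M), divisors(N), divisors(K)):
    if tm * tk + tk * tn + tm * tn > BUFFER_WORDS:   # skip tilings that overflow the buffer
        continue
    c = cost(tm, tn, tk)
    if best is None or c < best[0]:
        best = (c, (tm, tn, tk))
print("best tiling (traffic, (tm, tn, tk)):", best)

Even this tiny example has hundreds of legal candidates, which hints at why real map space exploration is so expensive and why heuristic and learning-based mappers are an active research area.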
Published in:
Abstract Proceedings of the 2022 ACM SIGMETRICS/IFIP PERFORMANCE Joint International Conference on Measurement and Modeling of Computer Systems.
The high efficiency of domain-specific hardware accelerators for machine learning (ML) has come from specialization, with the trade-off of less configurability/flexibility. There is growing interest in developing flexible ML accelerators to make them…
Published in:
2022 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS).
Published in:
IEEE Computer Architecture Letters. 20:1-4
Dataflow and tile size choices, which we collectively refer to as mappings, dictate the efficiency (i.e., latency and energy) of DNN accelerators. Rapidly evolving DNN models are one of the major challenges for DNN accelerators, since the optimal mapping…
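As a concrete illustration of "mapping = dataflow + tile sizes," a minimal, assumed representation for a convolution layer might look like the sketch below; the field names and values are illustrative, not the notation used in the letter above.

# Assumed, minimal representation of a mapping for a conv layer:
# a dataflow label, a temporal loop order, and per-dimension tile sizes.
from dataclasses import dataclass

@dataclass
class Mapping:
    dataflow: str        # e.g., "weight-stationary", "output-stationary"
    loop_order: tuple    # loop nest order for the on-chip (temporal) loops
    tile_sizes: dict     # how much of each dimension is held on-chip

m = Mapping(
    dataflow="weight-stationary",
    loop_order=("K", "C", "R", "S", "P", "Q"),
    tile_sizes={"K": 16, "C": 32, "R": 3, "S": 3, "P": 7, "Q": 7},
)
print(m)

Because both the dataflow label and every tile size can change per layer, a mapping tuned for one DNN model is generally not optimal for the next, which is the challenge the entry above points at.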
Published in:
Synthesis Lectures on Computer Architecture. 15:1-164
MAESTRO: A Data-Centric Approach to Understand Reuse, Performance, and Hardware Cost of DNN Mappings
Author:
Michael Pellauer, Prasanth Chatarasi, Angshuman Parashar, Tushar Krishna, Vivek Sarkar, Hyoukjun Kwon
Published in:
IEEE Micro. 40:20-29
The efficiency of an accelerator depends on three factors: mapping, deep neural network (DNN) layers, and hardware, which together construct an extremely complicated design space of DNN accelerators. To demystify this complicated design space and guide the DNN accelerator…
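The three-factor view in the entry above (mapping, DNN layer, hardware) can be sketched as a toy cost function that takes exactly those three descriptions as input. The roofline-style formula and all numbers below are assumptions for illustration, not MAESTRO's actual analytical model.

# Toy roofline-style latency estimate from three inputs: layer shape,
# hardware spec, and mapping quality. All formulas are assumed.

def estimate_latency(layer, hw, mapping):
    macs = layer["M"] * layer["N"] * layer["K"]
    compute_cycles = macs / (hw["num_pes"] * mapping["pe_utilization"])
    dram_words = macs / mapping["reuse_factor"]            # crude reuse model
    memory_cycles = dram_words / hw["dram_words_per_cycle"]
    return max(compute_cycles, memory_cycles)              # compute/memory overlap

layer = {"M": 512, "N": 512, "K": 512}
hw = {"num_pes": 256, "dram_words_per_cycle": 8}
good_mapping = {"pe_utilization": 0.9, "reuse_factor": 64}
bad_mapping = {"pe_utilization": 0.4, "reuse_factor": 4}
print(estimate_latency(layer, hw, good_mapping), estimate_latency(layer, hw, bad_mapping))

Even this crude model shows how the same layer on the same hardware can differ several-fold in estimated latency depending only on the mapping.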
Author:
Geonhwa Jeong, Tushar Krishna, Gokcen Kestor, Prasanth Chatarasi, Sivasankaran Rajamanickam, Roberto Gioiosa, Angshuman Parashar, Po-An Tsai
Published in:
PACT
To meet the extreme compute demands for deep learning across commercial and scientific applications, dataflow accelerators are becoming increasingly popular. While these "domain-specific" accelerators are not fully programmable like CPUs and GPUs, they…
Author:
Christopher W. Fletcher, Po-An Tsai, Sitao Huang, Vikas Chandra, Kartik Hegde, Angshuman Parashar
Published in:
ASPLOS
Modern-day computing increasingly relies on specialization to satiate growing performance and efficiency requirements. A core challenge in designing such specialized hardware architectures is how to perform mapping space search, i.e., search for an optimal…
Published in:
ISPASS
This paper presents Sparseloop, the first infrastructure that implements an analytical design space exploration methodology for sparse tensor accelerators. Sparseloop comprehends a wide set of architecture specifications including various sparse optimizations…
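As a hedged sketch of what "analytical" modeling of sparsity can look like, the snippet below computes the expected effectual multiplies and compressed-operand traffic for a sparse dot product under a random-sparsity assumption. This is an assumed simplification for illustration only, not Sparseloop's methodology or feature set.

# Assumed analytical sketch: expected work and operand traffic for a sparse
# dot product when the accelerator skips ineffectual (zero-operand) MACs and
# stores operands in a compressed (value, index) format.

def sparse_dot_cost(n, density_a, density_b, value_bits=32, index_bits=32):
    expected_macs = n * density_a * density_b              # random-sparsity expectation
    words_a = n * density_a * (value_bits + index_bits) / value_bits
    words_b = n * density_b * (value_bits + index_bits) / value_bits
    return {"expected_macs": expected_macs, "operand_words": words_a + words_b}

print(sparse_dot_cost(n=4096, density_a=0.1, density_b=0.3))

Closed-form expressions like this, evaluated over many candidate designs, are what make analytical design space exploration fast compared to simulation.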