Showing 1 - 10 of 19 for search: '"Sohrabizadeh, Atefeh"'
Author:
Bai, Yunsheng, Sohrabizadeh, Atefeh, Ding, Zijian, Liang, Rongjian, Li, Weikai, Wang, Ding, Ren, Haoxing, Sun, Yizhou, Cong, Jason
Published in:
Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD (MLCAD '24), ACM, 2024, Article 2, 1-7
High-level synthesis (HLS) is an automated design process that transforms high-level code into hardware designs, enabling the rapid development of hardware accelerators. HLS relies on pragmas, which are directives inserted into the source code to guide …
External link:
http://arxiv.org/abs/2409.13138
There have been several recent works proposed to utilize model-based optimization methods to improve the productivity of using high-level synthesis (HLS) to design domain-specific architectures. They would replace the time-consuming performance estimation …
External link:
http://arxiv.org/abs/2408.13270
Author:
Qin, Zongyue, Bai, Yunsheng, Sohrabizadeh, Atefeh, Ding, Zijian, Hu, Ziniu, Sun, Yizhou, Cong, Jason
In recent years, domain-specific accelerators (DSAs) have gained popularity for applications such as deep learning and autonomous driving. To facilitate DSA designs, programmers use high-level synthesis (HLS) to compile a high-level description written …
External link:
http://arxiv.org/abs/2406.09606
Author:
Zhang, Shichang, Sohrabizadeh, Atefeh, Wan, Cheng, Huang, Zijie, Hu, Ziniu, Wang, Yewen, Lin, Yingyan, Cong, Jason, Sun, Yizhou
Graph neural networks (GNNs) are emerging for machine learning research on graph-structured data. GNNs achieve state-of-the-art performance on many tasks, but they face scalability challenges when it comes to real-world applications that have numerous …
External link:
http://arxiv.org/abs/2306.14052
Recent years have witnessed the growing popularity of domain-specific accelerators (DSAs), such as Google's TPUs, for accelerating various applications such as deep learning, search, and autonomous driving. To facilitate DSA designs, high-level synthesis …
External link:
http://arxiv.org/abs/2305.10838
In the past few years, domain-specific accelerators (DSAs), such as Google's Tensor Processing Units, have been shown to offer significant performance and energy efficiency over general-purpose CPUs. An important question is whether typical software developers …
External link:
http://arxiv.org/abs/2209.02951
High-level synthesis (HLS) has freed computer architects from developing their designs in a very low-level language and from specifying exactly how data should be transferred at the register level. With the help of HLS, hardware designers …
External link:
http://arxiv.org/abs/2111.08848
SPA-GCN: Efficient and Flexible GCN Accelerator with an Application for Graph Similarity Computation
While there have been many studies on hardware acceleration for deep learning on images, there has been a rather limited focus on accelerating deep learning applications involving graphs. The unique characteristics of graphs, such as the irregular memory …
External link:
http://arxiv.org/abs/2111.05936
Sparse-Matrix Dense-Matrix multiplication (SpMM) is the key operator for a wide range of applications, including scientific computing, graph processing, and deep learning. Architecting accelerators for SpMM is faced with three challenges: (1) …
External link:
http://arxiv.org/abs/2109.11081
Adopting FPGA as an accelerator in datacenters is becoming mainstream for customized computing, but the fact that FPGAs are hard to program creates a steep learning curve for software programmers. Even with the help of high-level synthesis (HLS), accelerator …
External link:
http://arxiv.org/abs/2009.14381