Showing 1 - 10 of 153 for search: '"Pande, Partha P."'
Author:
Pfromm, Lukas, Kanani, Alish, Sharma, Harsh, Solanki, Parth, Tervo, Eric, Park, Jaehyun, Doppa, Janardhan Rao, Pande, Partha Pratim, Ogras, Umit Y.
Rapidly evolving artificial intelligence and machine learning applications require ever-increasing computational capabilities, while monolithic 2D design technologies approach their limits. Heterogeneous integration of smaller chiplets using a 2.5D …
External link:
http://arxiv.org/abs/2410.09188
Transformers have revolutionized deep learning and generative modeling to enable unprecedented advancements in natural language processing tasks and beyond. However, designing hardware accelerators for executing transformer models is challenging due …
External link:
http://arxiv.org/abs/2408.03397
Author:
Sen, Ovishake, Ogbogu, Chukwufumnanya, Dehghanzadeh, Peyman, Doppa, Janardhan Rao, Bhunia, Swarup, Pande, Partha Pratim, Chatterjee, Baibhab
Traditional digital implementations of neural accelerators are limited by high power and area overheads, while analog and non-CMOS implementations suffer from noise, device mismatch, and reliability issues. This paper introduces a CMOS Look-Up Table …
External link:
http://arxiv.org/abs/2406.05282
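The entry above introduces a CMOS look-up-table (LUT) based neural accelerator. As a hedged illustration of the general idea behind LUT-based acceleration (not the paper's CMOS circuit), the Python sketch below replaces multiplications with lookups into a precomputed table of products over low-precision operand codes; the 4-bit precision, the uniform quantizer, and every name in the code are assumptions made for illustration.

import numpy as np

BITS = 4                                  # assumed operand precision
LEVELS = 2 ** BITS

# Precompute a table of all products of signed 4-bit operand codes.
codes = np.arange(LEVELS) - LEVELS // 2   # codes -8 .. 7
LUT = np.outer(codes, codes)              # LUT[i, j] = codes[i] * codes[j]

def quantize(x, scale):
    """Map real values to signed 4-bit codes, shifted to LUT indices."""
    q = np.clip(np.round(x / scale), -LEVELS // 2, LEVELS // 2 - 1)
    return q.astype(int) + LEVELS // 2

def lut_dot(a, b, scale_a, scale_b):
    """Dot product computed with table lookups and additions only."""
    ia, ib = quantize(a, scale_a), quantize(b, scale_b)
    return LUT[ia, ib].sum() * scale_a * scale_b

# Usage: compare the LUT result against a floating-point dot product.
rng = np.random.default_rng(0)
a, b = rng.normal(size=64), rng.normal(size=64)
print(lut_dot(a, b, 0.3, 0.3), float(a @ b))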
Processing-in-memory (PIM) has emerged as an enabler for the energy-efficient and high-performance acceleration of deep learning (DL) workloads. Resistive random-access memory (ReRAM) is one of the most promising technologies to implement PIM. However, …
External link:
http://arxiv.org/abs/2403.19073
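The entry above builds on processing-in-memory with ReRAM. As a conceptual, hedged sketch of why ReRAM crossbars suit PIM (not this paper's architecture): a crossbar stores a weight matrix as cell conductances, inputs arrive as word-line voltages, and the bit-line currents realize a matrix-vector product in the analog domain via Ohm's and Kirchhoff's laws. The conductance range and the linear weight-to-conductance mapping below are illustrative assumptions.

import numpy as np

def crossbar_mvm(weights, inputs, g_min=1e-6, g_max=1e-4):
    """Emulate an idealized ReRAM crossbar: map weights to conductances in
    [g_min, g_max] siemens, apply inputs as voltages, read bit-line currents."""
    w_min, w_max = weights.min(), weights.max()
    scale = (g_max - g_min) / (w_max - w_min)
    G = g_min + (weights - w_min) * scale       # conductance matrix
    I = G @ inputs                              # summed bit-line currents
    offset = g_min - w_min * scale              # undo the affine mapping
    return (I - offset * inputs.sum()) / scale

rng = np.random.default_rng(1)
W, x = rng.normal(size=(4, 8)), rng.normal(size=8)
print(np.allclose(crossbar_mvm(W, x), W @ x))   # True: same product, analog-style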
Author:
Dhingra, Pratyush, Ogbogu, Chukwufumnanya, Joardar, Biresh Kumar, Doppa, Janardhan Rao, Kalyanaraman, Ananth, Pande, Partha Pratim
Resistive random-access memory (ReRAM)-based processing-in-memory (PIM) architecture is an attractive solution for training Graph Neural Networks (GNNs) on edge platforms. However, the immature fabrication process and limited write endurance of ReRAM …
External link:
http://arxiv.org/abs/2401.10522
Transformers have revolutionized deep learning and generative modeling, enabling unprecedented advancements in natural language processing tasks. However, the size of transformer models is increasing continuously, driven by enhanced capabilities across …
External link:
http://arxiv.org/abs/2312.11750
Author:
Wu, Xueying, Hanson, Edward, Wang, Nansu, Zheng, Qilin, Yang, Xiaoxuan, Yang, Huanrui, Li, Shiyu, Cheng, Feng, Pande, Partha Pratim, Doppa, Janardhan Rao, Chakrabarty, Krishnendu, Li, Hai
Resistive random access memory (ReRAM)-based processing-in-memory (PIM) architectures have demonstrated great potential to accelerate Deep Neural Network (DNN) training/inference. However, the computational accuracy of analog PIM is compromised due to …
External link:
http://arxiv.org/abs/2310.12182
Author:
Joardar, Biresh Kumar, Doppa, Janardhan Rao, Li, Hai, Chakrabarty, Krishnendu, Pande, Partha Pratim
Training machine learning (ML) models at the edge (on-chip training on end user devices) can address many pressing challenges including data privacy/security, increase the accessibility of ML applications to different parts of the world by reducing …
External link:
http://arxiv.org/abs/2111.09272
Author:
Yang, Xiaoxuan, Belakaria, Syrine, Joardar, Biresh Kumar, Yang, Huanrui, Doppa, Janardhan Rao, Pande, Partha Pratim, Chakrabarty, Krishnendu, Li, Hai
Resistive random-access memory (ReRAM) is a promising technology for designing hardware accelerators for deep neural network (DNN) inferencing. However, stochastic noise in ReRAM crossbars can degrade the DNN inferencing accuracy. We propose the design …
External link:
http://arxiv.org/abs/2109.05437
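The entry above concerns stochastic noise in ReRAM crossbars degrading DNN inference accuracy. As a minimal, hedged sketch of how such noise is often modeled (not the design the paper proposes): perturb the stored weights and measure how far a layer's output drifts. The multiplicative Gaussian noise model, the layer size, and the noise magnitudes are assumptions.

import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(256, 784))          # weights of one dense layer
x = rng.normal(size=784)                 # one input activation vector
clean = W @ x                            # noise-free layer output

for sigma in (0.01, 0.05, 0.10):         # assumed relative noise levels
    noisy = (W + sigma * np.abs(W) * rng.normal(size=W.shape)) @ x
    rel_err = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
    print(f"noise sigma={sigma:.2f} -> relative output error {rel_err:.3f}")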
Author:
Deshwal, Aryan, Belakaria, Syrine, Bhat, Ganapati, Doppa, Janardhan Rao, Pande, Partha Pratim
Mobile system-on-chips (SoCs) are growing in their complexity and heterogeneity (e.g., Arm's Big-Little architecture) to meet the needs of emerging applications, including games and artificial intelligence. This makes it very challenging to optimally …
External link:
http://arxiv.org/abs/2105.09282