Showing 1 - 10 of 260 for search: '"Lu, Wei D."'
Decoder-only Transformer models such as GPT have demonstrated exceptional performance in text generation, by autoregressively predicting the next token. However, the efficacy of running GPT on current hardware systems is bounded by low compute-to-mem…
External link:
http://arxiv.org/abs/2310.09385
Author:
Wu, Yuting, Wang, Qiwen, Wang, Ziyu, Wang, Xinxin, Ayyagari, Buvna, Krishnan, Siddarth, Chudzik, Michael, Lu, Wei D.
Published in:
Adv. Mater. 35 (2023) 2305465
The need for deep neural network (DNN) models with higher performance and better functionality leads to the proliferation of very large models. Model training, however, requires intensive computation time and energy. Memristor-based compute-in-memory…
External link:
http://arxiv.org/abs/2305.14547
Author:
Wang, Ziyu, Wu, Yuting, Park, Yongmo, Yoo, Sangmin, Wang, Xinxin, Eshraghian, Jason K., Lu, Wei D.
Analog compute-in-memory (CIM) systems are promising for deep neural network (DNN) inference acceleration due to their energy efficiency and high throughput. However, as the use of DNNs expands, protecting user input privacy has become increasingly i…
External link:
http://arxiv.org/abs/2304.11056
Event-based cameras are inspired by the sparse and asynchronous spike representation of the biological visual system. However, processing the event data requires either using expensive feature descriptors to transform spikes into frames, or using spi…
External link:
http://arxiv.org/abs/2303.10770
Author:
Sun, Pao-Sheng Vincent, Titterton, Alexander, Gopiani, Anjlee, Santos, Tim, Basu, Arindam, Lu, Wei D., Eshraghian, Jason K.
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method f…
External link:
http://arxiv.org/abs/2211.10725
Published in:
IEEE Transactions on Emerging Topics in Computing (2023)
In-memory computing (IMC) systems have great potential for accelerating data-intensive tasks such as deep neural networks (DNNs). As DNN models are generally highly proprietary, the neural network architectures become valuable targets for attacks. In…
External link:
http://arxiv.org/abs/2209.02792
We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs). Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes. These spikes emitted by memristive dy…
External link:
http://arxiv.org/abs/2206.12992
Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms. However, these networks face challenges when trained using error backpropagation, due to the absence…
External link:
http://arxiv.org/abs/2202.07221
Author:
Eshraghian, Jason K., Lu, Wei D.
Spiking neural networks can compensate for quantization error by encoding information either in the temporal domain, or by processing discretized quantities in hidden states of higher precision. In theory, a wide dynamic range state-space enables mul…
External link:
http://arxiv.org/abs/2201.11915
Author:
Lammie, Corey, Eshraghian, Jason K., Li, Chenqi, Amirsoleimani, Amirali, Genov, Roman, Lu, Wei D., Azghadi, Mostafa Rahimi
The impact of device and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifest as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies based on algorithmic fea…
External link:
http://arxiv.org/abs/2201.06703