ClipFormer: Key-Value Clipping of Transformers on Memristive Crossbars for Write Noise Mitigation

Author: Bhattacharjee, Abhiroop; Moitra, Abhishek; Panda, Priyadarshini
Publication year: 2024
Source: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2024
Document type: Working Paper
DOI: 10.1109/TCAD.2024.3435762
Description: Transformers have revolutionized real-world applications ranging from natural language processing to computer vision. However, the traditional von Neumann computing paradigm faces memory and bandwidth limitations when accelerating transformers, owing to their massive model sizes. To this end, In-Memory Computing (IMC) crossbars based on Non-Volatile Memories (NVMs) have emerged as a promising solution for accelerating transformers, thanks to their ability to perform highly parallelized Matrix-Vector Multiplications (MVMs) with high energy efficiency. However, analog MVM operations in crossbars introduce non-idealities, such as stochastic read and write noise, which degrade the inference accuracy of the deployed transformers. Specifically, we find pre-trained Vision Transformers (ViTs) to be vulnerable on crossbars due to the impact of write noise on the dynamically generated Key (K) and Value (V) matrices in the attention layers, an effect not accounted for in prior studies. We therefore propose ClipFormer, a transformation applied to the K and V matrices during inference, to boost the non-ideal accuracies of pre-trained ViT models. ClipFormer requires no additional hardware or training overhead and is amenable to transformers deployed on any memristive crossbar platform. Our experiments on the ImageNet-1k dataset with pre-trained DeiT-S transformers, subjected to both standard training and variation-aware training, show >10-40% higher non-ideal accuracies in the high write noise regime when ClipFormer is applied.
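The abstract does not spell out the exact form of the K/V transformation; the snippet below is a minimal PyTorch sketch, assuming ClipFormer amounts to magnitude-clipping the dynamically generated K and V matrices before they are programmed onto the crossbar, with write noise modeled as an additive Gaussian perturbation. The function names (clip_kv, noisy_attention) and the clip_ratio/write_noise_std values are illustrative assumptions, not details taken from the paper.

```python
import torch

def clip_kv(x: torch.Tensor, ratio: float = 0.1) -> torch.Tensor:
    """Clip tensor entries to a fraction of their max magnitude.

    Hypothetical ClipFormer-style transformation: shrinking the dynamic
    range of the K/V matrices reduces the relative impact of stochastic
    write noise when they are mapped onto crossbar conductances.
    The `ratio` value is an assumption, not one from the paper.
    """
    bound = float(ratio * x.abs().max())
    return x.clamp(min=-bound, max=bound)

def noisy_attention(q, k, v, write_noise_std=0.05, clip_ratio=0.1):
    # Apply the inference-time K/V transformation before the crossbar
    # write, then simulate write noise as additive Gaussian perturbation
    # on the programmed values (a simplified noise model).
    k, v = clip_kv(k, clip_ratio), clip_kv(v, clip_ratio)
    k = k + write_noise_std * torch.randn_like(k)
    v = v + write_noise_std * torch.randn_like(v)
    attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

# Usage: q, k, v shaped (batch, heads, tokens, head_dim), as in a ViT block.
q, k, v = (torch.randn(1, 8, 196, 64) for _ in range(3))
out = noisy_attention(q, k, v)
```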
Comment: 9 pages, 10 figures, 3 tables, 1 appendix
Database: arXiv