A Cost-Efficient FPGA Implementation of Tiny Transformer Model using Neural ODE

Author: Okubo, Ikumi; Sugiura, Keisuke; Matsutani, Hiroki
Year of publication: 2024
Subject:
Document type: Working Paper
Description: The Transformer has been adopted for a wide range of tasks and shown to outperform CNNs and RNNs, but it suffers from high training cost and computational complexity. To address these issues, a recent research trend is a hybrid approach that replaces part of ResNet with an MHSA (Multi-Head Self-Attention) mechanism. In this paper, we propose a lightweight hybrid model that uses a Neural ODE (Ordinary Differential Equation) instead of ResNet as its backbone, achieving a 12.1$\times$ parameter reduction. On the STL10 dataset, the proposed model achieves 80.15% top-1 accuracy, comparable to ResNet50. The proposed model is then deployed on a modest-sized FPGA device for edge computing. To further reduce FPGA resource utilization, the model is quantized following a QAT (Quantization-Aware Training) scheme instead of PTQ (Post-Training Quantization) to suppress the accuracy loss. As a result, an extremely lightweight Transformer-based model can be implemented on resource-limited FPGAs. The weights of the feature extraction network are stored on-chip to eliminate memory transfer overhead, allowing inference to run seamlessly and therefore faster. The proposed FPGA implementation achieves a 34.01$\times$ speedup for the backbone and MHSA parts, and an overall 9.85$\times$ speedup when software pre- and post-processing are taken into account. It also achieves 7.10$\times$ higher overall energy efficiency compared to an ARM Cortex-A53 CPU.
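The parameter reduction comes from the Neural ODE backbone: instead of stacking many distinct residual blocks, a single block is applied repeatedly as the steps of an ODE solver, so its weights are shared across depth. The sketch below illustrates the idea with a plain Euler discretization in PyTorch; the layer shapes, the step count, and the omission of an explicit time input are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """Minimal sketch of a Neural ODE feature extractor (assumed details).

    One convolution f(x) is reused at every Euler step, so N steps cost
    the parameters of a single residual block instead of N blocks. This
    weight sharing is the source of the parameter reduction described in
    the abstract.
    """

    def __init__(self, channels: int, num_steps: int = 4):
        super().__init__()
        self.num_steps = num_steps  # hypothetical step count
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Explicit Euler integration of dx/dt = f(x): x <- x + h * f(x).
        # A stack of ResNet blocks corresponds to h = 1 with unshared f.
        h = 1.0 / self.num_steps
        for _ in range(self.num_steps):
            x = x + h * self.f(x)
        return x

# Usage example: one shared block replaces a stack of residual blocks.
features = ODEBlock(channels=64)(torch.randn(1, 64, 24, 24))
```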
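QAT differs from PTQ in that quantization error is simulated during training, so the weights can adapt to it before deployment rather than being rounded afterward. Below is a minimal sketch of the common fake-quantization technique with a straight-through estimator; the bit width and per-tensor symmetric scaling are assumptions for illustration, not necessarily the scheme used in the paper.

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Simulate integer quantization in the forward pass (QAT-style).

    Values are rounded onto a uniform num_bits grid; a straight-through
    estimator passes gradients through the rounding so training can
    compensate for the quantization error. Bit width and per-tensor
    scaling here are illustrative assumptions.
    """
    qmax = 2 ** (num_bits - 1) - 1
    # Symmetric per-tensor scale derived from the current value range.
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    # Straight-through estimator: the forward pass sees the quantized
    # values, while the backward pass sees the identity function.
    return x + (q * scale - x).detach()

# Usage example: wrap weights (or activations) during training.
w = torch.randn(16, 16, requires_grad=True)
loss = fake_quantize(w).pow(2).sum()
loss.backward()  # gradients flow to w despite the rounding
```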
Database: arXiv