A Fast and Flexible FPGA-based Accelerator for Natural Language Processing Neural Networks
Author: Suyeon Hur, Seongmin Na, Dongup Kwon, Joonsung Kim, Andrew Boutros, Eriko Nurvitadhi, Jangwoo Kim
Year of publication: 2023
Source: ACM Transactions on Architecture and Code Optimization, 20:1-24
ISSN: 1544-3973, 1544-3566
DOI: 10.1145/3564606
Description: Deep neural networks (DNNs) have become key solutions in the natural language processing (NLP) domain. However, existing accelerators customized for narrow target models cannot support diverse NLP models, so naively running complex NLP models on them often yields only marginal performance improvements. Architects are therefore in dire need of a new accelerator that can run various NLP models while realizing its full performance potential. In this article, we propose FlexRun, an FPGA-based modular accelerator that efficiently supports diverse and complex NLP models. First, we identify key components commonly used by NLP models and implement them on top of a current state-of-the-art FPGA-based accelerator. Next, FlexRun conducts an in-depth design space exploration to find the best accelerator architecture for a target NLP model. Last, FlexRun automatically reconfigures the accelerator based on the exploration results. Our FlexRun design outperforms the current state-of-the-art FPGA-based accelerator by 1.21×–2.73× for BERT and 1.15×–1.50× for GPT2. Compared to Nvidia's V100 GPU, FlexRun achieves 2.69× higher performance on average across various BERT and GPT2 models.
Database: OpenAIRE
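The description above summarizes FlexRun's design space exploration (DSE) step: enumerate candidate accelerator configurations for the target NLP model, then reconfigure the FPGA to the best one found. As a rough illustration of how such an exploration loop is typically structured, here is a minimal Python sketch; the configuration knobs (number of processing elements, lanes per PE), the DSP budget, and the latency model are all hypothetical placeholders, not the paper's actual formulation.

```python
from itertools import product

# Minimal design space exploration sketch. The resource budget, the
# configuration knobs, and the cost model below are illustrative
# assumptions, not FlexRun's actual DSE parameters.
DSP_BUDGET = 5760   # assumed number of DSP blocks on the target FPGA
HIDDEN_DIM = 1024   # hidden dimension of the target NLP model (assumed)
SEQ_LEN = 512       # sequence length (assumed)

def dsp_cost(num_pes: int, lanes_per_pe: int) -> int:
    """Rough resource estimate: one DSP block per multiply lane."""
    return num_pes * lanes_per_pe

def cycle_estimate(num_pes: int, lanes_per_pe: int) -> float:
    """Toy latency model for one matrix-vector-heavy layer, assuming
    the multiply-accumulate work divides evenly across PEs and lanes."""
    macs = HIDDEN_DIM * HIDDEN_DIM * SEQ_LEN
    return macs / (num_pes * lanes_per_pe)

def explore():
    """Enumerate candidate (PE count, lane width) configurations and
    keep the fastest one that fits within the resource budget."""
    best = None
    for num_pes, lanes in product([8, 16, 32, 64], [8, 16, 32]):
        if dsp_cost(num_pes, lanes) > DSP_BUDGET:
            continue  # configuration does not fit on the device
        cycles = cycle_estimate(num_pes, lanes)
        if best is None or cycles < best[0]:
            best = (cycles, num_pes, lanes)
    return best

if __name__ == "__main__":
    cycles, num_pes, lanes = explore()
    print(f"best config: {num_pes} PEs x {lanes} lanes, ~{cycles:,.0f} cycles")
```

A real DSE would use calibrated per-component latency and resource models and search a far larger parameter space, but the fit-check-then-rank structure sketched here is the same.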