Scalable Multi-FPGA Acceleration for Large RNNs with Full Parallelism Levels.

Authors: Dongup Kwon, Suyeon Hur, Hamin Jang, Eriko Nurvitadhi, Jangwoo Kim
Subject:
Source: DAC: Annual ACM/IEEE Design Automation Conference; 2020, Issue 57, p1142-1147, 6p
Abstract: The increasing size of recurrent neural networks (RNNs) makes it hard to meet the growing demand for real-time AI services. For low-latency RNN serving, FPGA-based accelerators can leverage specialized architectures with optimized dataflow. However, they also suffer from severe hardware under-utilization when partitioning RNNs, and thus fail to achieve scalable performance. In this paper, we identify the performance bottlenecks of existing RNN partitioning strategies. We then propose a novel RNN partitioning strategy to achieve scalable multi-FPGA acceleration for large RNNs. First, we introduce three parallelism levels and exploit them by partitioning weight matrices, matrix/vector operations, and layers. Second, we examine the performance impact of collective communications and software pipelining to derive more accurate and optimal distribution results. We prototyped an FPGA-based acceleration system using multiple Intel high-end FPGAs, and our partitioning scheme allows up to 2.4x faster inference of modern RNN workloads than conventional partitioning methods. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
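The weight-matrix partitioning the abstract mentions can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a hypothetical column-wise split of a matrix-vector product `y = W @ x` across N "devices", where each device computes a partial output from its slice and the partials are summed. That final reduction stands in for the collective communication whose cost the authors analyze when deriving a distribution.

```python
# Illustrative sketch (not the paper's method): column-wise partitioning
# of y = W @ x across n_devices. Each device holds a column slice of W
# and the matching slice of x; partial outputs are summed at the end,
# which models an all-reduce-style collective across devices.

def matvec(W, x):
    """Reference dense matrix-vector product."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def partitioned_matvec(W, x, n_devices):
    cols = len(x)
    chunk = (cols + n_devices - 1) // n_devices
    partials = []
    for d in range(n_devices):
        lo, hi = d * chunk, min((d + 1) * chunk, cols)
        W_d = [row[lo:hi] for row in W]    # column slice held by device d
        x_d = x[lo:hi]                     # matching input slice
        partials.append(matvec(W_d, x_d))  # local partial output vector
    # Sum the partial vectors: the communication step a real multi-FPGA
    # system pays for, and one factor a partitioner must account for.
    return [sum(p[i] for p in partials) for i in range(len(W))]

W = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
x = [1, 0, 2, 1]
assert partitioned_matvec(W, x, n_devices=2) == matvec(W, x)  # [11, 27]
```

A row-wise split would instead concatenate partial outputs without a reduction; which axis is cheaper depends on matrix shape and interconnect, which is why the abstract treats communication cost as part of the optimization.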