A Novel Systolic Parallel Hardware Architecture for the FPGA Acceleration of Feedforward Neural Networks
Author: | Jose V. Frances-Villora, Leandro D. Medus, Manuel Bataller-Mompeán, Taras Iakymchuk, Alfredo Rosado-Muñoz |
Language: | English |
Publication year: | 2019 |
Subject: | Hardware architecture; Floating point; General Computer Science; Artificial neural network; Computer science; Clock rate; Activation function; General Engineering; Sistemes informàtics; Autoencoder; Arquitectura d'ordinadors; Computational science; neural network acceleration; FPGA implementation; deep neural networks; Multilayer perceptron; Feedforward neural networks (FFNN); Xarxes neuronals (Informàtica); General Materials Science; lcsh:Electrical engineering. Electronics. Nuclear engineering; lcsh:TK1-9971; systolic hardware architecture |
Source: | IEEE Access, Vol. 7, pp. 76084-76103 (2019). Also available from RODERIC, Repositorio Institucional de la Universitat de València |
ISSN: | 2169-3536 |
Description: | New chips for machine learning applications keep appearing; they are tuned for a specific topology and achieve efficiency through highly parallel designs, at the cost of high power consumption or large, complex devices. However, the computational demands of deep neural networks require flexible and efficient hardware architectures able to fit different applications, neural network types, and numbers of inputs, outputs, layers, and units per layer, making the migration from software to hardware easy. This paper describes a novel hardware architecture implementing any feedforward neural network (FFNN): multilayer perceptron, autoencoder, and logistic regression. The architecture admits an arbitrary number of inputs and outputs, units per layer, and layers. The hardware combines matrix algebra concepts with serial-parallel computation. It is based on a systolic ring of neural processing elements (NPE), requiring only as many NPEs as neuron units in the largest layer, regardless of the number of layers. Resource usage grows linearly with the number of NPEs. This versatile architecture serves as an accelerator in real-time applications, and its size does not affect the system clock frequency. Unlike most approaches, a single activation function block (AFB) is required for the whole FFNN. Performance, resource usage, and accuracy are evaluated for several network topologies and activation functions. The architecture reaches a 550 MHz clock speed in a Virtex-7 FPGA. The proposed implementation uses 18-bit fixed point, achieving classification performance similar to a floating-point approach. A reduced weight bit size does not affect the accuracy, allowing more weights to fit in the same memory. Different FFNNs were evaluated for the Iris and MNIST datasets and, for a real-time application of abnormal cardiac detection, a ×256 acceleration was achieved. The proposed architecture can perform up to 1980 giga operations per second (GOPS), implementing multilayer FFNNs of up to 3600 neurons per layer in a single chip. The architecture can be extended to larger-capacity devices or to multiple chips by simply extending the NPE ring. |
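The systolic-ring computation summarized in the abstract (one NPE per neuron in the layer, inputs circulating around the ring, a single shared activation function block) can be sketched behaviorally in Python. This is a minimal software model under illustrative assumptions, not the paper's RTL design: the function names, the ring scheduling, and the use of `tanh` as the activation are assumptions for the sketch.

```python
import math

def systolic_ring_layer(weights, biases, x):
    """Behavioral model of one FFNN layer on a systolic NPE ring.

    weights: one weight list per neuron (one per NPE), biases: one per
    neuron, x: input vector. Each "cycle" rotates the inputs around the
    ring; every NPE multiply-accumulates one weight-input product per
    cycle, so after len(x) cycles each NPE holds a full dot product.
    """
    n_npe = len(weights)           # as many NPEs as neurons in this layer
    n_in = len(x)
    acc = [0.0] * n_npe            # one accumulator per NPE
    for cycle in range(n_in):      # one ring rotation step per cycle
        for npe in range(n_npe):
            # NPE `npe` sees input element (npe + cycle) mod n_in this cycle
            j = (npe + cycle) % n_in
            acc[npe] += weights[npe][j] * x[j]
    # A single activation function block, time-shared over all NPEs
    # (tanh chosen only for the sketch).
    return [math.tanh(b + a) for b, a in zip(biases, acc)]

# Tiny usage example: a 3-input, 2-neuron layer.
layer_out = systolic_ring_layer(
    weights=[[0.5, -1.0, 0.25], [1.0, 0.0, -0.5]],
    biases=[0.1, -0.2],
    x=[1.0, 2.0, 3.0],
)
```

Stacking such layers needs no extra NPEs beyond the widest layer: each layer's outputs are fed back into the same ring, which is the property the abstract highlights.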
Database: | OpenAIRE |