Systolic-based pyramidal neuron accelerator blocks for convolutional neural network

Author: Hossam O. Ahmed, Mohamed Dessouky, Maged Ghoneima
Year of publication: 2019
Subject:
Source: Microelectronics Journal. 89:16-22
ISSN: 0026-2692
DOI: 10.1016/j.mejo.2019.04.017
Description: The dramatic evolution of Deep Learning (DL) algorithms requires altering the silicon architecture fabric of conventional parallel processing units so that the enormous volumes of feature data can be accelerated more efficiently while achieving reasonably low power consumption, especially for Convolutional Neural Networks (CNNs). In this paper, three proposed Pyramidal Neuron Accelerator Architecture (PNAA) units are designed and optimized to accelerate the convolutional layers of CNNs. The three proposed PNAA units are intended to replace the conventional generic embedded Digital Signal Processing (DSP) blocks in the silicon architecture fabric of FPGA chips, which are responsible for the dot-product operations. As a proof of concept, the three proposed PNAA units implement the intensively used neuron operations for the most common kernel filter dimensions in CNN systems. The proposed PNAA units were synthesized in TSMC 130 nm technology using the Synopsys Design Compiler. Front-end analysis across different characteristic PVT corners showed that the proposed PNAA units can reach a maximum computational speed of 20.9 Giga Operations per Second (GOPS) at a clock frequency of 409.84 MHz with a predicted power consumption of 58.729 mW.
Database: OpenAIRE
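
The abstract above describes PNAA blocks that accelerate the kernel dot-product ("neuron") operation of a CNN convolutional layer. The following is a minimal behavioural sketch of that operation in C, not the authors' hardware design: the kernel size K = 3, the integer data types, and the function names are illustrative assumptions. It also includes a back-of-envelope check relating the quoted 20.9 GOPS throughput to the quoted 409.84 MHz clock.

```c
/*
 * Illustrative software model of the kernel dot-product ("neuron") operation
 * that a PNAA-style accelerator block would compute. Behavioural sketch only;
 * the kernel size K and the data types are assumptions, not taken from the paper.
 */
#include <stdio.h>

#define K 3  /* assumed kernel dimension (e.g. a 3x3 filter window) */

/* Multiply-accumulate over one KxK window: K*K multiplies plus K*K additions. */
static int dot_kxk(const int window[K][K], const int kernel[K][K])
{
    int acc = 0;
    for (int r = 0; r < K; r++)
        for (int c = 0; c < K; c++)
            acc += window[r][c] * kernel[r][c];  /* one MAC per kernel weight */
    return acc;
}

int main(void)
{
    const int window[K][K] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    const int kernel[K][K] = { {0, 1, 0}, {1, -4, 1}, {0, 1, 0} };

    printf("dot product = %d\n", dot_kxk(window, kernel));

    /* Back-of-envelope check of the abstract's figures:
       operations per clock cycle = throughput / clock frequency. */
    double gops = 20.9e9;    /* 20.9 Giga operations per second (from abstract) */
    double fclk = 409.84e6;  /* 409.84 MHz clock (from abstract) */
    printf("ops per cycle ~= %.1f\n", gops / fclk);  /* roughly 51 ops/cycle */
    return 0;
}
```

The ratio of the two quoted figures works out to roughly 51 operations per clock cycle in aggregate; the abstract does not state how those operations are distributed across the three PNAA units or kernel sizes, so the sketch above should be read only as an illustration of the dot-product workload being accelerated.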