Code Acceleration Using Memristor-Based Approximate Matrix Multiplier: Application to Convolutional Neural Networks
Author: | Mohsen Nourazar, Farshad Merrikh-Bayat, Vahid Rashtchi, Ali Azarpeyvand |
---|---|
Year: | 2018 |
Subject: | Speedup, Pipeline (computing), Memristor, Convolutional neural network, Computational science, Hardware architecture, Artificial neural network, Deep learning, Multiplier, Multiplication, Artificial intelligence, Software, MNIST database |
Source: | IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 26:2684-2695 |
ISSN: | 1557-9999, 1063-8210 |
Description: | In this paper, we demonstrate the feasibility of building a memristor-based approximate accelerator that operates in cooperation with general-purpose x86 processors. First, an integrated full-system simulator is developed for simultaneous simulation of any multi-crossbar architecture as an accelerator for x86 processors; it couples the cycle-accurate MARSS-x86 processor simulator with the Ngspice mixed-level/mixed-signal circuit simulator. Then, a novel mixed-signal memristor-based architecture is presented for multiplying floating-point signed complex numbers. The presented multiplier is extended to accelerate convolutional neural networks and is finally integrated tightly with the pipeline of a generic x86 processor. To validate the accelerator, it is first used to multiply matrices of varying size and distribution. It is then used to accelerate tiny-dnn, an open-source C++ implementation of deep-learning neural networks. The memristor-based accelerator provides more than 100× speedup and energy saving for a 64×64 matrix-matrix multiplication, with an accuracy of 90%. Using the accelerated tiny-dnn for MNIST classification, more than 10× speedup and energy saving are achieved, along with 95.51% pattern recognition accuracy. |
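The principle behind such an accelerator can be sketched numerically: a memristor crossbar computes a matrix-vector product in the analog domain (Ohm's law per cell, Kirchhoff current summation per column), trading a bounded accuracy loss for speed and energy. The sketch below is illustrative only; the noise model, the 5% variation level, and the `crossbar_matvec` name are assumptions for demonstration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_matvec(G, v, noise=0.05, rng=rng):
    """Approximate analog matrix-vector product on a crossbar.

    Each cell conducts I = G[i, j] * v[i] (Ohm's law); column wires sum
    the currents (Kirchhoff's current law), yielding v @ G. Device
    variation is modeled as multiplicative Gaussian noise on G.
    """
    G_actual = G * (1.0 + noise * rng.standard_normal(G.shape))
    return v @ G_actual

# Hypothetical 64x64 weight matrix (conductances) and input vector (voltages).
W = rng.uniform(0.1, 1.0, size=(64, 64))
x = rng.uniform(0.0, 1.0, size=64)

exact = x @ W
approx = crossbar_matvec(W, x)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

Because each output is a sum of 64 independently perturbed terms, the per-element variation averages out and the relative error of the result stays well below the per-device noise level, which is the intuition behind accepting approximate analog hardware for neural-network workloads.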
Database: | OpenAIRE |
External link: |