Minimizing Power for Neural Network Training with Logarithm-Approximate Floating-Point Multiplier

Authors: TaiYu Cheng, Jaehoon Yu, Masanori Hashimoto
Year of publication: 2019
Subject:
Source: PATMOS
DOI: 10.1109/patmos.2019.8862162
Description: This paper proposes adopting a logarithm-approximate multiplier (LAM) for multiply-accumulate (MAC) computation in a neural network (NN) training engine, where LAM approximates a floating-point multiplication as an addition, resulting in smaller delay, fewer gates, and lower power consumption. Our implementation of an NN training engine for a 2-D classification dataset achieves a 10% speed-up and 2.5X and 2.3X efficiency improvements in power and area, respectively. LAM is also highly compatible with conventional bit-width scaling (BWS). When BWS is applied together with LAM on four test datasets, more than 5.2X power-efficiency improvement is achievable with only 1% accuracy degradation, of which 2.3X originates from LAM.
Database: OpenAIRE
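
The core idea summarized in the description (approximating a floating-point multiplication as an addition) follows the well-known Mitchell-style logarithm approximation: the integer bit pattern of a positive IEEE-754 float is roughly proportional to the base-2 logarithm of its value, so adding two bit patterns and subtracting one exponent bias approximates the product. The sketch below is only an illustration of that general trick, not the paper's hardware implementation; the function name `lam_multiply` and the software (rather than gate-level) formulation are assumptions for clarity.

```python
import struct

# Exponent bias of float32 (127), pre-shifted into the exponent bit field.
FLOAT32_BIAS = 127 << 23


def float_to_bits(x: float) -> int:
    """Reinterpret a float32 value as its 32-bit integer pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]


def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit integer pattern as a float32 value."""
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]


def lam_multiply(a: float, b: float) -> float:
    """Approximate a * b for positive float32 inputs (illustrative sketch).

    Because the bit pattern of a positive float32 approximates a scaled
    log2 of its value, adding the two patterns and removing one bias
    approximates the product using only integer addition.
    """
    return bits_to_float(float_to_bits(a) + float_to_bits(b) - FLOAT32_BIAS)


if __name__ == "__main__":
    for a, b in [(3.0, 5.0), (0.75, 1.5), (2.0, 2.0)]:
        approx = lam_multiply(a, b)
        exact = a * b
        print(f"{a} * {b}: exact={exact:.4f}, LAM={approx:.4f}, "
              f"rel. error={(approx - exact) / exact:+.2%}")
```

Running this shows the characteristic behavior of the approximation: products of exact powers of two are computed exactly, while other operand pairs incur a bounded underestimate (worst case about 11%), which NN training can typically tolerate, consistent with the small accuracy degradation reported in the abstract.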