Minimizing Power for Neural Network Training with Logarithm-Approximate Floating-Point Multiplier
Author: | TaiYu Cheng, Jaehoon Yu, Masanori Hashimoto |
---|---|
Year of publication: | 2019 |
Subject: | Logarithm, Artificial neural network, Computer science, Dynamic range, Computation, Floating point multiplier, Multiplier (economics), Electrical efficiency, Scaling, Algorithm |
Source: | PATMOS |
DOI: | 10.1109/patmos.2019.8862162 |
Description: | This paper proposes adopting a logarithm-approximate multiplier (LAM) for the multiply-accumulate (MAC) computation in a neural network (NN) training engine, where LAM approximates a floating-point multiplication as an addition (sketched below), resulting in smaller delay, fewer gates, and lower power consumption. Our implementation of the NN training engine for a 2-D classification dataset achieves a 10% speed-up together with 2.5X and 2.3X efficiency improvements in power and area, respectively. LAM is also highly compatible with conventional bit-width scaling (BWS). When BWS is applied with LAM on four test datasets, a power-efficiency improvement of more than 5.2X is achievable with only 1% accuracy degradation, of which 2.3X originates from LAM. |
Database: | OpenAIRE |
External link: |
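
As a rough illustration of the approximation named in the description, the sketch below shows a Mitchell-style logarithm-approximate multiplication of two single-precision floats, where the multiply is replaced by an integer addition of the raw bit patterns. This is not the paper's hardware design; the C language, the function name `lam_mul`, and the restriction to positive normal operands are assumptions made for this example.

```c
/* Illustrative sketch only: logarithm-approximate (Mitchell-style)
 * floating-point multiplication. Assumes positive, normal operands. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static float lam_mul(float a, float b)
{
    uint32_t ia, ib, ir;
    memcpy(&ia, &a, sizeof ia);   /* reinterpret the float bits as an integer */
    memcpy(&ib, &b, sizeof ib);

    /* For a positive normal float, the bit pattern is roughly
     * 2^23 * (127 + log2(x)). Adding the two patterns therefore adds the
     * approximate logarithms; subtracting 0x3F800000 (the pattern of 1.0f)
     * removes the double-counted exponent bias. */
    ir = ia + ib - 0x3F800000u;

    float r;
    memcpy(&r, &ir, sizeof r);    /* reinterpret the sum back as a float */
    return r;
}

int main(void)
{
    float a = 1.7f, b = 2.3f;
    printf("exact  : %f\n", a * b);        /* 3.91 */
    printf("approx : %f\n", lam_mul(a, b)); /* about 3.70, within Mitchell's error bound */
    return 0;
}
```

In hardware terms, the analogous datapath replaces the mantissa multiplier with a fixed-point adder on the exponent/mantissa fields, which is where the delay, gate-count, and power savings reported above come from.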