Flexpoint: Predictive Numerics for Deep Learning
Author: Tristan Webb, Valentina Popescu, Evren Tumer, Xin Wang, Marcel Nassar
Year of publication: 2018
Subject: Artificial neural network; Deep learning; Convolutional neural network; Data type; Prediction algorithms; Significand; Computer engineering; Artificial intelligence
Source: ARITH
Description: Deep learning has been undergoing rapid growth in recent years thanks to its state-of-the-art performance across a wide range of real-world applications. Traditionally, neural networks were trained in the IEEE-754 binary64 or binary32 format, a common practice in general scientific computing. However, the unique computational requirements of deep neural network training workloads allow for much more efficient and inexpensive alternatives, unleashing a new wave of numerical innovations powering specialized computing hardware. We previously presented Flexpoint, a blocked fixed-point data type combined with a novel predictive exponent management algorithm designed to support training of deep networks without modifications, aiming at a seamless replacement of the binary32 format widely used in practice today. We showed that Flexpoint with a 16-bit mantissa and a 5-bit shared exponent (flex16+5) achieved numerical parity with binary32 in training a number of convolutional neural networks. In the current paper we review the continuing trend of predictive numerics enhancing deep neural network training in specialized computing devices such as the Intel® Nervana™ Neural Network Processor.
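As a rough illustration of the blocked fixed-point idea summarized in the abstract, the sketch below (our own assumption, not the authors' implementation) stores a tensor as 16-bit integer mantissas that share a single exponent, with the exponent chosen here from the tensor's current maximum magnitude; the predictive exponent management described in the paper instead anticipates the shared exponent rather than recomputing it from freshly produced values.

```python
import numpy as np

MANTISSA_BITS = 16  # bits per mantissa entry (two's complement int16)

def to_flex16(tensor):
    """Quantize a float tensor to int16 mantissas plus one shared exponent.

    Illustrative sketch of a flex16+5-style format; the shared exponent is
    derived from the data itself rather than predicted as in the paper.
    """
    max_abs = float(np.max(np.abs(tensor)))
    if max_abs == 0.0:
        return np.zeros(tensor.shape, dtype=np.int16), 0
    # Pick the shared exponent so the largest magnitude fits in 15 mantissa bits.
    exp = int(np.floor(np.log2(max_abs))) + 1 - (MANTISSA_BITS - 1)
    mant = np.clip(np.round(tensor / 2.0 ** exp),
                   -(2 ** (MANTISSA_BITS - 1)),
                   2 ** (MANTISSA_BITS - 1) - 1).astype(np.int16)
    return mant, exp

def from_flex16(mant, exp):
    """Reconstruct float values from the shared-exponent representation."""
    return mant.astype(np.float64) * 2.0 ** exp

x = np.random.randn(4, 4).astype(np.float32)
m, e = to_flex16(x)
print("max quantization error:", np.max(np.abs(from_flex16(m, e) - x)))
```

Because every entry of the tensor reuses the same exponent, multiply-accumulate operations reduce to integer arithmetic on the mantissas, which is the efficiency argument the abstract makes for specialized training hardware.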
Database: OpenAIRE
External link: