A Scalable Multi-TeraOPS Deep Learning Processor Core for AI Training and Inference

Authors: Shih-Hsien Lo, Brian W. Curran, Jinwook Oh, Howard M. Haynie, Vijayalakshmi Srinivasan, Leland Chang, Fanchieh Yee, Tina Babinsky, Joel Abraham Silberman, George D. Gristede, Matthew M. Ziegler, Gary W. Maier, Bruce M. Fleischer, Michael R. Scheuermann, Nianzheng Cao, Ankur Agrawal, Ching Zhou, Chia-Yu Chen, Silvia Melitta Mueller, Jungwook Choi, Naigang Wang, Kailash Gopalakrishnan, Thomas W. Fox, Sunil Shukla, Swagath Venkataramani, Michael J. Klaiber, Christos Vezyrtzis, Pierce Chuang, Dongsoo Lee, Michael A. Guillorn, Pong-Fei Lu
Year of publication: 2018
Subject:
Source: VLSI Circuits
Description: A multi-TOPS AI core is presented for acceleration of deep learning training and inference in systems from edge devices to data centers. With a programmable architecture and custom ISA, this engine achieves >90% sustained utilization across the range of neural network topologies by employing a dataflow architecture and an on-chip scratchpad hierarchy. Compute precision is optimized at 16b floating point (fp16) for high model accuracy in training and inference, as well as 1b/2b (binary/ternary) integer for aggressive inference performance. At 1.5 GHz, the AI core prototype achieves 1.5 TFLOPS fp16, 12 TOPS ternary, or 24 TOPS binary peak performance in 14nm CMOS.
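As a sanity check on the quoted peak figures, dividing each peak throughput by the 1.5 GHz clock gives the implied operations per cycle in each precision mode; this is plain arithmetic on the numbers stated in the abstract, not a claim about the core's internal datapath organization.

```python
# Implied peak operations per cycle, from the figures quoted in the abstract.
clock_hz = 1.5e9           # 1.5 GHz prototype clock

peaks_ops_per_s = {        # peak throughput per precision mode
    "fp16":    1.5e12,     # 1.5 TFLOPS (16b floating point)
    "ternary": 12e12,      # 12 TOPS (2b integer)
    "binary":  24e12,      # 24 TOPS (1b integer)
}

# ops/cycle = (ops/s) / (cycles/s)
ops_per_cycle = {mode: rate / clock_hz for mode, rate in peaks_ops_per_s.items()}
print(ops_per_cycle)  # {'fp16': 1000.0, 'ternary': 8000.0, 'binary': 16000.0}
```

Note that ternary and binary modes deliver 8x and 16x the fp16 peak, respectively, consistent with trading precision for inference throughput.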
Database: OpenAIRE