TEA-DNN: the Quest for Time-Energy-Accuracy Co-optimized Deep Neural Networks
Author: | Jie Lin, Chuan-Sheng Foo, Arthur Herbout, Lile Cai, Anne-Maelle Barneche, Vijay Chandrasekhar, Mohamed M. Sabry Aly |
---|---|
Year of publication: | 2018 |
Subject: |
FOS: Computer and information sciences; Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE); Network architecture; Deep learning; Convolutional neural network; Multi-objective optimization; Energy consumption; Bandwidth (signal processing); Energy (signal processing); Process (computing); Computer engineering; Artificial intelligence |
Source: | ISLPED |
DOI: | 10.48550/arxiv.1811.12065 |
Description: | Embedded deep learning platforms have seen two simultaneous improvements. First, the accuracy of convolutional neural networks (CNNs) has been significantly improved through the use of automated neural-architecture search (NAS) algorithms to determine CNN structure. Second, there has been increasing interest in developing hardware accelerators for CNNs that provide better inference performance and lower energy consumption than GPUs. Such embedded deep learning platforms differ in the amount of compute resources and memory-access bandwidth, which affects the performance and energy consumption of CNNs. It is therefore critical to consider the available hardware resources in the network-architecture search. To this end, we introduce TEA-DNN, a NAS algorithm targeting multi-objective optimization of execution time, energy consumption, and classification accuracy of CNN workloads on embedded architectures. TEA-DNN leverages energy and execution-time measurements on embedded hardware when exploring the Pareto-optimal curves across accuracy, execution time, and energy consumption, and does not require additional effort to model the underlying hardware. We apply TEA-DNN to image classification on actual embedded platforms (NVIDIA Jetson TX2 and Intel Movidius Neural Compute Stick). We highlight the Pareto-optimal operating points, which emphasize the necessity of explicitly considering hardware characteristics in the search process. To the best of our knowledge, this is the most comprehensive study of Pareto-optimal models across a range of hardware platforms using actual measurements on hardware to obtain objective values. Comment: Accepted by ISLPED 2019 |
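The abstract's core idea, filtering candidate CNNs by Pareto optimality over measured execution time, energy, and accuracy, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, data structure, and sample measurements are hypothetical, and real use would plug in on-device measurements for each candidate architecture.

```python
# Hypothetical sketch of Pareto-front filtering over three objectives:
# execution time (minimize), energy (minimize), accuracy (maximize).
# Candidate names and numbers below are illustrative, not from the paper.

def dominates(a, b):
    """True if model `a` dominates `b`: no worse on every objective
    and strictly better on at least one."""
    no_worse = (a["time"] <= b["time"] and
                a["energy"] <= b["energy"] and
                a["acc"] >= b["acc"])
    strictly_better = (a["time"] < b["time"] or
                       a["energy"] < b["energy"] or
                       a["acc"] > b["acc"])
    return no_worse and strictly_better

def pareto_front(models):
    """Keep only models that no other model dominates."""
    return [m for m in models if not any(dominates(o, m) for o in models)]

# Illustrative on-device measurements: time in ms, energy in mJ, top-1 accuracy.
candidates = [
    {"name": "A", "time": 12.0, "energy": 30.0, "acc": 0.72},
    {"name": "B", "time": 15.0, "energy": 35.0, "acc": 0.70},  # dominated by A
    {"name": "C", "time": 20.0, "energy": 25.0, "acc": 0.75},
]
front = pareto_front(candidates)  # A and C survive; B is dominated
```

Because the objectives are obtained by direct measurement on each target platform (as the abstract notes), the resulting front differs per device without requiring an analytical hardware model.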
Database: | OpenAIRE |
External link: |