Optimizing Deep Learning Inference on Embedded Systems Through Adaptive Model Selection
Authors: | Yehia Elkhatib, Ben Taylor, Zheng Wang, Vicent Sanz Marco |
Year: | 2019 |
Subject: |
FOS: Computer and information sciences; Machine Learning (cs.LG); Performance (cs.PF); Distributed, Parallel, and Cluster Computing (cs.DC); Deep learning; Model selection; Inference; Machine translation; Image classification; Recurrent neural network; Embedded system; Hardware and Architecture; Software; Artificial intelligence; Data compression |
DOI: | 10.48550/arxiv.1911.04946 |
Description: | Deep neural networks (DNNs) are becoming a key enabling technology for many application domains. However, on-device inference on battery-powered, resource-constrained embedded systems is often infeasible due to the prohibitively long inference times and resource requirements of many DNNs. Offloading computation to the cloud is often unacceptable due to privacy concerns, high latency, or a lack of connectivity. While compression algorithms often succeed in reducing inference times, they come at the cost of reduced accuracy. This paper presents a new, alternative approach to enable efficient execution of DNNs on embedded devices. Our approach dynamically determines which DNN to use for a given input, by considering the desired accuracy and inference time. It employs machine learning to develop a low-cost predictive model that quickly selects a pre-trained DNN to use for a given input and optimization constraint. We achieve this by first training a predictive model offline, and then using the learned model to select a DNN for new, unseen inputs. We apply our approach to two representative DNN domains: image classification and machine translation. We evaluate our approach on a Jetson TX2 embedded deep learning platform and consider a range of influential DNN models, including convolutional and recurrent neural networks. For image classification, we achieve a 1.8x reduction in inference time with a 7.52% improvement in accuracy over the most capable single DNN model. For machine translation, we achieve a 1.34x reduction in inference time over the most capable single model, with little impact on the quality of translation. Comment: Accepted to be published at ACM TECS. arXiv admin note: substantial text overlap with arXiv:1805.04252 |
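The selection scheme described in the abstract — an inexpensive predictive model trained offline that picks, per input, which pre-trained DNN to run — can be sketched as follows. This is a minimal illustration only: the feature names, the 1-nearest-neighbour premodel, and the candidate model names are assumptions for the sketch, not the paper's actual implementation.

```python
# Sketch of adaptive model selection: a cheap "premodel" chooses, per
# input, the least expensive DNN expected to meet the accuracy goal, so
# only that one DNN is ever executed on the device.
import math

# Candidate DNNs, ordered from cheapest to most capable (hypothetical names).
MODELS = ["mobilenet", "inception", "resnet152"]

class Premodel:
    """1-nearest-neighbour classifier over cheap input features."""
    def __init__(self):
        self.examples = []  # list of (feature_vector, best_model_label)

    def fit(self, features, labels):
        self.examples = list(zip(features, labels))

    def predict(self, x):
        # Return the label of the closest stored training example.
        return min(self.examples, key=lambda e: math.dist(e[0], x))[1]

def extract_features(image_stats):
    # Cheap proxies for input "difficulty" (illustrative features only).
    return (image_stats["edges"], image_stats["brightness"])

# Offline phase: label each training input with the cheapest DNN that
# handles it correctly, then fit the premodel on those labels.
train_feats = [(0.1, 0.8), (0.2, 0.7), (0.9, 0.3), (0.8, 0.2)]
train_labels = ["mobilenet", "mobilenet", "resnet152", "resnet152"]
premodel = Premodel()
premodel.fit(train_feats, train_labels)

# Online phase: for an unseen input, run only the DNN the premodel picks.
choice = premodel.predict(extract_features({"edges": 0.15, "brightness": 0.75}))
print(choice)  # prints "mobilenet": an "easy" input gets the cheap model
```

The key property is that the premodel itself must be far cheaper than any candidate DNN, so its prediction cost does not erode the inference-time savings.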
Database: | OpenAIRE |
External link: |