Popis: |
In this paper, we explore the impact of GPU/CPU frequency scaling on the energy consumption and AI performance of a state-of-the-art embedded AI device. We use the Nvidia Jetson TX2 as our experimental platform because it readily supports GPU/CPU scaling and modification of the AI framework and libraries. Through extensive experiments on various ML (Machine Learning) scenarios, i.e., face recognition and object detection, we demonstrate a clear tradeoff between GPU/CPU scaling, energy consumption (of the GPU/CPU as well as the entire device), and training/inference speed. Finally, we envision future work aiming to optimize processing and networking resources simultaneously in an extended scenario where multiple embedded AI devices cooperate on a common AI application.