Effect of Architecture and Inference Parameters of Artificial Neural Network Models in the Detection Task on Energy Demand.

Author: Tomiło, Paweł; Oleszczuk, Piotr; Laskowska, Agnieszka; Wilczewska, Weronika; Gnapowski, Ernest
Source: Energies (1996-1073); Nov 2024, Vol. 17, Issue 21, p5417, 18p
Abstract: Artificial neural network models for the detection task are used in many fields and find a variety of applications. Models of this kind require substantial computational resources and therefore considerable energy expenditure. The growing number of parameters, the complexity of architectures, and the need to process large data sets significantly increase energy consumption, which is becoming a key sustainability challenge. Optimization of computing and the development of energy-efficient hardware technologies are essential to reduce the energy footprint of these models. This article examines the effect of the type of model, as well as its parameters, on energy consumption during inference. For this purpose, the sensors built into the graphics card were used, and software was developed to measure the graphics card's energy demand for different YOLO architectures (v8, v9, v10) and for different batch and model sizes. The study showed that energy demand does not depend linearly on batch size: beyond a certain batch size, the energy demand begins to decrease. The only exception is the smallest (n/t) model sizes, for which this dependence does not occur. Optimal utilization of computing power, in terms of the number of processed images, occurs for the studied models at the maximum batch size tested. In addition, tests were conducted on an embedded device. [ABSTRACT FROM AUTHOR]
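
Note: The abstract describes measuring GPU energy demand via the card's built-in sensors during batched YOLO inference. The authors' measurement software is not reproduced in this record; the following is only a minimal illustrative sketch of that kind of measurement, assuming an NVIDIA GPU exposed through the NVML bindings (pynvml) and an Ultralytics YOLO model. Names such as model_path, images, and batch_size are hypothetical placeholders.

```python
# Illustrative sketch, not the authors' tool: sample the GPU power sensor
# while a YOLO model runs batched inference, then integrate power over time.
import time
import threading
import pynvml
from ultralytics import YOLO


def sample_power(handle, readings, stop_event, interval_s=0.05):
    """Poll the GPU's built-in power sensor (reported in milliwatts)."""
    while not stop_event.is_set():
        readings.append((time.time(), pynvml.nvmlDeviceGetPowerUsage(handle)))
        time.sleep(interval_s)


def measure_inference_energy(model_path="yolov8n.pt", images=None, batch_size=16):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    model = YOLO(model_path)

    readings, stop_event = [], threading.Event()
    sampler = threading.Thread(target=sample_power, args=(handle, readings, stop_event))
    sampler.start()
    model.predict(images, batch=batch_size, verbose=False)  # detection pass under test
    stop_event.set()
    sampler.join()
    pynvml.nvmlShutdown()

    # Trapezoidal integration of power (mW -> W) over time gives energy in joules.
    return sum(
        (t1 - t0) * (p0 + p1) / 2 / 1000.0
        for (t0, p0), (t1, p1) in zip(readings, readings[1:])
    )
```

Repeating such a measurement across model variants and batch sizes is, in outline, how the batch-size dependence summarized above could be examined.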
Database: Complementary Index