Author: |
Kumar, D. Naresh, Akilandeswari, A. |
Subject: |
|
Source: |
AIP Conference Proceedings; 2024, Vol. 2853 Issue 1, p1-8, 8p |
Abstract: |
The purpose of this research is to develop a state-of-the-art YOLO v4 (You Only Look Once v4) neural network model for object detection and to evaluate it against TensorFlow SSD MobileNet in terms of precision and latency. Twenty representative photographs were drawn from a wide range of categories and labels. A G-power calculation was used to determine the sample size required for this research, with the maximum allowed margin of error set at 0.5 and the minimum power of analysis set at 0.8. When predicting the locations of objects in an image, the YOLO v4 algorithm achieved 81 percent accuracy, compared with 76 percent for the SSD MobileNet algorithm; the p-value for significance was 0.781. Compared with the SSD MobileNet method, the YOLO v4 algorithm appears to be more precise. [ABSTRACT FROM AUTHOR] |
Database: |
Complementary Index |
External link: |
|