Object Detection of Road Assets Using Transformer-Based YOLOX with Feature Pyramid Decoder on Thai Highway Panorama

Author: Teerapong Panboonyuen, Sittinun Thongbai, Weerachai Wongweeranimit, Phisan Santitamnont, Kittiwan Suphan, Chaiyut Charoenphon
Language: English
Year of publication: 2021
Source: Information, Vol 13, Iss 1, p 5 (2021)
Document type: article
ISSN: 2078-2489
DOI: 10.3390/info13010005
Description: Because road assets such as kilometer stones vary widely in size, their detection remains challenging and directly affects the accuracy of object counts. Transformers have demonstrated impressive results in various natural language processing (NLP) and image processing tasks thanks to their ability to model long-range dependencies. This paper proposes a detector that exceeds the You Only Look Once (YOLO) series, with two contributions: (i) We employ a pre-training objective that recovers the original visual tokens from the image patches of road asset images. Using a pre-trained Vision Transformer (ViT) as the backbone, we then fine-tune the model weights on the downstream task by attaching task layers to the pre-trained encoder. (ii) We apply Feature Pyramid Network (FPN) decoder designs to our network so that it learns the importance of different input features instead of simply summing or concatenating them, which can cause feature mismatch and performance degradation. Our proposed method (Transformer-Based YOLOX with FPN) learns very general object representations and significantly outperforms other state-of-the-art (SOTA) detectors, including YOLOv5S, YOLOv5M, and YOLOv5L. It reaches 61.5% AP on the Thailand highway corpus, surpassing the current best practice (YOLOv5L) by 2.56% AP on the test-dev set.
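The weighted FPN fusion described in contribution (ii) can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation: the module name WeightedFPNDecoder, the channel widths, and the softmax-normalized per-level weights are illustrative assumptions showing how a learned importance weight per pyramid level can replace a plain sum or concatenation of multi-scale features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFPNDecoder(nn.Module):
    """Hypothetical decoder: fuses multi-scale backbone features with
    learned per-level weights instead of a plain sum or concatenation."""

    def __init__(self, in_channels, out_channels=256):
        super().__init__()
        # 1x1 lateral convs project each backbone level to a common width.
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # One learnable importance scalar per pyramid level.
        self.level_weights = nn.Parameter(torch.ones(len(in_channels)))

    def forward(self, features):
        # features: list of maps, ordered finest to coarsest resolution.
        laterals = [conv(f) for conv, f in zip(self.lateral, features)]
        target_size = laterals[0].shape[-2:]
        resized = [
            F.interpolate(x, size=target_size, mode="nearest") for x in laterals
        ]
        # Softmax keeps the fusion a convex combination of the levels.
        w = torch.softmax(self.level_weights, dim=0)
        return sum(wi * xi for wi, xi in zip(w, resized))

# Toy usage with three feature maps of assumed channel widths and sizes.
feats = [torch.randn(1, c, s, s) for c, s in [(192, 64), (384, 32), (768, 16)]]
fused = WeightedFPNDecoder(in_channels=[192, 384, 768])(feats)
print(fused.shape)  # torch.Size([1, 256, 64, 64])
```

Normalizing the level weights with a softmax keeps the fused map a convex combination of the input scales, so no single level can dominate early in training; this is one common way to realize the "learned importance" idea the abstract describes.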
Database: Directory of Open Access Journals