Few‐shot object detection via class encoding and multi‐target decoding

Authors: Xueqiang Guo, Hanqing Yang, Mohan Wei, Xiaotong Ye, Yu Zhang
Language: English
Year of publication: 2023
Subject:
Source: IET Cyber-systems and Robotics, Vol 5, Iss 2, Pp n/a-n/a (2023)
Document type: article
ISSN: 2631-6315
DOI: 10.1049/csy2.12088
Description: Abstract: Few‐shot object detection aims to classify and localise objects from only a few annotated samples. Although many studies have addressed this problem, the results remain unsatisfactory. Recent work has found that the class margin significantly affects the classification and representation of the targets to be detected. Most methods balance the class margin through the loss function, but such loss‐based methods yield only marginal improvements on few‐shot object detection. In this study, the authors propose a transformer‐based class encoding method to balance the class margin, which makes the model attend to the essential information in the features and thus improves its ability to recognise the samples. In addition, the authors propose a multi‐target decoding method that aggregates the RoI vectors generated from multi‐target images with multiple support vectors, which significantly improves the detector's performance on multi‐target images. Experiments on the Pascal Visual Object Classes (VOC) and Microsoft Common Objects in Context (COCO) datasets show that the proposed Few‐Shot Object Detection via Class Encoding and Multi‐Target Decoding substantially improves upon baseline detectors (average accuracy gains of up to 10.8% on VOC and 2.1% on COCO), achieving competitive performance. Overall, the authors propose a new way to regulate the class margin between support set vectors and a feature aggregation scheme for images containing multiple objects, and achieve strong results. The method is implemented on mmfewshot, and the code will be made available later.
Database: Directory of Open Access Journals
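
The abstract gives no implementation details, so the following is only a minimal sketch of the two ideas it describes, not the authors' mmfewshot code: a transformer encoder is applied across the per-class support vectors so that classes attend to one another (the class encoding used to regulate the class margin), and a transformer decoder lets the RoI vectors of a multi-target query image cross-attend to the encoded class vectors (the multi-target decoding). All module names, dimensions, and the scoring scheme below are assumptions made for illustration.

```python
# Hypothetical sketch (not the authors' implementation) of class encoding and
# multi-target decoding for few-shot object detection.
import torch
import torch.nn as nn


class ClassEncoderMultiTargetDecoder(nn.Module):
    def __init__(self, dim=256, num_heads=8, num_layers=2):
        super().__init__()
        # Class encoding: self-attention across the N support-class vectors,
        # allowing the model to adjust the margins between classes.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.class_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Multi-target decoding: each RoI vector (query) attends to all
        # encoded class vectors (keys/values) and is aggregated with them.
        dec_layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.roi_decoder = nn.TransformerDecoder(dec_layer, num_layers)

    def forward(self, support_vecs, roi_vecs):
        # support_vecs: (B, N_classes, dim) pooled support features per class
        # roi_vecs:     (B, N_rois, dim)    RoI features from a query image
        encoded_classes = self.class_encoder(support_vecs)
        decoded_rois = self.roi_decoder(tgt=roi_vecs, memory=encoded_classes)
        # Score every decoded RoI against every encoded class vector.
        logits = torch.einsum('brd,bnd->brn', decoded_rois, encoded_classes)
        return logits  # (B, N_rois, N_classes)


# Toy usage: a 2-way episode with 5 RoIs extracted from one query image.
model = ClassEncoderMultiTargetDecoder(dim=256)
support = torch.randn(1, 2, 256)   # per-class support vectors (already pooled)
rois = torch.randn(1, 5, 256)      # RoI vectors from the multi-target image
scores = model(support, rois)      # (1, 5, 2) RoI-vs-class matching scores
```

Cross-attention is used here because it lets every RoI of a multi-target image be matched against all support classes in one pass, which is the aggregation behaviour the abstract attributes to the multi-target decoder; the exact attention layout and score head in the paper may differ.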