Adapt-Net: A Unified Object Detection Framework for Mobile Augmented Reality

Author: Xiangyun Zeng, Siok Yee Tan, Mohammad Faidzul Nasrudin
Language: English
Publication year: 2024
Subject:
Source: IEEE Access, Vol 12, Pp 120788-120803 (2024)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3447043
Description: Object detection is a crucial task in mobile augmented reality (MAR), where achieving both speed and accuracy with limited computational resources is essential. However, applying object detection models to new domains or reducing the model size tends to lower their performance. To address this problem, this research introduces a unified object detection framework called Adapt-Net. This framework incorporates contrastive learning techniques for unsupervised domain adaptation, a teacher-student generative compressed model with masking, and deep mutual learning between student models, all built upon the YOLOv8 architecture. Adapt-Net’s key novelty lies in its unified framework that combines three models: two student models and one teacher model. Each model comprises a feature-extracting backbone and an adapter network. The student models’ backbones are trained with deep mutual learning and a contrastive learning loss to ensure domain-invariant feature generation. Unsupervised domain adaptation and masked generative knowledge distillation modules facilitate knowledge transfer from the teacher to the student models, enhancing their ability to generalize to unfamiliar objects. Masked generative knowledge distillation guides the student models to reconstruct the teacher’s features from a masked input in a generative manner, rather than merely imitating the output; this generative approach improves the student models’ representation capabilities. Adapt-Net thus enables the student models not only to learn domain-invariant features but also to generalize better to new objects. Extensive experiments on benchmark datasets demonstrate that the proposed approach surpasses state-of-the-art object detection methods by 6.8 mAP in detection accuracy on the Microsoft COCO dataset.
Notably, the model size remains a compact 3.2M, enabling fast inference speeds, lower computational resource consumption, and enhanced resilience to domain variations. Adapt-Net represents a promising and efficient approach to object detection that combines accuracy with efficiency.
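The abstract names two of the training signals used by the student models: a deep mutual learning term (each student matches the other's predictive distribution) and a masked generative knowledge distillation term (the student reconstructs the teacher's features from a partially masked input instead of copying the output). The following is a minimal, paper-agnostic sketch of those two loss terms; the function names, the mean-squared reconstruction objective, and the flat feature representation are assumptions for illustration — the paper operates on YOLOv8 feature maps with a learned generation network, which is not reproduced here.

```python
import math
import random

def masked_generative_kd_loss(student_feat, teacher_feat, mask_ratio=0.5, seed=0):
    """Sketch of masked generative knowledge distillation.

    Random positions of the student's feature vector are masked
    (zeroed here for simplicity), and the loss asks the student to
    reconstruct the teacher's features at every position, so the
    masked entries must be generated rather than imitated.
    """
    rng = random.Random(seed)
    masked = [s if rng.random() > mask_ratio else 0.0 for s in student_feat]
    # Mean-squared reconstruction error against the teacher's features.
    return sum((m - t) ** 2 for m, t in zip(masked, teacher_feat)) / len(teacher_feat)

def deep_mutual_learning_term(p, q):
    """Sketch of one deep mutual learning term: KL(p || q) between the
    two student models' predictive distributions (each student also
    keeps its usual supervised detection loss)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

In the full framework these terms would be summed with the supervised YOLOv8 detection loss and the contrastive domain-adaptation loss; identical student predictions drive the mutual-learning term to zero, and with no masking an exact feature match drives the distillation term to zero.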
Database: Directory of Open Access Journals