Hash Food Image Retrieval Based on Enhanced Vision Transformer

Author: CAO Pindan, MIN Weiqing, SONG Jiajun, SHENG Guorui, YANG Yancun, WANG Lili, JIANG Shuqiang
Language: English; Chinese
Publication Year: 2024
Source: Shipin Kexue, Vol 45, Iss 10, Pp 1-8 (2024)
Document Type: article
ISSN: 1002-6630
DOI: 10.7506/spkx1002-6630-20231231-270
Description: Food image retrieval, a major task in food computing, has attracted extensive attention in recent years. However, it faces two primary challenges. First, food images exhibit fine-grained characteristics: visual differences between food categories may be subtle and can often be observed only in local regions of an image. Second, food images carry rich semantic information, such as ingredients and cooking methods, whose extraction and use are crucial for improving retrieval performance. To address these issues, this paper proposes an enhanced ViT hash network (EVHNet) built on a pre-trained Vision Transformer (ViT). To handle the fine-grained nature of food images, a local feature enhancement module based on a convolutional structure was designed in EVHNet to let the network learn more representative features. To better exploit the semantic information in food images, an aggregated semantic feature module that aggregates information guided by the class token features was also designed. EVHNet was evaluated under three popular hashing frameworks for image retrieval, namely greedy hash (GreedyHash), central similarity quantization (CSQ), and deep polarized network (DPN), and compared with four mainstream network models: AlexNet, ResNet50, ViT-B_32, and ViT-B_16. Experimental results on the Food-101, Vireo Food-172, and UEC Food-256 food datasets showed that EVHNet outperformed the other models in overall retrieval accuracy.
Database: Directory of Open Access Journals
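
The description above outlines EVHNet's two additions to a ViT backbone: a convolution-based local feature enhancement module and a class-token-guided semantic aggregation module, followed by a hash head. As a rough illustration only, the PyTorch sketch below shows one way such a pipeline could be wired together. Every class name, shape, and hyperparameter here is an assumption inferred from the abstract, not the authors' implementation; in particular, the paper fine-tunes a pre-trained ViT, which this small stand-in encoder does not.

    # Minimal sketch of an EVHNet-style hashing network, based only on the
    # abstract above. All module names and hyperparameters are hypothetical.
    import torch
    import torch.nn as nn


    class LocalFeatureEnhancement(nn.Module):
        """Conv-based refinement of patch tokens (fine-grained local cues)."""

        def __init__(self, dim: int):
            super().__init__()
            # Depthwise conv over the 2-D patch grid; one plausible reading
            # of "convolutional structure" in the abstract.
            self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
            self.norm = nn.LayerNorm(dim)

        def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
            # patch_tokens: (B, N, D) with N a perfect square (patch grid).
            b, n, d = patch_tokens.shape
            s = int(n ** 0.5)
            x = patch_tokens.transpose(1, 2).reshape(b, d, s, s)
            x = self.conv(x).reshape(b, d, n).transpose(1, 2)
            return self.norm(x + patch_tokens)  # residual enhancement


    class AggregatedSemanticFeature(nn.Module):
        """Aggregate patch tokens, weighted by similarity to the class token."""

        def __init__(self, dim: int):
            super().__init__()
            self.proj = nn.Linear(dim, dim)

        def forward(self, cls_token: torch.Tensor, patch_tokens: torch.Tensor) -> torch.Tensor:
            # Attention-like weights from class token to patches, a guess at
            # "aggregating information based on class token features".
            scores = (patch_tokens @ cls_token.unsqueeze(-1)).squeeze(-1)
            attn = torch.softmax(scores / patch_tokens.shape[-1] ** 0.5, dim=1)
            agg = (attn.unsqueeze(-1) * patch_tokens).sum(dim=1)
            return self.proj(agg) + cls_token


    class EVHNetSketch(nn.Module):
        def __init__(self, dim: int = 256, num_patches: int = 49, hash_bits: int = 64):
            super().__init__()
            # Stand-in ViT encoder; the paper uses a pre-trained ViT instead.
            layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.cls = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
            self.local = LocalFeatureEnhancement(dim)
            self.semantic = AggregatedSemanticFeature(dim)
            self.hash_head = nn.Linear(dim, hash_bits)

        def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
            # patch_embeddings: (B, num_patches, dim), e.g. from a conv patchifier.
            b = patch_embeddings.size(0)
            x = torch.cat([self.cls.expand(b, -1, -1), patch_embeddings], dim=1) + self.pos
            x = self.encoder(x)
            cls_token, patches = x[:, 0], x[:, 1:]
            patches = self.local(patches)             # fine-grained enhancement
            feat = self.semantic(cls_token, patches)  # class-token-guided aggregation
            return torch.tanh(self.hash_head(feat))   # relax binary codes to (-1, 1)


    if __name__ == "__main__":
        model = EVHNetSketch()
        codes = model(torch.randn(2, 49, 256))
        binary = torch.sign(codes)  # retrieval compares sign(codes) by Hamming distance
        print(codes.shape, binary.shape)

At retrieval time, the continuous codes would be binarized with sign() and compared by Hamming distance; the three frameworks named in the abstract (GreedyHash, CSQ, DPN) differ mainly in the loss that shapes these codes, not in the backbone sketched here.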