Contrasting Dual Transformer Architectures for Multi-Modal Remote Sensing Image Retrieval

Author: Mohamad M. Al Rahhal, Mohamed Abdelkader Bencherif, Yakoub Bazi, Abdullah Alharbi, Mohamed Lamine Mekhalfi
Language: English
Year of publication: 2022
Subject:
Source: Applied Sciences, Vol 13, Iss 1, p 282 (2022)
Document type: article
ISSN: 2076-3417
DOI: 10.3390/app13010282
Description: Remote sensing technology has advanced rapidly in recent years. Driven by the deployment of quantitative and qualitative sensors and by the evolution of powerful hardware and software platforms, it powers a wide range of civilian and military applications. This in turn makes large data volumes available for applications such as monitoring climate change, yet processing, retrieving, and mining such large data remain challenging. Content-based remote sensing (RS) image retrieval approaches usually rely on a query image to retrieve relevant images from a dataset. To make the retrieval experience more flexible, cross-modal representations based on text–image pairs are gaining popularity; indeed, combining the text and image domains is regarded as one of the next frontiers in RS image retrieval. Aligning text to the content of RS images is particularly challenging, however, owing to the visual-semantic discrepancy between the language and vision worlds. In this work, we propose different architectures based on vision and language transformers for text-to-image and image-to-text retrieval. Extensive experimental results on four datasets, namely TextRS, Merced, Sydney, and RSICD, are reported and discussed.
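Note: the record itself contains no code. As a rough illustration of the kind of dual-transformer, contrastive text-image retrieval setup the title refers to, the following is a minimal sketch in PyTorch. All class names, embedding dimensions, the patch size, the vocabulary size, and the symmetric InfoNCE loss are assumptions chosen for illustration, not the authors' implementation.

    # Minimal sketch: dual transformer encoders (image + text) trained
    # contrastively, then used for text-to-image / image-to-text retrieval.
    # Hypothetical sizes and names; not the architecture from the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PatchImageEncoder(nn.Module):
        """Toy vision transformer: patchify, embed, encode, mean-pool."""
        def __init__(self, patch=16, dim=256, depth=4, heads=4):
            super().__init__()
            self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)

        def forward(self, images):                           # (B, 3, H, W)
            x = self.proj(images).flatten(2).transpose(1, 2) # (B, N, dim)
            x = self.encoder(x)
            return F.normalize(x.mean(dim=1), dim=-1)        # (B, dim)

    class TextEncoder(nn.Module):
        """Toy language transformer: embed tokens, encode, mean-pool."""
        def __init__(self, vocab=30522, dim=256, depth=4, heads=4):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)

        def forward(self, tokens):                           # (B, T) token ids
            x = self.encoder(self.embed(tokens))
            return F.normalize(x.mean(dim=1), dim=-1)        # (B, dim)

    def contrastive_loss(img_emb, txt_emb, temperature=0.07):
        """Symmetric InfoNCE over matched image-caption pairs in a batch."""
        logits = img_emb @ txt_emb.t() / temperature         # (B, B) similarities
        labels = torch.arange(logits.size(0), device=logits.device)
        return (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.t(), labels)) / 2

    # Usage: retrieval ranks cosine similarities between a query embedding
    # and the embeddings of the other modality (dummy inputs shown here).
    images = torch.randn(8, 3, 224, 224)
    tokens = torch.randint(0, 30522, (8, 32))
    img_emb = PatchImageEncoder()(images)
    txt_emb = TextEncoder()(tokens)
    loss = contrastive_loss(img_emb, txt_emb)
    text_to_image = (txt_emb @ img_emb.t()).argsort(dim=-1, descending=True)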
Database: Directory of Open Access Journals