Using scale-equivariant CNN to enhance scale robustness in feature matching.

Author: Liao, Yun; Liu, Peiyu; Wu, Xuning; Pan, Zhixuan; Zhu, Kaijun; Zhou, Hao; Liu, Junhui; Duan, Qing
Source: Visual Computer; Oct 2024, Vol. 40, Issue 10, p7307-7322, 16p
Abstract: Image matching is an important task in computer vision. The detector-free dense matching approach is an important research direction in image matching due to its high accuracy and robustness. Classical detector-free matching methods use convolutional neural networks (CNNs) to extract features and then match them. Because CNNs lack scale equivariance, these methods often match poorly when the images to be matched undergo significant scale variations; however, large scale variations are very common in practice. To solve this problem, we propose SeLFM, a method that combines scale equivariance with the global modeling capability of the Transformer. This design has two main advantages: the scale-equivariant CNN extracts scale-equivariant features, while the Transformer contributes global modeling capability. Experiments show that this modification improves the matcher's performance on image pairs with large scale variations without degrading its general matching performance. The code will be open-sourced at this link: https://github.com/LiaoYun0x0/SeLFM/tree/main [ABSTRACT FROM AUTHOR]
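To illustrate the core idea the abstract describes, here is a minimal NumPy sketch of scale-equivariant feature extraction via weight sharing across an image pyramid. This is a toy illustration only, not the authors' SeLFM implementation; all function names (`avg_pool2`, `conv2d_valid`, `scale_equivariant_features`) are hypothetical. The key property is that downscaling the input merely shifts the responses along the scale axis rather than changing them:

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling (assumes even height and width)
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def conv2d_valid(x, k):
    # naive 'valid' 2-D cross-correlation with a single shared kernel
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def scale_equivariant_features(img, kernel, levels=3):
    # Apply the SAME kernel at every level of an image pyramid.
    # Rescaling the input then permutes the scale axis of the output
    # instead of altering the responses -- a toy form of scale
    # equivariance that plain single-scale CNNs lack.
    feats, x = [], img
    for _ in range(levels):
        feats.append(conv2d_valid(x, kernel))
        x = avg_pool2(x)
    return feats

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
kernel = rng.standard_normal((3, 3))

f_orig = scale_equivariant_features(img, kernel, levels=3)
f_down = scale_equivariant_features(avg_pool2(img), kernel, levels=2)

# Features of the downscaled image equal the original's features
# shifted one step along the scale axis.
assert np.allclose(f_orig[1], f_down[0])
assert np.allclose(f_orig[2], f_down[1])
```

In practice, SeLFM pairs features like these with a Transformer, whose attention supplies the global modeling capability mentioned in the abstract; this sketch covers only the scale-equivariance half of that combination.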
Database: Complementary Index