Improving Pixel-Level Contrastive Learning by Leveraging Exogenous Depth Information
Author: Saad, Ahmed Ben; Prokopetc, Kristina; Kherroubi, Josselin; Davy, Axel; Courtois, Adrien; Facciolo, Gabriele
Year of publication: 2023
Source: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00241
Description: Self-supervised representation learning based on Contrastive Learning (CL) has received much attention in recent years, owing to the excellent results obtained on a variety of downstream tasks (classification in particular) without requiring large amounts of labeled samples. However, most reference CL algorithms (such as SimCLR and MoCo, but also BYOL and Barlow Twins) are not adapted to pixel-level downstream tasks. One existing solution, PixPro, proposes a pixel-level approach that filters pairs of positive/negative pixels taken from crops of the same image, using the distance between them in the whole image. We argue that this idea can be further enhanced by incorporating semantic information provided by exogenous data as an additional selection filter, applied at training time to improve the selection of pixel-level positive/negative samples. In this paper we focus on depth information, which can be obtained with a depth estimation network or measured from available data (stereovision, parallax motion, LiDAR, etc.). Scene depth can provide meaningful cues for distinguishing pixels that belong to different objects. We show that using this exogenous information in the contrastive loss leads to improved results, and that the learned representations better follow the shapes of objects. In addition, we introduce a multi-scale loss that alleviates the issue of finding training parameters adapted to different object sizes. We demonstrate the effectiveness of our ideas on Breakout Segmentation on Borehole Images, where we achieve an improvement of 1.9% over PixPro and nearly 5% over the supervised baseline. We further validate our technique on indoor scene segmentation with ScanNet and on outdoor scenes with CityScapes (1.6% and 1.1% improvement over PixPro, respectively). An illustrative sketch of the depth-gated pair selection follows this record.
Comment: Accepted for WACV 2023
Database: OpenAIRE
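Since the abstract describes the method only at a high level, the following is a minimal, hypothetical PyTorch sketch of how exogenous depth could gate positive-pair selection in a PixPro-style pixel-level contrastive objective. The function names, the thresholds `dist_thresh` and `depth_thresh`, the InfoNCE-style formulation, and the multi-scale wrapper are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): depth-gated positive-pair
# selection for a PixPro-style pixel-level contrastive objective.
import torch
import torch.nn.functional as F

def depth_gated_positive_mask(coords_a, coords_b, depth_a, depth_b,
                              dist_thresh=0.7, depth_thresh=0.1):
    """Boolean [Na, Nb] mask of positive pixel pairs across two views.

    coords_*: [N, 2] pixel coordinates expressed in the frame of the
              original image (the spatial-distance rule used by PixPro).
    depth_*:  [N] exogenous depth values sampled at those coordinates
              (e.g. from a depth network, stereovision, or LiDAR).
    A pair counts as positive only if the pixels are spatially close AND
    at a similar depth, i.e. likely on the same object or surface.
    """
    spatial = torch.cdist(coords_a.float(), coords_b.float())   # [Na, Nb]
    depth_diff = (depth_a[:, None] - depth_b[None, :]).abs()    # [Na, Nb]
    return (spatial < dist_thresh) & (depth_diff < depth_thresh)

def pixel_contrastive_loss(feat_a, feat_b, pos_mask, tau=0.3):
    """InfoNCE-style loss over per-pixel embeddings feat_* of shape [N, C];
    pairs outside pos_mask serve as negatives."""
    feat_a = F.normalize(feat_a, dim=1)
    feat_b = F.normalize(feat_b, dim=1)
    logits = (feat_a @ feat_b.t()) / tau                        # [Na, Nb]
    exp = logits.exp()
    pos = (exp * pos_mask).sum(dim=1)
    denom = exp.sum(dim=1)
    valid = pos_mask.any(dim=1)     # keep pixels with at least one positive
    return -(pos[valid] / denom[valid]).log().mean()

def multi_scale_pixel_loss(feats_a, feats_b, pos_masks, tau=0.3):
    """Average the loss over per-pixel features extracted at several
    scales, so no single distance threshold has to fit all object sizes
    (a guess at one way to realize the paper's multi-scale loss)."""
    losses = [pixel_contrastive_loss(fa, fb, m, tau)
              for fa, fb, m in zip(feats_a, feats_b, pos_masks)]
    return torch.stack(losses).mean()
```

In practice `coords_*` and `depth_*` would be obtained by mapping each crop's pixel grid back to the original image, mirroring PixPro's spatial-distance rule, with the depth threshold acting as the additional exogenous filter described in the abstract.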