Showing 1 - 10 of 52 results for search: '"Xu, Haohang"'
Author:
Ponosov, Yuri S., Komleva, Evgenia V., Pankrushina, Elizaveta A., Xu, Haohang, Sui, Yu, Streltsov, Sergey V.
The results of polarization-dependent Raman spectroscopy of single-crystalline LiVO$_2$, which exhibits a transition to a diamagnetic state below $T_c \sim 500$ K, are reported. Our measurements clearly detect additional peaks in the low-temperature phase, which …
External link:
http://arxiv.org/abs/2407.08426
Betrayed by Attention: A Simple yet Effective Approach for Self-supervised Video Object Segmentation
In this paper, we propose a simple yet effective approach for self-supervised video object segmentation (VOS). Our key insight is that the inherent structural dependencies present in DINO-pretrained Transformers can be leveraged to establish robust …
External link:
http://arxiv.org/abs/2311.17893
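The snippet above alludes to exploiting feature affinities from a DINO-pretrained Vision Transformer to propagate object masks between frames. Below is a minimal, hedged sketch of affinity-based label propagation of the kind commonly used in self-supervised VOS; the feature tensors are assumed to come from some patch-feature extractor, and nothing here is claimed to be the paper's released implementation.

    import torch
    import torch.nn.functional as F

    def propagate_labels(feat_ref, labels_ref, feat_tgt, topk=5, temperature=0.07):
        """Propagate per-patch labels from a reference frame to a target frame
        via cosine-similarity affinities between patch features.

        feat_ref:   (N, D) patch features of the reference frame
        labels_ref: (N, C) soft one-hot object labels for the reference patches
        feat_tgt:   (M, D) patch features of the target frame
        returns:    (M, C) propagated soft labels for the target patches
        """
        feat_ref = F.normalize(feat_ref, dim=1)
        feat_tgt = F.normalize(feat_tgt, dim=1)
        affinity = feat_tgt @ feat_ref.t() / temperature   # (M, N) similarity
        # keep only the top-k most similar reference patches per target patch
        vals, idx = affinity.topk(topk, dim=1)
        weights = F.softmax(vals, dim=1)                    # (M, topk)
        return (weights.unsqueeze(-1) * labels_ref[idx]).sum(dim=1)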
Deep neural networks are capable of learning powerful representations to tackle complex vision tasks but expose undesirable properties like the over-fitting issue. To this end, regularization techniques like image augmentation are necessary for deep …
External link:
http://arxiv.org/abs/2206.04846
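For context on the image-augmentation regularization mentioned in the excerpt above, here is a standard torchvision augmentation pipeline of the kind such methods build on; the exact augmentations used in the paper are not visible in this truncated abstract.

    import torchvision.transforms as T

    # Typical training-time augmentation pipeline (illustrative, not the paper's recipe)
    train_transform = T.Compose([
        T.RandomResizedCrop(224, scale=(0.2, 1.0)),
        T.RandomHorizontalFlip(),
        T.ColorJitter(0.4, 0.4, 0.4, 0.1),
        T.RandomGrayscale(p=0.2),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])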
Author:
Tian, Yunjie, Xie, Lingxi, Zhang, Xiaopeng, Fang, Jiemin, Xu, Haohang, Huang, Wei, Jiao, Jianbin, Tian, Qi, Ye, Qixiang
In this paper, we propose a self-supervised visual representation learning approach which involves both generative and discriminative proxies, where we focus on the former part by requiring the target network to recover the original image based on the …
External link:
http://arxiv.org/abs/2111.13163
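The abstract above is cut off, so the exact generative proxy is not visible here. As an assumption-laden illustration only, a generative pretext task of this general shape corrupts the input and asks the network to reconstruct the original image under a pixel-level loss; `encoder` and `decoder` below are hypothetical placeholder modules, not the paper's models.

    import torch
    import torch.nn.functional as F

    def generative_proxy_loss(encoder, decoder, images, mask_ratio=0.5):
        """Sketch of a generative pretext task: corrupt the input, then require
        the network to recover the original image and penalize the pixel error."""
        # randomly zero out a fraction of pixel locations as a simple corruption
        mask = (torch.rand_like(images[:, :1]) > mask_ratio).float()
        corrupted = images * mask
        reconstruction = decoder(encoder(corrupted))
        # reconstruct the full image; a masked-only loss would be an alternative
        return F.mse_loss(reconstruction, images)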
Author:
Ding, Shuangrui, Li, Maomao, Yang, Tianyu, Qian, Rui, Xu, Haohang, Chen, Qingyi, Wang, Jue, Xiong, Hongkai
In light of the success of contrastive learning in the image domain, current self-supervised video representation learning methods usually employ contrastive loss to facilitate video representation learning. When naively pulling two augmented views of …
External link:
http://arxiv.org/abs/2109.15130
Author:
Xu, Haohang, Fang, Jiemin, Zhang, Xiaopeng, Xie, Lingxi, Wang, Xinggang, Dai, Wenrui, Xiong, Hongkai, Tian, Qi
Recent advances in self-supervised learning have experienced remarkable progress, especially for contrastive learning based methods, which regard each image as well as its augmentations as an individual class and try to distinguish them from all other …
External link:
http://arxiv.org/abs/2107.01691
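The excerpt above describes the generic instance-discrimination objective: two augmented views of the same image are positives, all other images are negatives. A minimal InfoNCE sketch of that baseline objective follows; it illustrates the setting the paper starts from, not the method the paper proposes.

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        """Standard InfoNCE loss for instance discrimination.

        z1, z2: (B, D) embeddings of two augmented views of the same batch.
        The matching row in the other view is the positive; every other row
        in the batch acts as a negative."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature                  # (B, B) similarities
        targets = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, targets)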
Collecting annotated data for semantic segmentation is time-consuming and hard to scale up. In this paper, we propose, for the first time, a unified framework, termed Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of …
External link:
http://arxiv.org/abs/2106.04121
Semi-supervised learning acts as an effective way to leverage massive unlabeled data. In this paper, we propose a novel training strategy, termed Semi-supervised Contrastive Learning (SsCL), which combines the well-known contrastive loss in self-supervised …
External link:
http://arxiv.org/abs/2105.07387
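The abstract above is truncated, so how SsCL actually couples its two objectives is not visible here. Purely as a hedged illustration, a common way to combine the signals is a weighted sum of a supervised cross-entropy term on labeled data and a contrastive term on unlabeled data; the sketch below assumes that reading and reuses the info_nce function sketched earlier in this listing.

    import torch.nn.functional as F

    def semi_supervised_loss(logits_labeled, targets, z1_unlabeled, z2_unlabeled,
                             weight=1.0, temperature=0.1):
        """Hypothetical combination of a supervised and a contrastive objective;
        the actual coupling used by SsCL is not visible in this excerpt."""
        ce = F.cross_entropy(logits_labeled, targets)                        # labeled branch
        contrastive = info_nce(z1_unlabeled, z2_unlabeled, temperature)      # unlabeled branch
        return ce + weight * contrastive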
Self-supervised learning based on instance discrimination has shown remarkable progress. In particular, contrastive learning, which regards each image as well as its augmentations as an individual class and tries to distinguish them from all other images …
External link:
http://arxiv.org/abs/2012.02733
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
In this paper, we propose the $K$-Shot Contrastive Learning (KSCL) of visual features by applying multiple augmentations to investigate the sample variations within individual instances. It aims to combine the advantages of inter-instance discrimination …
External link:
http://arxiv.org/abs/2007.13310
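The truncated abstract above says KSCL applies multiple augmentations per instance; one illustrative reading of that idea is to treat every pair of the K views of an image as a positive pair and average the resulting contrastive terms. The sketch below follows that reading only and is not claimed to be KSCL's actual formulation.

    import torch
    import torch.nn.functional as F

    def k_shot_contrastive(views, temperature=0.1):
        """Illustrative K-view contrastive objective.

        views: list of K tensors of shape (B, D), one per augmentation of the
        same batch of images. Each ordered pair of distinct views of an image
        is a positive pair; other images in the batch act as negatives."""
        loss, count = 0.0, 0
        for i, zi in enumerate(views):
            for j, zj in enumerate(views):
                if i == j:
                    continue
                zi_n, zj_n = F.normalize(zi, dim=1), F.normalize(zj, dim=1)
                logits = zi_n @ zj_n.t() / temperature
                targets = torch.arange(zi.size(0), device=zi.device)
                loss = loss + F.cross_entropy(logits, targets)
                count += 1
        return loss / count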