Showing 1 - 10 of 368 for search: '"Lu, Xiankai"'
Author:
Moskalenko, Andrey, Bryncev, Alexey, Vatolin, Dmitry, Timofte, Radu, Zhan, Gen, Yang, Li, Tang, Yunlong, Liao, Yiting, Lin, Jiongzhi, Huang, Baitao, Moradi, Morteza, Moradi, Mohammad, Rundo, Francesco, Spampinato, Concetto, Borji, Ali, Palazzo, Simone, Zhu, Yuxin, Sun, Yinan, Duan, Huiyu, Cao, Yuqin, Jia, Ziheng, Hu, Qiang, Min, Xiongkuo, Zhai, Guangtao, Fang, Hao, Cong, Runmin, Lu, Xiankai, Zhou, Xiaofei, Zhang, Wei, Zhao, Chunyu, Mu, Wentao, Deng, Tao, Tavakoli, Hamed R.
This paper reviews the Challenge on Video Saliency Prediction at AIM 2024. The goal of the participants was to develop a method for predicting accurate saliency maps for the provided set of video sequences. Saliency maps are widely exploited in various…
External link:
http://arxiv.org/abs/2409.14827
Author:
Ding, Henghui, Hong, Lingyi, Liu, Chang, Xu, Ning, Yang, Linjie, Fan, Yuchen, Miao, Deshui, Gu, Yameng, Li, Xin, He, Zhenyu, Wang, Yaowei, Yang, Ming-Hsuan, Chai, Jinming, Ma, Qin, Zhang, Junpei, Jiao, Licheng, Liu, Fang, Liu, Xinyu, Zhang, Jing, Zhang, Kexin, Liu, Xu, Li, LingLing, Fang, Hao, Pan, Feiyu, Lu, Xiankai, Zhang, Wei, Cong, Runmin, Tran, Tuyen, Cao, Bin, Zhang, Yisi, Wang, Hanyi, He, Xingjian, Liu, Jing
Despite the promising performance of current video segmentation models on existing benchmarks, these models still struggle with complex scenes. In this paper, we introduce the 6th Large-scale Video Object Segmentation (LSVOS) challenge in conjunction…
External link:
http://arxiv.org/abs/2409.05847
Referring video object segmentation (RVOS) relies on natural language expressions to segment target objects in video. This year, the LSVOS Challenge RVOS Track replaced the original YouTube-RVOS benchmark with MeViS. MeViS focuses on referring the target…
External link:
http://arxiv.org/abs/2408.10129
The Video Object Segmentation (VOS) task aims to segment a particular object instance throughout the entire video sequence, given only the object mask of the first frame. Recently, Segment Anything Model 2 (SAM 2) has been proposed, a foundation model…
External link:
http://arxiv.org/abs/2408.10125
Open-Vocabulary Video Instance Segmentation (VIS) is attracting increasing attention due to its ability to segment and track arbitrary objects. However, recent Open-Vocabulary VIS attempts have obtained unsatisfactory results, especially in terms of g…
External link:
http://arxiv.org/abs/2407.07427
Author:
Ding, Henghui, Liu, Chang, Wei, Yunchao, Ravi, Nikhila, He, Shuting, Bai, Song, Torr, Philip, Miao, Deshui, Li, Xin, He, Zhenyu, Wang, Yaowei, Yang, Ming-Hsuan, Xu, Zhensong, Yao, Jiangtao, Wu, Chengjing, Liu, Ting, Liu, Luoqi, Liu, Xinyu, Zhang, Jing, Zhang, Kexin, Yang, Yuting, Jiao, Licheng, Yang, Shuyuan, Gao, Mingqi, Luo, Jingnan, Yang, Jinyu, Han, Jungong, Zheng, Feng, Cao, Bin, Zhang, Yisi, Lin, Xuanxu, He, Xingjian, Zhao, Bo, Liu, Jing, Pan, Feiyu, Fang, Hao, Lu, Xiankai
The Pixel-level Video Understanding in the Wild Challenge (PVUW) focuses on complex video understanding. In this CVPR 2024 workshop, we add two new tracks: a Complex Video Object Segmentation Track based on the MOSE dataset and a Motion Expression guided Video Segmentation…
External link:
http://arxiv.org/abs/2406.17005
Referring video object segmentation (RVOS) relies on natural language expressions to segment target objects in video, emphasizing the modeling of dense text-video relations. Current RVOS methods typically use independently pre-trained vision and language…
External link:
http://arxiv.org/abs/2406.04842
Author:
Bao, Liuxin, Zhou, Xiaofei, Lu, Xiankai, Sun, Yaoqi, Yin, Haibing, Hu, Zhenghui, Zhang, Jiyong, Yan, Chenggang
Depth images and thermal images contain spatial geometry information and surface temperature information, which can act as complementary information for the RGB modality. However, the quality of the depth and thermal images is often unreliable in…
External link:
http://arxiv.org/abs/2405.07655
Published in:
European Conference on Computer Vision 2022
Sample selection is an effective strategy to mitigate the effect of label noise in robust learning. Typical strategies commonly apply the small-loss criterion to identify clean samples. However, those samples lying around the decision boundary with l…
External link:
http://arxiv.org/abs/2208.11351
Author:
Guan, Qingfeng, Fang, Hao, Han, Chenchen, Wang, Zhicheng, Zhang, Ruiheng, Zhang, Yitian, Lu, Xiankai
Published in:
Neurocomputing, Volume 596, 1 September 2024