Showing 1 - 10 of 454 results for search: '"Guo, YuLan"'
Data augmentation (DA) is an effective approach for enhancing model performance when data are limited, as in light field (LF) image super-resolution (SR). LF images inherently possess rich spatial and angular information. Nonetheless, there is a scarcity…
External link:
http://arxiv.org/abs/2410.06478
Authors:
Wang, Longguang, Guo, Yulan, Li, Juncheng, Liu, Hongda, Zhao, Yang, Wang, Yingqian, Jin, Zhi, Gu, Shuhang, Timofte, Radu
This paper summarizes the 3rd NTIRE challenge on stereo image super-resolution (SR) with a focus on new solutions and results. The task of this challenge is to super-resolve a low-resolution stereo image pair to a high-resolution one with a magnification…
External link:
http://arxiv.org/abs/2409.16947
To further promote the development of multimodal point cloud completion, we contribute a large-scale multimodal point cloud completion benchmark, ModelNet-MPC, with richer shape categories and more diverse test data, which contains nearly 400,000 pairs…
External link:
http://arxiv.org/abs/2407.07374
The performance of image super-resolution relies heavily on the accuracy of degradation information, especially under blind settings. Due to the absence of true degradation models in real-world scenarios, previous methods learn distinct representations…
External link:
http://arxiv.org/abs/2407.01299
Authors:
Min, Chen, Zhao, Dawei, Xiao, Liang, Zhao, Jian, Xu, Xinli, Zhu, Zheng, Jin, Lei, Li, Jianshu, Guo, Yulan, Xing, Junliang, Jing, Liping, Nie, Yiming, Dai, Bin
Vision-centric autonomous driving has recently raised wide attention due to its lower cost. Pre-training is essential for extracting a universal representation. However, current vision-centric pre-training typically relies on either 2D or 3D pre-text tasks…
External link:
http://arxiv.org/abs/2405.04390
3D synthetic-to-real unsupervised domain adaptive segmentation is crucial to annotating new domains. Self-training is a competitive approach for this task, but its performance is limited by different sensor sampling patterns (i.e., variations in point…
External link:
http://arxiv.org/abs/2403.18469
Authors:
Cong, Runmin, Sheng, Ronghui, Wu, Hao, Guo, Yulan, Wei, Yunchao, Zuo, Wangmeng, Zhao, Yao, Kwong, Sam
Color information is the most commonly used prior knowledge for depth map super-resolution (DSR), which can provide high-frequency boundary guidance for detail restoration. However, its role and functionality in DSR have not been fully developed. In…
External link:
http://arxiv.org/abs/2403.07290
Authors:
Chen, Minglin, Yuan, Weihao, Wang, Yukun, Sheng, Zhe, He, Yisheng, Dong, Zilong, Bo, Liefeng, Guo, Yulan
Recently, text-to-3D approaches have achieved high-fidelity 3D content generation from text descriptions. However, the generated objects are stochastic and lack fine-grained control. Sketches provide a cheap way to introduce such fine-grained control…
External link:
http://arxiv.org/abs/2401.14257
Authors:
Li, Haopeng, Deng, Andong, Ke, Qiuhong, Liu, Jun, Rahmani, Hossein, Guo, Yulan, Schiele, Bernt, Chen, Chen
Reasoning over sports videos for question answering is an important task with numerous applications, such as player training and information retrieval. However, this task has not been explored due to the lack of relevant datasets and the challenging…
External link:
http://arxiv.org/abs/2401.01505
We propose a unified point cloud video self-supervised learning framework for object-centric and scene-centric data. Previous methods commonly conduct representation learning at the clip or frame level and cannot well capture fine-grained semantics.
External link:
http://arxiv.org/abs/2308.09247