Showing 1 - 10 of 125 results for search: '"Cui, Zhaopeng"'
Author:
Shen, Yichen, Li, Yijin, Chen, Shuo, Li, Guanglin, Huang, Zhaoyang, Bao, Hujun, Cui, Zhaopeng, Zhang, Guofeng
Feature tracking is crucial for structure from motion (SfM), simultaneous localization and mapping (SLAM), object tracking, and various other computer vision tasks. Event cameras, known for their high temporal resolution and ability to capture asynchronous…
External link:
http://arxiv.org/abs/2409.17981
Author:
Zhai, Hongjia, Zhang, Xiyu, Zhao, Boming, Li, Hai, He, Yijia, Cui, Zhaopeng, Bao, Hujun, Zhang, Guofeng
Visual localization plays an important role in Augmented Reality (AR) applications, enabling AR devices to obtain their 6-DoF pose in a pre-built map in order to render virtual content in real scenes. However, most existing approaches ca…
External link:
http://arxiv.org/abs/2409.14067
Author:
Dang, Ziqiang, Fan, Tianxing, Zhao, Boming, Shen, Xujie, Wang, Lei, Zhang, Guofeng, Cui, Zhaopeng
Incorporating temporal information effectively is important for accurate 3D human motion estimation and generation, which have wide applications ranging from human-computer interaction to AR/VR. In this paper, we present MoManifold, a novel human motion prior…
External link:
http://arxiv.org/abs/2409.00736
Author:
Zhao, Boming, Li, Yuan, Sun, Ziyu, Zeng, Lin, Shen, Yujun, Ma, Rui, Zhang, Yinda, Bao, Hujun, Cui, Zhaopeng
Forecasting future scenarios in dynamic environments is essential for intelligent decision-making and navigation, a challenge yet to be fully realized in computer vision and robotics. Traditional approaches like video prediction and novel-view synthesis…
External link:
http://arxiv.org/abs/2405.19745
Author:
Dong, Wenqi, Yang, Bangbang, Ma, Lin, Liu, Xiao, Cui, Liyuan, Bao, Hujun, Ma, Yuewen, Cui, Zhaopeng
As humans, we aspire to create media content that is both freely willed and readily controlled. Thanks to the prominent development of generative techniques, we can now easily utilize 2D diffusion methods to synthesize images controlled by raw sketch…
External link:
http://arxiv.org/abs/2405.08054
Author:
Bao, Chong, Zhang, Yinda, Li, Yuan, Zhang, Xiyu, Yang, Bangbang, Bao, Hujun, Pollefeys, Marc, Zhang, Guofeng, Cui, Zhaopeng
Recently, we have witnessed the explosive growth of various volumetric representations in modeling animatable head avatars. However, due to the diversity of frameworks, there is no practical method to support high-level applications like 3D head avatar…
External link:
http://arxiv.org/abs/2404.02152
Author:
Hu, Jiarui, Chen, Xianhao, Feng, Boyin, Li, Guanglin, Yang, Liangjing, Bao, Hujun, Zhang, Guofeng, Cui, Zhaopeng
Recently, neural radiance fields (NeRF) have been widely exploited as 3D representations for dense simultaneous localization and mapping (SLAM). Despite their notable successes in surface modeling and novel view synthesis, existing NeRF-based methods…
External link:
http://arxiv.org/abs/2403.16095
Published in:
CVPR 2024
Directly generating scenes from satellite imagery offers exciting possibilities for integration into applications like games and map services. However, challenges arise from significant view changes and scene scale. Previous efforts mainly focused on…
External link:
http://arxiv.org/abs/2401.10786
Due to their ability to synthesize high-quality novel views, Neural Radiance Fields (NeRF) have recently been exploited to improve visual localization in a known environment. However, the existing methods mostly utilize NeRFs for data augmentation to i…
External link:
http://arxiv.org/abs/2312.10649
This paper presents a collaborative implicit neural simultaneous localization and mapping (SLAM) system with RGB-D image sequences, which consists of complete front-end and back-end modules including odometry, loop detection, sub-map fusion, and global…
External link:
http://arxiv.org/abs/2311.08013