Showing 1 - 10 of 33 for search: '"Zhan, Huangying"'
Author:
Feng, Ziyue, Zhan, Huangying, Chen, Zheng, Yan, Qingan, Xu, Xiangyu, Cai, Changjiang, Li, Bing, Zhu, Qilun, Xu, Yi
We present NARUTO, a neural active reconstruction system that combines a hybrid neural representation with uncertainty learning, enabling high-fidelity surface reconstruction. Our approach leverages a multi-resolution hash-grid as the mapping backbone…
External link:
http://arxiv.org/abs/2402.18771
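The abstract names a multi-resolution hash-grid as the mapping backbone. Below is a minimal, self-contained sketch of that kind of encoder (in the style of Instant-NGP hash grids), not the authors' implementation; the level count, table size, feature width, and hash primes are illustrative assumptions.

```python
import torch

class HashGridEncoder(torch.nn.Module):
    PRIMES = torch.tensor([1, 2654435761, 805459861], dtype=torch.int64)

    def __init__(self, n_levels=8, table_size=2**14, feature_dim=2,
                 base_res=16, growth=1.5):
        super().__init__()
        self.table_size = table_size
        self.resolutions = [int(base_res * growth ** i) for i in range(n_levels)]
        # One small learnable feature table per resolution level.
        self.tables = torch.nn.ParameterList(
            [torch.nn.Parameter(1e-4 * torch.randn(table_size, feature_dim))
             for _ in range(n_levels)])

    def _hash(self, ivec):
        # Spatial hash of integer grid coordinates -> feature-table index.
        h = ivec * self.PRIMES.to(ivec.device)
        return torch.remainder(h[..., 0] ^ h[..., 1] ^ h[..., 2], self.table_size)

    def forward(self, x):  # x: (N, 3) query points in [0, 1]^3
        feats = []
        for level, res in enumerate(self.resolutions):
            p = x * res
            p0 = torch.floor(p).long()
            w = p - p0                               # trilinear weights
            acc = 0.0
            for corner in range(8):                  # 8 voxel corners
                offset = torch.tensor([(corner >> k) & 1 for k in range(3)],
                                      device=x.device)
                idx = self._hash(p0 + offset)
                cw = torch.prod(torch.where(offset.bool(), w, 1 - w), dim=-1)
                acc = acc + cw.unsqueeze(-1) * self.tables[level][idx]
            feats.append(acc)
        return torch.cat(feats, dim=-1)              # (N, n_levels * feature_dim)

# Example: encode query points for a downstream SDF / uncertainty MLP.
enc = HashGridEncoder()
print(enc(torch.rand(1024, 3)).shape)                # torch.Size([1024, 16])
```

The hash-table lookup is what keeps memory roughly constant per level, in contrast to a dense grid whose footprint grows cubically with resolution.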
Author:
Chen, Zheng, Yan, Qingan, Zhan, Huangying, Cai, Changjiang, Xu, Xiangyu, Huang, Yuzhong, Wang, Weihan, Feng, Ziyue, Liu, Lantao, Xu, Yi
Identifying spatially complete planar primitives from visual data is a crucial task in computer vision. Prior methods are largely restricted to either 2D segment recovery or simplifying 3D structures, even with extensive plane annotations. We present…
External link:
http://arxiv.org/abs/2401.00871
Author:
Xu, Xiangyu, Chen, Lichang, Cai, Changjiang, Zhan, Huangying, Yan, Qingan, Ji, Pan, Yuan, Junsong, Huang, Heng, Xu, Yi
Direct optimization of interpolated features on multi-resolution voxel grids has emerged as a more efficient alternative to MLP-like modules. However, this approach is constrained by higher memory expenses and limited representation capabilities. In…
External link:
http://arxiv.org/abs/2304.06178
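As a point of reference for the trade-off the abstract raises (interpolated features on multi-resolution voxel grids instead of MLPs, at the cost of memory), here is a minimal sketch of such a dense-grid feature field; the resolutions and feature width are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

class DenseVoxelFeatures(torch.nn.Module):
    def __init__(self, resolutions=(32, 64, 128), feature_dim=4):
        super().__init__()
        # One dense (1, C, D, H, W) grid per resolution; memory grows as res^3.
        self.grids = torch.nn.ParameterList(
            [torch.nn.Parameter(1e-2 * torch.randn(1, feature_dim, r, r, r))
             for r in resolutions])

    def forward(self, x):  # x: (N, 3) query points in [-1, 1]^3
        samples = x.view(1, -1, 1, 1, 3)             # layout expected by grid_sample
        feats = [F.grid_sample(g, samples, mode="bilinear", align_corners=True)
                   .reshape(g.shape[1], -1).t()      # -> (N, feature_dim)
                 for g in self.grids]
        return torch.cat(feats, dim=-1)              # (N, 3 * feature_dim)

model = DenseVoxelFeatures()
print(model(torch.rand(2048, 3) * 2 - 1).shape)      # torch.Size([2048, 12])
# The parameter count makes the memory concern concrete:
# 4 * (32^3 + 64^3 + 128^3) ≈ 9.6M floats, dominated by the finest level.
print(sum(p.numel() for p in model.parameters()))
```

Training optimizes the grid entries directly, so queries are just trilinear interpolation, which is why this family of methods is faster than evaluating a deep MLP per sample.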
A high-quality 3D reconstruction of a scene from a collection of 2D images can be achieved through offline/online mapping methods. In this paper, we explore active mapping from the perspective of implicit representations, which have recently produced…
External link:
http://arxiv.org/abs/2211.12656
We propose a robotic learning system for autonomous exploration and navigation in unexplored environments. We are motivated by the idea that even an unseen environment may be familiar from previous experiences in similar environments. The core of our…
External link:
http://arxiv.org/abs/2211.12649
Author:
Han, Junlin, Zhan, Huangying, Hong, Jie, Fang, Pengfei, Li, Hongdong, Petersson, Lars, Reid, Ian
This paper studies the problem of measuring and predicting how memorable an image is to pattern recognition machines, as a path to explore machine intelligence. Firstly, we propose a self-supervised machine memory quantification pipeline, dubbed "Ma…
External link:
http://arxiv.org/abs/2211.07625
Self-supervised monocular depth estimation has shown impressive results in static scenes. It relies on the multi-view consistency assumption for training networks; however, this assumption is violated in dynamic object regions and occlusions. Consequently, existing…
External link:
http://arxiv.org/abs/2211.03660
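The multi-view consistency assumption mentioned above is typically enforced with a photometric reprojection loss: warp a source frame into the target view using predicted depth and relative pose, then compare pixel intensities. Below is a minimal, self-contained sketch of such a loss in PyTorch; the intrinsics handling, pose convention, and validity mask are illustrative assumptions rather than this paper's specific formulation.

```python
import torch
import torch.nn.functional as F

def photometric_loss(img_tgt, img_src, depth_tgt, T_tgt_to_src, K):
    """img_*: (B,3,H,W), depth_tgt: (B,1,H,W), T_tgt_to_src: (B,4,4), K: (B,3,3)."""
    B, _, H, W = img_tgt.shape
    # Pixel grid of the target frame in homogeneous coordinates.
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], 0).reshape(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D with predicted depth, move to the source camera, re-project.
    cam = torch.linalg.inv(K) @ pix * depth_tgt.reshape(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)
    src = K @ (T_tgt_to_src @ cam_h)[:, :3, :]
    uv = src[:, :2, :] / src[:, 2:3, :].clamp(min=1e-6)

    # Normalize to [-1, 1] and sample the source image at the warped locations.
    uv = uv.reshape(B, 2, H, W).permute(0, 2, 3, 1)
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    warped = F.grid_sample(img_src, grid, align_corners=True, padding_mode="border")

    # Pixels that reproject outside the source image break the assumption
    # (as do moving objects); keep only valid ones in the mean.
    valid = (grid.abs().max(dim=-1).values <= 1).float().unsqueeze(1)
    return ((warped - img_tgt).abs() * valid).sum() / (3 * valid.sum()).clamp(min=1)

# Shape-only example with random tensors.
B, H, W = 2, 64, 80
loss = photometric_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                        torch.rand(B, 1, H, W) + 0.5,
                        torch.eye(4).expand(B, 4, 4), torch.eye(3).expand(B, 3, 3))
print(loss.item())
```

The validity mask only handles out-of-view reprojections; handling dynamic objects, which is what this line of work targets, requires additional machinery beyond this sketch.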
Author:
Bian, Jia-Wang, Zhan, Huangying, Wang, Naiyan, Li, Zhichao, Zhang, Le, Shen, Chunhua, Cheng, Ming-Ming, Reid, Ian
Published in:
International Journal of Computer Vision, 2021
We propose a monocular depth estimator, SC-Depth, which requires only unlabelled videos for training and enables scale-consistent prediction at inference time. Our contributions include: (i) we propose a geometry consistency loss, which penalizes…
External link:
http://arxiv.org/abs/2105.11610
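Contribution (i) above is a geometry consistency loss; since the sentence is cut off, the following is only a hedged sketch of a depth-consistency term in that spirit: express one frame's predicted depth in the other frame's camera and penalize the normalized disagreement with the depth predicted there. It is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def geometry_consistency_loss(depth_a, depth_b, T_a_to_b, K):
    """depth_*: (B,1,H,W), T_a_to_b: (B,4,4), K: (B,3,3)."""
    B, _, H, W = depth_a.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], 0).reshape(1, 3, -1).expand(B, -1, -1)

    # Lift frame-a pixels to 3D and express them in frame b's camera.
    pts_a = torch.linalg.inv(K) @ pix * depth_a.reshape(B, 1, -1)
    pts_b = (T_a_to_b @ torch.cat([pts_a, torch.ones(B, 1, H * W)], 1))[:, :3]
    z_ab = pts_b[:, 2:3].reshape(B, 1, H, W)        # frame-a depth seen from b

    # Sample frame b's *predicted* depth at the projected pixel locations.
    uv = (K @ pts_b)[:, :2] / pts_b[:, 2:3].clamp(min=1e-6)
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], -1).reshape(B, H, W, 2)
    z_b = F.grid_sample(depth_b, grid, align_corners=True, padding_mode="border")

    # Normalized depth difference is bounded and independent of absolute scale.
    diff = (z_ab - z_b).abs() / (z_ab + z_b).clamp(min=1e-6)
    return diff.mean()

# Shape-only example with random inputs.
B, H, W = 2, 48, 64
loss = geometry_consistency_loss(torch.rand(B, 1, H, W) + 1,
                                 torch.rand(B, 1, H, W) + 1,
                                 torch.eye(4).expand(B, 4, 4),
                                 torch.eye(3).expand(B, 3, 3))
print(loss.item())
```

Because the penalty couples depths across adjacent frames, minimizing it over a video pushes all predictions toward a single consistent scale, which is what enables scale-consistent inference.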
Multi-view geometry-based methods have dominated monocular Visual Odometry over the last few decades owing to their superior performance, yet they remain vulnerable to dynamic and low-texture scenes. More importantly, monocular methods suffer from scale-drift…
External link:
http://arxiv.org/abs/2103.00933
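The entry above raises monocular scale drift. Below is a minimal, hedged sketch of one standard remedy used in hybrid geometry-plus-learning VO pipelines: rescale the up-to-scale two-view translation so that triangulated depths agree with a single-view depth prediction. The median-ratio rule and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def recover_translation_scale(triangulated_depth, predicted_depth, valid_mask):
    """All inputs are (H, W) arrays; the triangulated depth is only up to scale."""
    ratios = predicted_depth[valid_mask] / np.clip(triangulated_depth[valid_mask],
                                                   1e-6, None)
    # Median is robust to outliers from bad matches or moving objects.
    return float(np.median(ratios))   # multiply the estimated translation by this

# Toy example: the triangulated map is 0.4x the predicted depth everywhere,
# so the recovered scale factor should be about 2.5.
rng = np.random.default_rng(0)
pred = rng.uniform(1.0, 10.0, size=(48, 64))
tri = 0.4 * pred * rng.normal(1.0, 0.01, size=pred.shape)   # noisy, wrong scale
mask = np.ones_like(pred, dtype=bool)
print(recover_translation_scale(tri, pred, mask))            # ≈ 2.5
```

Applying such a correction per frame keeps the trajectory's scale tied to the depth network rather than drifting with accumulated two-view estimates.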
Single-view depth estimation using CNNs trained on unlabelled videos has shown significant promise. However, excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly…
External link:
http://arxiv.org/abs/2006.02708