Showing 1 - 10 of 46 results for search: '"Yin, Junbo"'
Concurrent processing of multiple autonomous driving 3D perception tasks within the same spatiotemporal scene poses a significant challenge, in particular due to the computational inefficiencies and feature competition between tasks when using tradit…
External link:
http://arxiv.org/abs/2407.10876
Author:
Yin, Junbo, Shen, Jianbing, Chen, Runnan, Li, Wei, Yang, Ruigang, Frossard, Pascal, Wang, Wenguan
Bird's eye view (BEV) representation has emerged as a dominant solution for describing 3D space in autonomous driving scenarios. However, objects in the BEV representation typically exhibit small sizes, and the associated point cloud context is inher…
External link:
http://arxiv.org/abs/2403.15241
Vehicle-to-Everything (V2X) collaborative perception has recently gained significant attention due to its capability to enhance scene understanding by integrating information from various agents, e.g., vehicles and infrastructure. However, current w…
External link:
http://arxiv.org/abs/2312.15742
Monocular depth estimation is known as an ill-posed task in which objects in a 2D image usually do not contain sufficient information to predict their depth. Thus, it acts differently from other tasks (e.g., classification and segmentation) in many w…
External link:
http://arxiv.org/abs/2308.05605
This paper addresses the problem of 3D referring expression comprehension (REC) in the autonomous driving scenario, which aims to ground a natural language expression to the targeted region in LiDAR point clouds. Previous approaches for REC usually focus on the 2D…
External link:
http://arxiv.org/abs/2305.15765
Image instance segmentation is a fundamental research topic in autonomous driving, which is crucial for scene understanding and road safety. Advanced learning-based approaches often rely on costly 2D mask annotations for training. In this paper,…
External link:
http://arxiv.org/abs/2212.03504
LiDAR-based 3D object detection is an indispensable task in advanced autonomous driving systems. Though impressive detection results have been achieved by superior 3D detectors, they suffer from significant performance degeneration when facing unseen…
External link:
http://arxiv.org/abs/2212.02845
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Previous works for LiDAR-based 3D object detection mainly focus on the single-frame paradigm. In this paper, we propose to detect 3D objects by exploiting temporal information in multiple frames, i.e., the point cloud videos. We empirically categoriz…
External link:
http://arxiv.org/abs/2207.12659
Author:
Yin, Junbo, Fang, Jin, Zhou, Dingfu, Zhang, Liangjun, Xu, Cheng-Zhong, Shen, Jianbing, Wang, Wenguan
Dominant point cloud-based 3D object detectors in autonomous driving scenarios rely heavily on a huge amount of accurately labeled samples; however, 3D annotation of point clouds is extremely tedious, expensive, and time-consuming. To reduce the…
External link:
http://arxiv.org/abs/2207.12655
Author:
Yin, Junbo, Zhou, Dingfu, Zhang, Liangjun, Fang, Jin, Xu, Cheng-Zhong, Shen, Jianbing, Wang, Wenguan
Existing approaches for unsupervised point cloud pre-training are constrained to either scene-level or point/voxel-level instance discrimination. Scene-level methods tend to lose local details that are crucial for recognizing road objects, while…
External link:
http://arxiv.org/abs/2207.12654