Showing 1 - 10 of 399 for search: '"Yang, Ruigang"'
Author:
Wang, Dingrui, Lai, Zheyuan, Li, Yuda, Wu, Yi, Ma, Yuexin, Betz, Johannes, Yang, Ruigang, Li, Wei
Emergent-scene safety is the key milestone for fully autonomous driving, and reliable on-time prediction is essential to maintain safety in emergency scenarios. However, these emergency scenarios are long-tailed and hard to collect, which restricts t…
External link:
http://arxiv.org/abs/2405.04100
Author:
Yin, Junbo, Shen, Jianbing, Chen, Runnan, Li, Wei, Yang, Ruigang, Frossard, Pascal, Wang, Wenguan
Bird's eye view (BEV) representation has emerged as a dominant solution for describing 3D space in autonomous driving scenarios. However, objects in the BEV representation typically exhibit small sizes, and the associated point cloud context is inher…
External link:
http://arxiv.org/abs/2403.15241
Vehicle-to-Everything (V2X) collaborative perception has recently gained significant attention due to its capability to enhance scene understanding by integrating information from various agents, e.g., vehicles and infrastructure. However, current w…
External link:
http://arxiv.org/abs/2312.15742
Author:
Chen, Runnan, Zhu, Xinge, Chen, Nenglun, Wang, Dawei, Li, Wei, Ma, Yuexin, Yang, Ruigang, Liu, Tongliang, Wang, Wenping
Current successful methods of 3D scene perception rely on large-scale annotated point clouds, which are tedious and expensive to acquire. In this paper, we propose Model2Scene, a novel paradigm that learns free 3D scene representation from Computer…
External link:
http://arxiv.org/abs/2309.16956
This paper addresses the problem of 3D referring expression comprehension (REC) in autonomous driving scenarios, which aims to ground a natural language expression to the targeted region in LiDAR point clouds. Previous approaches for REC usually focus on the 2D…
External link:
http://arxiv.org/abs/2305.15765
Image instance segmentation is a fundamental research topic in autonomous driving, which is crucial for scene understanding and road safety. Advanced learning-based approaches often rely on costly 2D mask annotations for training. In this paper, …
External link:
http://arxiv.org/abs/2212.03504
LiDAR-based 3D object detection is an indispensable task in advanced autonomous driving systems. Though impressive detection results have been achieved by superior 3D detectors, they suffer from significant performance degradation when facing unseen…
External link:
http://arxiv.org/abs/2212.02845
3D object detection has received increasing attention in autonomous driving recently. Objects in 3D scenes are distributed with diverse orientations, yet ordinary detectors do not explicitly model the variations of rotation and reflection transformations. Co…
External link:
http://arxiv.org/abs/2211.11962
We investigate transductive zero-shot point cloud semantic segmentation, where the network is trained on seen objects and able to segment unseen objects. The 3D geometric elements are essential cues to imply a novel 3D object type. However, previous…
External link:
http://arxiv.org/abs/2210.09923
Author:
Ma, Yuexin, Wang, Tai, Bai, Xuyang, Yang, Huitong, Hou, Yuenan, Wang, Yaming, Qiao, Yu, Yang, Ruigang, Manocha, Dinesh, Zhu, Xinge
In recent years, vision-centric Bird's Eye View (BEV) perception has garnered significant interest from both industry and academia due to its inherent advantages, such as providing an intuitive representation of the world and being conducive to data…
External link:
http://arxiv.org/abs/2208.02797