Showing 1 - 10 of 31 for search: '"Cong, Peishan"'
Author:
Cong, Peishan, Wang, Ziyi, Dou, Zhiyang, Ren, Yiming, Yin, Wei, Cheng, Kai, Sun, Yujing, Long, Xiaoxiao, Zhu, Xinge, Ma, Yuexin
Language-guided scene-aware human motion generation has great significance for entertainment and robotics. In response to the limitations of existing datasets, we introduce LaserHuman, a pioneering dataset engineered to revolutionize Scene-Text-to-Mo
External link:
http://arxiv.org/abs/2403.13307
Author:
Xu, Yiteng, Cong, Peishan, Yao, Yichen, Chen, Runnan, Hou, Yuenan, Zhu, Xinge, He, Xuming, Yu, Jingyi, Ma, Yuexin
Human-centric scene understanding is significant for real-world applications, but it is extremely challenging due to the existence of diverse human poses and actions, complex human-environment interactions, severe occlusions in crowds, etc. In this p
External link:
http://arxiv.org/abs/2307.14392
Author:
Lin, Zhenxiang, Peng, Xidong, Cong, Peishan, Zheng, Ge, Sun, Yujin, Hou, Yuenan, Zhu, Xinge, Yang, Sibei, Ma, Yuexin
We introduce the task of 3D visual grounding in large-scale dynamic scenes based on natural linguistic descriptions and online captured multi-modal visual data, including 2D images and 3D LiDAR point clouds. We present a novel method, dubbed WildRefe
External link:
http://arxiv.org/abs/2304.05645
Author:
Cong, Peishan, Xu, Yiteng, Ren, Yiming, Zhang, Juze, Xu, Lan, Wang, Jingya, Yu, Jingyi, Ma, Yuexin
Depth estimation is usually ill-posed and ambiguous for monocular camera-based 3D multi-person pose estimation. Since LiDAR can capture accurate depth information in long-range scenes, it can benefit both the global localization of individuals and th
External link:
http://arxiv.org/abs/2211.16951
Human gait recognition is crucial in multimedia, enabling identification through walking patterns without direct interaction, enhancing the integration across various media forms in real-world applications like smart homes, healthcare and non-intrusi
External link:
http://arxiv.org/abs/2211.12371
Author:
Ren, Yiming, Zhao, Chengfeng, He, Yannan, Cong, Peishan, Liang, Han, Yu, Jingyi, Xu, Lan, Ma, Yuexin
Published in:
IEEE Transactions on Visualization and Computer Graphics (Volume 29, Issue 5, May 2023)
We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, only using single LiDAR and 4 IMUs, which are set up conveniently and worn ligh
External link:
http://arxiv.org/abs/2205.15410
Author:
Cong, Peishan, Zhu, Xinge, Qiao, Feng, Ren, Yiming, Peng, Xidong, Hou, Yuenan, Xu, Lan, Yang, Ruigang, Manocha, Dinesh, Ma, Yuexin
Accurately detecting and tracking pedestrians in 3D space is challenging due to large variations in rotations, poses and scales. The situation becomes even worse for dense crowds with severe occlusions. However, existing benchmarks either only provid
External link:
http://arxiv.org/abs/2204.01026
Real scans always miss partial geometries of objects due to self-occlusions, external occlusions, and limited sensor resolutions. Point cloud completion aims to infer the complete shapes for incomplete 3D scans of objects. Current deep learning-b
External link:
http://arxiv.org/abs/2203.10569
A thorough and holistic scene understanding is crucial for autonomous vehicles, where LiDAR semantic segmentation plays an indispensable role. However, most existing methods focus on the network design while neglecting the inherent difficulty, imbala
External link:
http://arxiv.org/abs/2103.14269
Author:
Sun, Dongkai, Cong, Peishan, Guan, Fengju, Liu, Shuai, Sun, Lijiang, Zhang, Guiming
Published in:
Disease Markers, 9/16/2021, pp. 1-7.