Showing 1 - 10 of 36 for search: '"Ding, Runyu"'
Author:
Yang, Shiqi, Liu, Minghuan, Qin, Yuzhe, Ding, Runyu, Li, Jialong, Cheng, Xuxin, Yang, Ruihan, Yi, Sha, Wang, Xiaolong
Learning from demonstrations has been shown to be an effective approach to robotic manipulation, especially with the recently collected large-scale robot data with teleoperation systems. Building an efficient teleoperation system across diverse robot…
External link:
http://arxiv.org/abs/2408.11805
Author:
Ding, Runyu, Qin, Yuzhe, Zhu, Jiyue, Jia, Chengzhe, Yang, Shiqi, Yang, Ruihan, Qi, Xiaojuan, Wang, Xiaolong
Teleoperation is a crucial tool for collecting human demonstrations, but controlling robots with bimanual dexterous hands remains a challenge. Existing teleoperation systems struggle to handle the complexity of coordinating two hands for intricate…
External link:
http://arxiv.org/abs/2407.03162
Rapid advancements in 3D vision-language (3D-VL) tasks have opened up new avenues for human interaction with embodied agents or robots using natural language. Despite this progress, we find a notable limitation: existing 3D-VL models exhibit…
External link:
http://arxiv.org/abs/2403.14760
There is a sensory gulf between the Earth that humans inhabit and the digital realms in which modern AI agents are created. To develop AI agents that can sense, think, and act as flexibly as humans in real-world settings, it is imperative to bridge…
External link:
http://arxiv.org/abs/2402.03310
Open-world instance-level scene understanding aims to locate and recognize unseen object categories that are not present in the annotated dataset. This task is challenging because the model needs to both localize novel 3D objects and infer their…
External link:
http://arxiv.org/abs/2308.00353
We propose a lightweight and scalable Regional Point-Language Contrastive learning framework, namely RegionPLC, for open-world 3D scene understanding, aiming to identify and recognize open-set objects and categories. Specifically, based on…
External link:
http://arxiv.org/abs/2304.00962
Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated label space. The recent breakthrough of 2D open-vocabulary perception is largely driven by Internet-scale paired image-text data with rich…
External link:
http://arxiv.org/abs/2211.16312
Despite substantial progress in 3D object detection, advanced 3D detectors often suffer from heavy computation overheads. To this end, we explore the potential of knowledge distillation (KD) for developing efficient 3D object detectors, focusing on…
External link:
http://arxiv.org/abs/2205.15156
Deep learning approaches achieve prominent success in 3D semantic segmentation. However, collecting densely annotated real-world 3D datasets is extremely time-consuming and expensive. Training models on synthetic data and generalizing to real-world…
External link:
http://arxiv.org/abs/2204.01599
Author:
Chen, Pengfei, Wei, Jinhua, Ding, Runyu, Chen, Mingjian, Zhao, Diming, Li, Haochao, Chen, Liang, Sun, Xiaogang, Qian, Xiangyang, Pu, Jundong, Chen, Zujun, Wang, Liqing
Published in:
International Journal of Cardiology, 15 November 2024, 415