Showing 1 - 10 of 379 results for search: '"Wang Lirui"'
Author:
Wang, Yanwei, Wang, Lirui, Du, Yilun, Sundaralingam, Balakumar, Yang, Xuning, Chao, Yu-Wei, Perez-D'Arpino, Claudia, Fox, Dieter, Shah, Julie
Generative policies trained with human demonstrations can autonomously accomplish multimodal, long-horizon tasks. However, during inference, humans are often removed from the policy execution loop, limiting the ability to guide a pre-trained policy…
External link:
http://arxiv.org/abs/2411.16627
Author:
Hua, Pu, Liu, Minghuan, Macaluso, Annabella, Lin, Yunfeng, Zhang, Weinan, Xu, Huazhe, Wang, Lirui
Robotic simulation today remains challenging to scale up due to the human effort required to create diverse simulation tasks and scenes. Simulation-trained policies also face scalability issues, as many sim-to-real methods focus on a single task…
External link:
http://arxiv.org/abs/2410.03645
Published in:
NeurIPS 2024
One of the roadblocks for training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train with one specific embodiment for one task, which is expensive and prone to overfitting. This work studies…
External link:
http://arxiv.org/abs/2409.20537
This paper presents T3: Transferable Tactile Transformers, a framework for tactile representation learning that scales across multiple sensors and tasks. T3 is designed to overcome the contemporary issue that camera-based tactile sensing is extremely…
External link:
http://arxiv.org/abs/2406.13640
Training general robotic policies from heterogeneous data for different tasks is a significant challenge. Existing robotic datasets vary across modalities such as color, depth, tactile, and proprioceptive information, and are collected in different…
External link:
http://arxiv.org/abs/2402.02511
Author:
Wang, Lirui, Ling, Yiyang, Yuan, Zhecheng, Shridhar, Mohit, Bao, Chen, Qin, Yuzhe, Wang, Bailin, Xu, Huazhe, Wang, Xiaolong
Published in:
International Conference on Learning Representations (ICLR), 2024
Collecting large amounts of real-world interaction data to train general robotic policies is often prohibitively expensive, thus motivating the use of simulation data. However, existing methods for data generation have generally focused on scene-level…
External link:
http://arxiv.org/abs/2310.01361
Fleets of robots ingest massive amounts of heterogeneous streaming data silos generated by interacting with their environments, far more than what can be stored or transmitted with ease. At the same time, teams of robots should co-acquire diverse skills…
External link:
http://arxiv.org/abs/2310.01362
Author:
Zhou, Guangyao, Gothoskar, Nishad, Wang, Lirui, Tenenbaum, Joshua B., Gutfreund, Dan, Lázaro-Gredilla, Miguel, George, Dileep, Mansinghka, Vikash K.
The ability to perceive and understand 3D scenes is crucial for many applications in computer vision and robotics. Inverse graphics is an appealing approach to 3D scene understanding that aims to infer the 3D scene structure from 2D images. In this paper…
External link:
http://arxiv.org/abs/2302.03744
Expert demonstrations are a rich source of supervision for training visual robotic manipulation policies, but imitation learning methods often require either a large number of demonstrations or expensive online expert supervision to learn reactive closed-loop…
External link:
http://arxiv.org/abs/2301.08556
Decentralized learning has been advocated and widely deployed to make efficient use of distributed datasets, with an extensive focus on supervised learning (SL) problems. Unfortunately, the majority of real-world data are unlabeled and can be highly…
External link:
http://arxiv.org/abs/2210.10947