Showing 1 - 10 of 30 results for search: '"Khansari, Mohi"'
Author:
Sorokin, Maks, Fu, Chuyuan, Tan, Jie, Liu, C. Karen, Bai, Yunfei, Lu, Wenlong, Ha, Sehoon, Khansari, Mohi
As robots become more prevalent, optimizing their design for better performance and efficiency is becoming increasingly important. However, current robot design practices overlook the impact of perception and design choices on a robot's learning capabilities…
External link:
http://arxiv.org/abs/2303.13390
Recent progress in end-to-end Imitation Learning approaches has shown promising results and generalization capabilities on mobile manipulation tasks. Such models are seeing increasing deployment in real-world settings, where scaling up requires robot…
External link:
http://arxiv.org/abs/2302.04334
Learning from demonstration is a proven technique to teach robots new skills. Data quality and quantity play a critical role in the performance of models trained using data collected from human demonstrations. In this paper we enhance an existing teleoperation…
External link:
http://arxiv.org/abs/2211.03020
In this work we investigate and demonstrate benefits of a Bayesian approach to imitation learning from multiple sensor inputs, as applied to the task of opening office doors with a mobile manipulator. Augmenting policies with additional sensor inputs…
External link:
http://arxiv.org/abs/2202.07600
Author:
Jang, Eric, Irpan, Alex, Khansari, Mohi, Kappler, Daniel, Ebert, Frederik, Lynch, Corey, Levine, Sergey, Finn, Chelsea
Published in:
Conference on Robot Learning (pp. 991-1002). 2022 Jan 11
In this paper, we study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks, a long-standing challenge in robot learning. We approach the challenge from an imitation learning perspective, aiming to study how…
External link:
http://arxiv.org/abs/2202.02005
Author:
Khansari, Mohi, Ho, Daniel, Du, Yuqing, Fuentes, Armando, Bennice, Matthew, Sievers, Nicolas, Kirmani, Sean, Bai, Yunfei, Jang, Eric
Recent work in visual end-to-end learning for robotics has shown the promise of imitation learning across a variety of tasks. Such approaches are expensive both because they require large amounts of real world training demonstrations and because identifying…
External link:
http://arxiv.org/abs/2202.01862
Author:
Lu, Yao, Hausman, Karol, Chebotar, Yevgen, Yan, Mengyuan, Jang, Eric, Herzog, Alexander, Xiao, Ted, Irpan, Alex, Khansari, Mohi, Kalashnikov, Dmitry, Levine, Sergey
Robotic skills can be learned via imitation learning (IL) using user-provided demonstrations, or via reinforcement learning (RL) using large amounts of autonomously collected experience. Both methods have complementary strengths and weaknesses: RL can…
External link:
http://arxiv.org/abs/2111.05424
The success of deep reinforcement learning (RL) and imitation learning (IL) in vision-based robotic manipulation typically hinges on the expense of large scale data collection. With simulation, data to train a policy can be collected efficiently at scale…
External link:
http://arxiv.org/abs/2011.03148
Deep neural network based reinforcement learning (RL) can learn appropriate visual representations for complex tasks like vision-based robotic grasping without the need for manually engineering or prior learning a perception system. However, data for…
External link:
http://arxiv.org/abs/2006.09001
Complex object manipulation tasks often span over long sequences of operations. Task planning over long-time horizons is a challenging and open problem in robotics, and its complexity grows exponentially with an increasing number of subtasks. In this…
External link:
http://arxiv.org/abs/2006.04843