Showing 1 - 10 of 2,274 results
for search: '"Hausman, K."'
Author:
Black, Kevin, Brown, Noah, Driess, Danny, Esmail, Adnan, Equi, Michael, Finn, Chelsea, Fusai, Niccolo, Groom, Lachy, Hausman, Karol, Ichter, Brian, Jakubczak, Szymon, Jones, Tim, Ke, Liyiming, Levine, Sergey, Li-Bell, Adrian, Mothukuri, Mohith, Nair, Suraj, Pertsch, Karl, Shi, Lucy Xiaoyang, Tanner, James, Vuong, Quan, Walling, Anna, Wang, Haohuan, Zhilinsky, Ury
Robot learning holds tremendous promise to unlock the full potential of flexible, general, and dexterous robot systems, as well as to address some of the deepest questions in artificial intelligence. However, bringing robot learning to the level of …
External link:
http://arxiv.org/abs/2410.24164
Author:
Burns, Kaylee, Jain, Ajinkya, Go, Keegan, Xia, Fei, Stark, Michael, Schaal, Stefan, Hausman, Karol
Large Language Models (LLMs) have been successful at generating robot policy code, but so far these results have been limited to high-level tasks that do not require precise movement. It is an open question how well such approaches work for tasks …
External link:
http://arxiv.org/abs/2404.06645
Author:
Sundaresan, Priya, Vuong, Quan, Gu, Jiayuan, Xu, Peng, Xiao, Ted, Kirmani, Sean, Yu, Tianhe, Stark, Michael, Jain, Ajinkya, Hausman, Karol, Sadigh, Dorsa, Bohg, Jeannette, Schaal, Stefan
Natural language and images are commonly used as goal representations in goal-conditioned imitation learning (IL). However, natural language can be ambiguous and images can be over-specified. In this work, we propose hand-drawn sketches as a modality …
External link:
http://arxiv.org/abs/2403.02709
Author:
Nasiriany, Soroush, Xia, Fei, Yu, Wenhao, Xiao, Ted, Liang, Jacky, Dasgupta, Ishita, Xie, Annie, Driess, Danny, Wahid, Ayzaan, Xu, Zhuo, Vuong, Quan, Zhang, Tingnan, Lee, Tsang-Wei Edward, Lee, Kuang-Huei, Xu, Peng, Kirmani, Sean, Zhu, Yuke, Zeng, Andy, Hausman, Karol, Heess, Nicolas, Finn, Chelsea, Levine, Sergey, Ichter, Brian
Vision language models (VLMs) have shown impressive capabilities across a variety of tasks, from logical reasoning to visual understanding. This opens the door to richer interaction with the world, for example robotic control. However, VLMs produce …
External link:
http://arxiv.org/abs/2402.07872
Author:
Ahn, Michael, Dwibedi, Debidatta, Finn, Chelsea, Arenas, Montse Gonzalez, Gopalakrishnan, Keerthana, Hausman, Karol, Ichter, Brian, Irpan, Alex, Joshi, Nikhil, Julian, Ryan, Kirmani, Sean, Leal, Isabel, Lee, Edward, Levine, Sergey, Lu, Yao, Maddineni, Sharath, Rao, Kanishka, Sadigh, Dorsa, Sanketi, Pannag, Sermanet, Pierre, Vuong, Quan, Welker, Stefan, Xia, Fei, Xiao, Ted, Xu, Peng, Xu, Steve, Xu, Zhuo
Foundation models that incorporate language, vision, and more recently actions have revolutionized the ability to harness internet scale data to reason about useful tasks. However, one of the key challenges of training embodied foundation models is …
External link:
http://arxiv.org/abs/2401.12963
Author:
Firoozi, Roya, Tucker, Johnathan, Tian, Stephen, Majumdar, Anirudha, Sun, Jiankai, Liu, Weiyu, Zhu, Yuke, Song, Shuran, Kapoor, Ashish, Hausman, Karol, Ichter, Brian, Driess, Danny, Wu, Jiajun, Lu, Cewu, Schwager, Mac
We survey applications of pretrained foundation models in robotics. Traditional deep learning models in robotics are trained on small datasets tailored for specific tasks, which limits their adaptability across diverse applications. In contrast, …
External link:
http://arxiv.org/abs/2312.07843
Author:
Li, Chengshu, Liang, Jacky, Zeng, Andy, Chen, Xinyun, Hausman, Karol, Sadigh, Dorsa, Levine, Sergey, Fei-Fei, Li, Xia, Fei, Ichter, Brian
Code provides a general syntactic structure to build complex programs and perform precise computations when paired with a code interpreter - we hypothesize that language models (LMs) can leverage code-writing to improve Chain of Thought reasoning not …
External link:
http://arxiv.org/abs/2312.04474
Author:
Leal, Isabel, Choromanski, Krzysztof, Jain, Deepali, Dubey, Avinava, Varley, Jake, Ryoo, Michael, Lu, Yao, Liu, Frederick, Sindhwani, Vikas, Vuong, Quan, Sarlos, Tamas, Oslund, Ken, Hausman, Karol, Rao, Kanishka
We present Self-Adaptive Robust Attention for Robotics Transformers (SARA-RT): a new paradigm for addressing the emerging challenge of scaling up Robotics Transformers (RT) for on-robot deployment. SARA-RT relies on the new method of fine-tuning …
External link:
http://arxiv.org/abs/2312.01990
Contemporary artificial intelligence systems exhibit rapidly growing abilities accompanied by the growth of required resources, expansive datasets and corresponding investments into computing infrastructure. Although earlier successes predominantly …
External link:
http://arxiv.org/abs/2312.01939
Inspired by the success of transfer learning in computer vision, roboticists have investigated visual pre-training as a means to improve the learning efficiency and generalization ability of policies learned from pixels. To that end, past work has …
External link:
http://arxiv.org/abs/2312.12444