Showing 1 - 10 of 29,806 for search: '"A. Tompson"'
Author:
Blackett-Ord, Carol, Turner, Simon
Published in:
The Volume of the Walpole Society, 2008 Jan 01. 70, 1-206.
External link:
https://www.jstor.org/stable/41830019
We discuss some recurring issues in how RepNet has been evaluated in various papers. To mitigate these issues, we report RepNet performance results on different datasets, and release the evaluation code and the RepNet checkpoint used to obtain these results.
External link:
http://arxiv.org/abs/2411.08878
Author:
Ma, Yecheng Jason, Hejna, Joey, Wahid, Ayzaan, Fu, Chuyuan, Shah, Dhruv, Liang, Jacky, Xu, Zhuo, Kirmani, Sean, Xu, Peng, Driess, Danny, Xiao, Ted, Tompson, Jonathan, Bastani, Osbert, Jayaraman, Dinesh, Yu, Wenhao, Zhang, Tingnan, Sadigh, Dorsa, Xia, Fei
Predicting temporal progress from visual trajectories is important for intelligent robots that can learn, adapt, and improve. However, learning such a progress estimator, or temporal value function, across different tasks and domains requires both a la…
External link:
http://arxiv.org/abs/2411.04549
Author:
Zhao, Tony Z., Tompson, Jonathan, Driess, Danny, Florence, Pete, Ghasemipour, Kamyar, Finn, Chelsea, Wahid, Ayzaan
Recent work has shown promising results for learning end-to-end robot policies using imitation learning. In this work we address the question of how far we can push imitation learning for challenging dexterous manipulation tasks. We show that a simple…
External link:
http://arxiv.org/abs/2410.13126
Published in:
SeeNews Research & Profiles (Company Profiles). 2012, p7431-7433. 3p.
We introduce a dataset of annotations of temporal repetitions in videos. The dataset, OVR (pronounced as over), contains annotations for over 72K videos, with each annotation specifying the number of repetitions, the start and end time of the repetitions…
External link:
http://arxiv.org/abs/2407.17085
Conference
This result cannot be displayed to unauthenticated users.
You must sign in to view this result.
Author:
Pešić, Julijana
Published in:
Зборник Матице српске за књижевност и језик / Matica Srpska Journal of Literature and Language. 59(1):147-158
External link:
https://www.ceeol.com/search/article-detail?id=690662
We introduce a versatile $\textit{flexible-captioning}$ vision-language model (VLM) capable of generating region-specific descriptions of varying lengths. The model, FlexCap, is trained to produce length-conditioned captions for input bounding boxes…
External link:
http://arxiv.org/abs/2403.12026
Author:
Belkhale, Suneel, Ding, Tianli, Xiao, Ted, Sermanet, Pierre, Vuong, Quan, Tompson, Jonathan, Chebotar, Yevgen, Dwibedi, Debidatta, Sadigh, Dorsa
Language provides a way to break down complex concepts into digestible pieces. Recent works in robot imitation learning use language-conditioned policies that predict actions given visual observations and the high-level task specified in language. Th…
External link:
http://arxiv.org/abs/2403.01823