Showing 1 - 10 of 23 for search: '"Yusuf Aytar"'
Self-supervised learning algorithms based on instance discrimination train encoders to be invariant to pre-defined transformations of the same instance. While most methods treat different views of the same image as positives for a contrastive loss, w
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::078021a4925e9b2ff5b9d37810315354
http://arxiv.org/abs/2104.14548
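The snippet above describes instance-discrimination training, where two views of the same image are treated as a positive pair for a contrastive loss. A minimal NumPy sketch of the standard InfoNCE objective commonly used for this (an illustration of the general technique, not the paper's exact formulation):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE) loss: each anchor should match its own
    positive view and repel every other sample in the batch."""
    # L2-normalise embeddings so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) pairwise similarity
    # Row i's correct "class" is column i (its own positive view).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Two slightly perturbed "views" of the same toy instances.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss = info_nce(x + 0.01 * rng.normal(size=x.shape), x)
```

Minimising this loss pulls matched views together while pushing all other batch samples apart, which is what makes the learned encoder invariant to the chosen transformations.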
Published in:
Robotics: Science and Systems
Imitation learning is an effective tool for robotic learning tasks where specifying a reinforcement learning (RL) reward is not feasible or where the exploration problem is particularly difficult. Imitation, typically behavior cloning or inverse RL,
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::adca0fb288bff4c4349ba0f3099100b4
Published in:
CVPR
We present an approach for estimating the period with which an action is repeated in a video. The crux of the approach lies in constraining the period prediction module to use temporal self-similarity as an intermediate representation bottleneck that
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::cecba37100dcfb541eb266e0fa598bd4
http://arxiv.org/abs/2006.15418
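The abstract above uses temporal self-similarity as an intermediate representation for period estimation. A minimal sketch of such a matrix computed from per-frame embeddings (toy data and the negative-squared-distance similarity are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def self_similarity(embeddings):
    """Temporal self-similarity matrix: pairwise negative squared
    Euclidean distance between per-frame embeddings. A video that
    repeats an action every k frames produces a diagonal-stripe
    pattern with period k in this matrix."""
    d = embeddings[:, None, :] - embeddings[None, :, :]  # (T, T, D)
    return -np.sum(d ** 2, axis=-1)                      # (T, T)

# Toy periodic "video": frame embeddings repeat every 4 frames.
t = np.arange(16)
frames = np.stack([np.sin(2 * np.pi * t / 4),
                   np.cos(2 * np.pi * t / 4)], axis=1)
S = self_similarity(frames)
```

Because the matrix depends only on distances between frames, it acts as a bottleneck: a downstream period predictor sees the repetition structure but not the raw appearance of the frames.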
Author:
Thomas Lampe, Yusuf Aytar, Jackie Kay, Konstantinos Bousmalis, Yuxiang Zhou, Rae Jeong, David Khosid, Francesco Nori
Published in:
ICRA
Collecting and automatically obtaining reward signals from real robotic visual data for the purposes of training reinforcement learning algorithms can be quite challenging and time-consuming. Methods for utilizing unlabeled data can have a huge poten
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6f11610deddc8cdc230ba99eac4b212c
http://arxiv.org/abs/1910.09470
Author:
Nicholas Hynes, Ferda Ofli, Yusuf Aytar, Amaia Salvador, Ingmar Weber, Javier Marin, Aritro Biswas, Antonio Torralba
Published in:
MIT web domain
In this paper, we introduce Recipe1M+, a new large-scale, structured corpus of over one million cooking recipes and 13 million food images. As the largest publicly available collection of recipe data, Recipe1M+ affords the ability to train high-capac
Published in:
CVPR
We introduce a self-supervised representation learning method based on the task of temporal alignment between videos. The method trains a network using temporal cycle consistency (TCC), a differentiable cycle-consistency loss that can be used to find
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::67d32692049c2b66c593db0943015f5a
http://arxiv.org/abs/1904.07846
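The temporal cycle consistency idea above can be illustrated with a hard nearest-neighbour cycle check between two embedding sequences (the paper trains with a differentiable soft version; the toy sequences here are assumptions for illustration):

```python
import numpy as np

def cycle_back_index(u, v, i):
    """Cycle-consistency check: frame i of sequence u is matched to its
    nearest frame in sequence v, which is then matched back to u.
    The two videos are well aligned when the cycle returns to i."""
    j = int(np.argmin(np.sum((v - u[i]) ** 2, axis=1)))  # u[i] -> nearest in v
    k = int(np.argmin(np.sum((u - v[j]) ** 2, axis=1)))  # v[j] -> back to u
    return k

# Two toy embedding sequences of the same action at the same pace.
t = np.linspace(0, 1, 10)[:, None]
u = np.hstack([t, t ** 2])
v = u + 0.001  # near-identical second "video"
cycles = [cycle_back_index(u, v, i) for i in range(len(u))]
```

Training a network so that such cycles close for many video pairs yields frame embeddings that align corresponding phases of an action across videos.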
Author:
Misha Denil, David J. Barker, Sergio Gomez Colmenarejo, Ziyu Wang, Nando de Freitas, Ksenia Konyushova, Mel Vecerik, Serkan Cabi, David Budden, Jonathan Scholz, Alexander Novikov, Scott Reed, Yusuf Aytar, Oleg P. Sushkov, Rae Jeong, Konrad Zolna
Published in:
Robotics: Science and Systems
We present a framework for data-driven robotics that makes use of a large dataset of recorded robot experience and scales to several tasks using learned reward functions. We show how to apply this framework to accomplish three different object manipu
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::949a18797a593e4bc0f952ace525ffc7
Author:
Yusuf Aytar, Andrew Zisserman
Published in:
Computer Vision and Image Understanding. 138:114-123
EE-SVM, a part-based transfer regularization method that boosts E-SVM, is introduced. EE-SVM is further improved by transferring the statistics between the parts. All the proposed objectives result in convex formulations. Experimentally shown that EE-SV
Published in:
other univ website
People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal sc
Author:
Ferda Ofli, Amaia Salvador, Javier Marin, Ingmar Weber, Yusuf Aytar, Antonio Torralba, Nicholas Hynes
Published in:
MIT web domain
CVPR
In this paper, we introduce Recipe1M, a new large-scale, structured corpus of over 1m cooking recipes and 800k food images. As the largest publicly available collection of recipe data, Recipe1M affords the ability to train high-capacity models on ali
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a897ad920c8f55a1421fa532258547b3
https://hdl.handle.net/1721.1/122660