Showing 1 - 10 of 20
for search: '"Pierre Sermanet"'
Self-supervised learning algorithms based on instance discrimination train encoders to be invariant to pre-defined transformations of the same instance. While most methods treat different views of the same image as positives for a contrastive loss, we are interested in using positives from other instances in the dataset.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::078021a4925e9b2ff5b9d37810315354
http://arxiv.org/abs/2104.14548
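The entry above describes contrastive learning with nearest-neighbour positives. A minimal NumPy sketch of that idea follows, assuming unit-normalized embeddings, a random support queue, and a temperature of 0.1 (all illustrative choices, not the authors' implementation):

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def nnclr_style_loss(z1, z2, queue, temperature=0.1):
    """InfoNCE where each positive is the nearest neighbour (in a support
    queue) of the first view, paired with the second augmented view."""
    z1, z2, queue = l2_normalize(z1), l2_normalize(z2), l2_normalize(queue)
    neighbours = queue[np.argmax(z1 @ queue.T, axis=1)]   # (B, D) NN positives
    logits = neighbours @ z2.T / temperature              # (B, B)
    # Row-wise log-softmax; diagonal entries are the positive pairs.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

rng = np.random.default_rng(0)
loss = nnclr_style_loss(rng.normal(size=(8, 32)), rng.normal(size=(8, 32)),
                        rng.normal(size=(256, 32)))
print(f"loss: {loss:.3f}")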
Published in:
CVPR
We present an approach for estimating the period with which an action is repeated in a video. The crux of the approach lies in constraining the period prediction module to use temporal self-similarity as an intermediate representation bottleneck that allows generalization to unseen repetitions in videos in the wild.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::cecba37100dcfb541eb266e0fa598bd4
http://arxiv.org/abs/2006.15418
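A small sketch of the temporal self-similarity bottleneck the abstract mentions: per-frame embeddings are collapsed into a (T, T) matrix of pairwise similarities before any period prediction happens. The softmax temperature here is an assumed value, not one from the paper:

```python
import numpy as np

def temporal_self_similarity(frame_emb, temperature=10.0):
    """Row-softmaxed matrix of pairwise negative squared distances between
    per-frame embeddings; the (T, T) bottleneck fed to period prediction."""
    sq = (frame_emb ** 2).sum(axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * frame_emb @ frame_emb.T
    logits = -dist2 / temperature
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

# A toy signal with period 4 yields a self-similarity matrix whose
# off-diagonal stripes repeat every 4 frames.
t = np.arange(32)
emb = np.stack([np.sin(2 * np.pi * t / 4), np.cos(2 * np.pi * t / 4)], axis=1)
print(temporal_self_similarity(emb).shape)   # (32, 32)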
Published in:
ICRA
Learning meaningful visual representations in an embedding space can facilitate generalization in downstream tasks such as action segmentation and imitation. In this paper, we learn a motion-centric representation of surgical video demonstrations by grouping them into action segments in a semi-supervised manner …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::902c60cf3715d4603f3af291338fde6e
http://arxiv.org/abs/2006.00545
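The paper's semi-supervised pipeline is more involved, but the core "grouping into action segments" idea can be illustrated with a nearest-centroid assignment over frame embeddings; the embeddings, centroids, and segment count below are illustrative assumptions:

```python
import numpy as np

def segment_frames(frame_emb, centroids):
    """Label each frame with its nearest action-segment centroid."""
    d = np.linalg.norm(frame_emb[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
emb = np.concatenate([rng.normal(0.0, 0.1, size=(20, 8)),   # segment A frames
                      rng.normal(1.0, 0.1, size=(20, 8))])  # segment B frames
centroids = np.stack([emb[:20].mean(axis=0), emb[20:].mean(axis=0)])
print(segment_frames(emb, centroids))   # 20 zeros followed by 20 ones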
Published in:
ICRA
We propose a self-supervised approach for learning representations of objects from monocular videos and demonstrate it is particularly useful for robotics. The main contributions of this paper are: 1) a self-supervised model called Object-Contrastive Networks (OCN) …
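One hedged reading of the object-contrastive idea is an n-pairs-style objective in which each object embedding is attracted to its nearest neighbour among object embeddings from a nearby frame and repelled from the rest; the shapes and temperature below are assumptions, not the paper's code:

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def object_nn_loss(objs_t, objs_t1, temperature=0.1):
    """Attract each object embedding at time t to its nearest neighbour among
    the object embeddings at time t+1; repel it from the others."""
    sim = objs_t @ objs_t1.T / temperature        # (N, M) similarities
    nn_idx = sim.argmax(axis=1)                   # nearest neighbour as positive
    return -log_softmax(sim)[np.arange(len(objs_t)), nn_idx].mean()

rng = np.random.default_rng(0)
print(object_nn_loss(rng.normal(size=(6, 16)), rng.normal(size=(7, 16))))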
Author:
Corey Lynch, Pierre Sermanet
Published in:
Robotics: Science and Systems
Natural language is perhaps the most flexible and intuitive way for humans to communicate tasks to a robot. Prior work in imitation learning typically requires each task be specified with a task id or goal image -- something that is often impractical in open-world environments.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ee0aa068767bc3f0f56c7613454bddca
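A minimal sketch of language-conditioned imitation as the abstract frames it: the policy consumes the observation concatenated with an embedding of the instruction, and would be trained by behavioural cloning on demonstrated actions. The embed_text stub and all layer sizes are hypothetical stand-ins, not the paper's architecture:

```python
import numpy as np

def embed_text(instruction, dim=32):
    # Deterministic stand-in for a real sentence encoder.
    vec = np.zeros(dim)
    for tok in instruction.lower().split():
        vec[sum(map(ord, tok)) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64 + 32, 8))   # one linear layer: obs+lang -> action

def policy(obs, instruction):
    x = np.concatenate([obs, embed_text(instruction)])
    return np.tanh(x @ W)                       # 8-dim continuous action

print(policy(rng.normal(size=64), "open the drawer").shape)   # (8,)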
Published in:
CVPR
We introduce a self-supervised representation learning method based on the task of temporal alignment between videos. The method trains a network using temporal cycle consistency (TCC), a differentiable cycle-consistency loss that can be used to find correspondences across time in multiple videos.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::67d32692049c2b66c593db0943015f5a
http://arxiv.org/abs/1904.07846
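The cycle-consistency loss lends itself to a compact sketch: take the soft nearest neighbour of a frame from one video in the other video, cycle back, and penalise landing away from the starting index. This follows the cycle-back regression idea in spirit; the plain squared-error penalty is a simplification of the paper's formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cycle_consistency_error(U, V):
    """U: (Tu, D), V: (Tv, D) frame embeddings. Returns the mean squared
    deviation of the cycled-back index from the starting index."""
    alpha = softmax(U @ V.T)              # soft NN of each u_i in V
    v_soft = alpha @ V                    # (Tu, D) soft neighbours
    beta = softmax(v_soft @ U.T)          # cycle back to U
    idx = beta @ np.arange(len(U))        # expected landing index
    return np.mean((idx - np.arange(len(U))) ** 2)

rng = np.random.default_rng(0)
U = rng.normal(size=(20, 16))
print(cycle_consistency_error(U, U + 0.01 * rng.normal(size=U.shape)))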
Published in:
IROS
In this work we explore a new approach for robots to teach themselves about the world simply by observing it. In particular we investigate the effectiveness of learning task-agnostic representations for continuous control tasks. We extend Time-Contrastive Networks (TCN) …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::fa1350062bae313ea32624c28b4aed79
http://arxiv.org/abs/1808.00928
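Time-contrastive methods of this kind rest on how triplets are sampled in time: frames close to the anchor act as positives, temporally distant frames as negatives. A sketch with assumed window and margin sizes, not the paper's values:

```python
import numpy as np

def sample_time_contrastive_triplet(num_frames, pos_window=2, neg_margin=10,
                                    rng=np.random.default_rng(0)):
    """Anchor/positive are frames close in time; the negative is temporally
    distant from the anchor."""
    assert num_frames > 2 * neg_margin
    anchor = int(rng.integers(num_frames))
    lo, hi = max(0, anchor - pos_window), min(num_frames - 1, anchor + pos_window)
    positive = int(rng.integers(lo, hi + 1))
    negative = anchor
    while abs(negative - anchor) < neg_margin:   # resample until far in time
        negative = int(rng.integers(num_frames))
    return anchor, positive, negative

print(sample_time_contrastive_triplet(100))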
Author:
Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, Google Brain
Published in:
ICRA
We propose a self-supervised approach for learning representations and robotic behaviors entirely from unlabeled videos recorded from multiple viewpoints, and study how this representation can be used in two robotic imitation settings: imitating object interactions from videos of humans, and imitating human poses.
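A hedged sketch of the multi-view time-contrastive objective: the anchor is a frame from one camera, the positive is the simultaneous frame from a second camera, and the negative is a temporally distant frame from the anchor's own view. The margin and the synthetic embeddings are illustrative:

```python
import numpy as np

def tcn_triplet_loss(view1, view2, t, t_neg, margin=0.5):
    """view1, view2: (T, D) per-frame embeddings of two synchronized views."""
    anchor, positive, negative = view1[t], view2[t], view1[t_neg]
    d_pos = np.sum((anchor - positive) ** 2)   # same moment, other view
    d_neg = np.sum((anchor - negative) ** 2)   # other moment, same view
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
base = rng.normal(size=(50, 32))
v1 = base + 0.05 * rng.normal(size=base.shape)
v2 = base + 0.05 * rng.normal(size=base.shape)
print(tcn_triplet_loss(v1, v2, t=10, t_neg=40))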
Published in:
CVPR Workshops
We propose a self-supervised approach for learning representations of relationships between humans and their environment, including object interactions, attributes, and body pose, entirely from unlabeled videos recorded from multiple viewpoints (Fig. 1).
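One way such a viewpoint-invariant embedding supports imitation is simple retrieval: embed a human demonstration frame and look up the closest robot state in the shared space. The random embeddings below are stand-ins for a trained encoder:

```python
import numpy as np

def nearest_robot_state(human_emb, robot_embs):
    """Index of the robot frame whose embedding best matches the human frame."""
    return int(np.argmin(np.linalg.norm(robot_embs - human_emb, axis=1)))

rng = np.random.default_rng(0)
robot_embs = rng.normal(size=(100, 32))
# A slightly perturbed copy of frame 42 retrieves frame 42.
print(nearest_robot_state(robot_embs[42] + 0.01, robot_embs))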
Published in:
Robotics: Science and Systems
Reward function design and exploration time are arguably the biggest obstacles to the deployment of reinforcement learning (RL) agents in the real world. In many real-world tasks, designing a reward function takes considerable hand engineering and often requires additional sensors to be installed just to measure whether the task has been completed …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a2d1b0a549aa4fc94c078b677b673d1e
http://arxiv.org/abs/1612.06699
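A minimal sketch of a perceptual reward in this spirit: score the current observation by the similarity of its visual features to features of "task completed" frames from demonstrations. The feature vectors and the exponential scaling are assumptions, not the paper's formulation:

```python
import numpy as np

def perceptual_reward(obs_feat, demo_success_feats, scale=1.0):
    """Higher reward the closer the observation is to any 'task done' frame."""
    d = np.linalg.norm(demo_success_feats - obs_feat, axis=1)
    return float(np.exp(-scale * d.min()))

rng = np.random.default_rng(0)
goal_feats = rng.normal(size=(5, 64))           # features of success frames
obs = goal_feats[0] + 0.05 * rng.normal(size=64)
print(perceptual_reward(obs, goal_feats))       # close to 1 near a goal frame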