Showing 1 - 7 of 7 for the search: "Asano, YM"
Self-supervised visual representation learning has recently attracted significant research interest. While a common way to evaluate self-supervised representations is through transfer to various downstream tasks, we instead investigate the problem of …
External link:
https://explore.openaire.eu/search/publication?articleId=od______1064::849f33e384846a3a21ca018581a1d983
https://ora.ox.ac.uk/objects/uuid:dfdcf107-438a-4779-bd0c-e228db70bc61
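As a concrete illustration of the transfer evaluation this abstract mentions, below is a minimal linear-probe sketch in PyTorch: a pretrained backbone is frozen and only a linear classifier is trained on its features. The stand-in backbone, feature dimension (512), class count (10), and dummy data are assumptions for illustration, not details from the paper.

    import torch
    import torch.nn as nn

    # Stand-in encoder; in practice this would be a pretrained backbone.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
    for p in backbone.parameters():
        p.requires_grad = False  # the representation stays fixed

    probe = nn.Linear(512, 10)  # the only trainable part
    opt = torch.optim.SGD(probe.parameters(), lr=0.1)

    x = torch.randn(64, 3, 32, 32)    # dummy batch of images
    y = torch.randint(0, 10, (64,))   # dummy downstream labels
    with torch.no_grad():
        feats = backbone(x)           # frozen features
    loss = nn.functional.cross_entropy(probe(feats), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

Downstream accuracy of the probe then serves as a proxy for representation quality.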
Computer vision has long relied on ImageNet and other large datasets of images sampled from the Internet for pretraining models. However, these datasets have ethical and technical shortcomings, such as containing personal information taken without consent …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8d72aea4e8ea2b29b794cca9257e006e
https://ora.ox.ac.uk/objects/uuid:6106e1f0-7678-4771-9cb0-62527bce1b43
Author:
Asano, YM
The recent rise in machine learning has been largely made possible by novel algorithms, such as convolutional neural networks, and large-scale labelled datasets. Yet obtaining labelled datasets is expensive, does not scale well, and should not be necessary …
External link:
https://explore.openaire.eu/search/publication?articleId=od______1064::fe6f567c65a0be37e160e8fd2a0aeec3
https://ora.ox.ac.uk/objects/uuid:3afdf0e8-3239-436b-bd4e-c79a8b32d8cd
The dominant paradigm for learning video-text representations -- noise contrastive learning -- increases the similarity of the representations of pairs of samples that are known to be related, such as text and video from the same sample, and pushes apart …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::464f030008fe3d7cce784e60a34a3b38
http://arxiv.org/abs/2010.02824
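Since this abstract describes noise contrastive learning in words, here is a minimal sketch of a symmetric InfoNCE-style objective in PyTorch: matched video/text pairs in a batch are pulled together, and all other pairings are pushed apart. The function name, temperature value, and embedding shapes are assumptions for illustration, not the exact loss used in the paper.

    import torch
    import torch.nn.functional as F

    def info_nce(video_emb, text_emb, temperature=0.07):
        # Generic symmetric contrastive loss over a batch of (B, D)
        # embeddings: diagonal entries of the similarity matrix are
        # the matched pairs, everything else acts as a negative.
        v = F.normalize(video_emb, dim=-1)
        t = F.normalize(text_emb, dim=-1)
        logits = v @ t.T / temperature        # (B, B) similarities
        targets = torch.arange(v.size(0))     # diagonal = matches
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.T, targets)) / 2

    # dummy embeddings for a batch of 8 video/text pairs, dim 256
    loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))

Cross-entropy over the rows treats video-to-text retrieval as classification; the transposed term does the same for text-to-video.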
We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::03fa6b2d4371b9f8cc9d24ac9ebf9e09
https://ora.ox.ac.uk/objects/uuid:6a3bcf03-9360-433f-9dbf-3a6814a32412
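One of the methods named in this abstract, RotNet, learns features by predicting image rotations; the sketch below shows that pretext task in PyTorch. The tiny network and dummy data are illustrative stand-ins, not the architecture evaluated in the paper.

    import torch
    import torch.nn as nn

    # Rotation-prediction pretext task: rotate each image by one of
    # 0/90/180/270 degrees and train a 4-way classifier to recover
    # the rotation, so no manual labels are needed.
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 4),  # 4-way rotation classification
    )

    x = torch.randn(8, 3, 32, 32)          # dummy images (CHW)
    k = torch.randint(0, 4, (8,))          # rotation labels 0..3
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(x, k)])
    loss = nn.functional.cross_entropy(net(rotated), k)
    loss.backward()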
A large part of the current success of deep learning lies in the effectiveness of data -- more precisely: labelled data. Yet, labelling a dataset with human annotation continues to carry high costs, especially for videos. While in the image domain, recent …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4540978248b3d9f72b569ebba1c2711e
http://arxiv.org/abs/2006.13662
Author:
Asano YM (Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, United Kingdom; yuki@robots.ox.ac.uk; FutureLab on Game Theory and Networks of Interacting Agents, Potsdam Institute for Climate Impact Research, D-14412 Potsdam, Germany; Business Administration and Economics, FernUniversität in Hagen, D-58097 Hagen, Germany)
Kolb JJ (FutureLab on Game Theory and Networks of Interacting Agents, Potsdam Institute for Climate Impact Research, D-14412 Potsdam, Germany; Department of Physics, Humboldt-Universität zu Berlin, D-10099 Berlin, Germany)
Heitzig J (FutureLab on Game Theory and Networks of Interacting Agents, Potsdam Institute for Climate Impact Research, D-14412 Potsdam, Germany)
Farmer JD (Institute for New Economic Thinking at the Oxford Martin School, University of Oxford, Oxford OX1 3UQ, United Kingdom; Mathematical Institute, University of Oxford, Oxford OX2 6GG, United Kingdom; Santa Fe Institute, Santa Fe, NM 87501)
Published in:
Proceedings of the National Academy of Sciences of the United States of America [Proc Natl Acad Sci U S A] 2021 Jul 06; Vol. 118 (27).