Showing 1 - 10 of 280 for search: '"Komodakis, Nikos"'
Author:
Schmidt-Mengin, Marius, Benichoux, Alexis, Belachew, Shibeshih, Komodakis, Nikos, Paragios, Nikos
Annotating lots of 3D medical images for training segmentation models is time-consuming. The goal of weakly supervised semantic segmentation is to train segmentation models without using any ground truth segmentation masks. Our work addresses the cas…
External link:
http://arxiv.org/abs/2404.13103
Unsupervised object-centric learning aims to decompose scenes into interpretable object entities, termed slots. Slot-based auto-encoders stand out as a prominent method for this task. Within them, crucial aspects include guiding the encoder to genera…
External link:
http://arxiv.org/abs/2312.00648
Author:
Gidaris, Spyros, Bursuc, Andrei, Simeoni, Oriane, Vobecky, Antonin, Komodakis, Nikos, Cord, Matthieu, Pérez, Patrick
Self-supervised learning can be used for mitigating the greedy needs of Vision Transformer networks for very large fully-annotated datasets. Different classes of self-supervised learning offer representations with either good contextual reasoning pro…
External link:
http://arxiv.org/abs/2307.09361
Author:
Kakogeorgiou, Ioannis, Gidaris, Spyros, Psomas, Bill, Avrithis, Yannis, Bursuc, Andrei, Karantzalos, Konstantinos, Komodakis, Nikos
Published in:
European Conference on Computer Vision (2022)
Transformers and masked language modeling are quickly being adopted and explored in computer vision as vision transformers and masked image modeling (MIM). In this work, we argue that image token masking differs from token masking in text, due to the…
External link:
http://arxiv.org/abs/2203.12719
Author:
Gidaris, Spyros, Bursuc, Andrei, Puy, Gilles, Komodakis, Nikos, Cord, Matthieu, Pérez, Patrick
Learning image representations without human supervision is an important and active research field. Several recent approaches have successfully leveraged the idea of making such a representation invariant under different types of perturbations, espec…
External link:
http://arxiv.org/abs/2012.11552
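The snippet above only names the invariance idea: two perturbed views of the same image should map to similar representations. As a toy sketch under stated assumptions (a hypothetical linear encoder and Gaussian-noise augmentation stand in for the actual convnet and image perturbations, which the truncated abstract does not specify):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, noise=0.1):
    # hypothetical perturbation: additive Gaussian noise on the input
    return x + noise * rng.standard_normal(x.shape)

def encode(x, W):
    # toy linear encoder standing in for a convnet
    return x @ W

def invariance_loss(z1, z2):
    # negative mean cosine similarity between representations of two views;
    # minimizing it pushes the two views' representations together
    z1 = z1 / np.linalg.norm(z1, axis=-1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=-1, keepdims=True)
    return -np.mean(np.sum(z1 * z2, axis=-1))

x = rng.standard_normal((4, 8))          # a small batch of "images"
W = rng.standard_normal((8, 3))          # encoder weights
loss = invariance_loss(encode(augment(x), W), encode(augment(x), W))
```

In a real method, an additional term (e.g. a contrastive negative pair or a redundancy penalty) prevents the trivial constant-representation solution; this sketch shows only the invariance term itself.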
Self-supervised representation learning aims to learn convnet-based image representations from unlabeled data. Inspired by the success of NLP methods in this area, in this work we propose a self-supervised approach based on spatially dense image d…
External link:
http://arxiv.org/abs/2002.12247
Knowledge distillation refers to the process of training a compact student network to achieve better accuracy by learning from a high capacity teacher network. Most of the existing knowledge distillation methods direct the student to follow the teach…
External link:
http://arxiv.org/abs/1912.01540
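The snippet above defines knowledge distillation only in one sentence. As a hedged illustration of the classic baseline it builds on (not this paper's own method, which the truncated abstract does not describe), a minimal NumPy sketch of the temperature-softened distillation loss:

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled softmax over the last axis (numerically stable)
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) between temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

teacher = np.array([[1.0, 1.0, 0.2]])    # hypothetical teacher logits
student = np.array([[2.0, 0.5, 0.1]])    # hypothetical student logits
loss = distillation_loss(student, teacher)
```

In training, this term is typically added to the ordinary cross-entropy on ground-truth labels, so the student matches both the hard labels and the teacher's softened output distribution.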
Author:
Rana, Aakanksha, Singh, Praveer, Valenzise, Giuseppe, Dufaux, Frederic, Komodakis, Nikos, Smolic, Aljosa
A computationally fast tone mapping operator (TMO) that can quickly adapt to a wide spectrum of high dynamic range (HDR) content is quintessential for visualization on varied low dynamic range (LDR) output devices such as movie screens or standard di…
External link:
http://arxiv.org/abs/1908.04197
Few-shot learning and self-supervised learning address different facets of the same problem: how to train a model with little or no labeled data. Few-shot learning aims for optimization methods and models that can learn efficiently to recognize patte…
External link:
http://arxiv.org/abs/1906.05186