MV–MR: Multi-Views and Multi-Representations for Self-Supervised Learning and Knowledge Distillation

Authors: Vitaliy Kinakh, Mariia Drozdova, Slava Voloshynovskiy
Language: English
Publication year: 2024
Source: Entropy, Vol 26, Iss 6, p 466 (2024)
Document type: article
ISSN: 1099-4300
DOI: 10.3390/e26060466
Description: We present a new method for self-supervised learning and knowledge distillation based on multi-views and multi-representations (MV–MR). MV–MR is based on maximizing the dependence between learnable embeddings from augmented and non-augmented views, jointly with maximizing the dependence between learnable embeddings from the augmented view and multiple non-learnable representations of the non-augmented view. We show that the proposed method can be used for efficient self-supervised classification and model-agnostic knowledge distillation. Unlike other self-supervised techniques, our approach does not use any contrastive learning, clustering, or stop gradients. MV–MR is a generic framework that allows constraints to be imposed on the learnable embeddings by using image multi-representations as regularizers. MV–MR achieves state-of-the-art self-supervised performance on the STL10 and CIFAR20 datasets in a linear evaluation setup. We also show that a low-complexity ResNet50 model, pretrained with the proposed knowledge distillation from a CLIP ViT model, achieves state-of-the-art performance on the STL10 and CIFAR100 datasets.
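The dependence-maximization objective described above can be illustrated with a distance-correlation measure between two batches of embeddings (e.g., from an augmented and a non-augmented view). This is a minimal NumPy sketch for illustration only, not the authors' implementation; the choice of distance correlation as the dependence measure here is an assumption.

```python
import numpy as np

def _double_centered_dist(x):
    # Pairwise Euclidean distance matrix of the rows of x, double-centered
    d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1))
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    # Sample (V-statistic) distance correlation between two batches of
    # embeddings; returns a value in [0, 1], equal to 1 when the inputs
    # are related by scaling and translation.
    a = _double_centered_dist(x)
    b = _double_centered_dist(y)
    dcov2 = (a * b).mean()  # squared distance covariance
    denom = np.sqrt((a * a).mean() * (b * b).mean())
    return 0.0 if denom == 0 else float(np.sqrt(max(dcov2, 0.0) / denom))
```

In a training loop, one would maximize this quantity (e.g., minimize its negative as a loss term) between embeddings of the two views, and likewise between the augmented-view embeddings and each fixed, non-learnable representation.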
Database: Directory of Open Access Journals