Multimodal Prototypical Networks for Few-shot Learning
Author: | Mihai Puscas, Moin Nabi, Tassilo Klein, Frederik Pahde |
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Computer Vision and Pattern Recognition (cs.CV); Machine learning; Deep learning; Computer vision; Feature vector; k-nearest neighbors algorithm; Modality (human–computer interaction); Modalities; Generative model; Visualization; Artificial intelligence |
Source: | WACV |
DOI: | 10.48550/arxiv.2011.08899 |
Description: | Although providing exceptional results for many computer vision tasks, state-of-the-art deep learning algorithms struggle catastrophically in low-data scenarios. However, if data in an additional modality (e.g. text) exists, it can compensate for the lack of visual data and improve classification results. To overcome this data scarcity, we design a cross-modal feature generation framework capable of enriching the sparsely populated embedding space in few-shot scenarios by leveraging data from the auxiliary modality. Specifically, we train a generative model that maps text data into the visual feature space to obtain more reliable prototypes. This allows us to exploit data from additional modalities (e.g. text) during training, while the ultimate task at test time remains classification with exclusively visual data. We show that in such cases nearest neighbor classification is a viable approach and outperforms state-of-the-art single-modal and multimodal few-shot learning methods on the CUB-200 and Oxford-102 datasets. Comment: To appear at WACV 2021 |
Database: | OpenAIRE |
External link: |
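
The description above outlines a concrete pipeline: a generative model maps text embeddings into the visual feature space, the generated features enrich each class's prototype, and test-time queries are classified by nearest prototype using visual features alone. Below is a minimal, hypothetical PyTorch sketch of that pipeline. The architecture, feature dimensions, training loss, and all names (`TextToVisualGenerator`, `build_prototypes`, `nearest_prototype`) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the cross-modal prototype pipeline described above.
# Dimensions, architecture, and training loss are illustrative assumptions.
import torch
import torch.nn as nn

class TextToVisualGenerator(nn.Module):
    """Maps text embeddings into the visual feature space (assumed MLP)."""
    def __init__(self, text_dim=300, visual_dim=512, hidden_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, visual_dim),
        )

    def forward(self, text_emb):
        return self.net(text_emb)

def build_prototypes(visual_feats, text_feats, labels, generator, num_classes):
    """Class prototypes as the mean of real visual features and
    text-conditioned generated features, enriching sparse classes."""
    generated = generator(text_feats)              # text -> visual space
    all_feats = torch.cat([visual_feats, generated], dim=0)
    all_labels = torch.cat([labels, labels], dim=0)
    return torch.stack([
        all_feats[all_labels == c].mean(dim=0) for c in range(num_classes)
    ])

def nearest_prototype(query_feats, protos):
    """Nearest-neighbor classification against class prototypes."""
    dists = torch.cdist(query_feats, protos)       # Euclidean distances
    return dists.argmin(dim=1)

# Toy usage: random tensors stand in for a 5-way, 2-shot episode.
if __name__ == "__main__":
    torch.manual_seed(0)
    gen = TextToVisualGenerator()
    visual = torch.randn(10, 512)                  # 5 classes x 2 shots
    text = torch.randn(10, 300)                    # paired text embeddings
    labels = torch.arange(5).repeat_interleave(2)
    # Generator training is omitted; one illustrative regression step only.
    loss = nn.functional.mse_loss(gen(text), visual)
    loss.backward()
    with torch.no_grad():
        protos = build_prototypes(visual, text, labels, gen.eval(), num_classes=5)
        queries = torch.randn(4, 512)              # visual-only at test time
        print(nearest_prototype(queries, protos))
```

Note the design point carried over from the description: text is consumed only while building prototypes, so inference requires nothing beyond visual query features.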