Learning Adaptive Classifiers Synthesis for Generalized Few-Shot Learning
Author: | Hexiang Hu, Han-Jia Ye, De-Chuan Zhan |
---|---|
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences; Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); few-shot learning; learning to learn; visual object recognition; classifier synthesis; artificial intelligence; pattern recognition |
Source: | International Journal of Computer Vision. 129:1930-1953 |
ISSN: | 1573-1405; 0920-5691 |
Description: | Object recognition in the real world requires handling long-tailed or even open-ended data. An ideal visual system needs to recognize the populous head visual concepts reliably while efficiently learning emerging tail categories from a few training instances. Class-balanced many-shot learning and few-shot learning each tackle one side of this problem, by either learning strong classifiers for the head or learning to learn few-shot classifiers for the tail. In this paper, we investigate the problem of generalized few-shot learning (GFSL) -- during deployment, a model is required to learn tail categories from few shots while simultaneously classifying the head classes. We propose ClAssifier SynThesis LEarning (CASTLE), a learning framework that learns to synthesize calibrated few-shot classifiers, in addition to the multi-class classifiers of the head classes, using a shared neural dictionary, shedding light on inductive GFSL. Furthermore, we propose an adaptive version of CASTLE (ACASTLE) that adapts the head classifiers conditioned on the incoming tail training examples, yielding a framework that allows effective backward knowledge transfer. As a consequence, ACASTLE can effectively handle GFSL with classes from heterogeneous domains. CASTLE and ACASTLE demonstrate superior performance over existing GFSL algorithms and strong baselines on the MiniImageNet and TieredImageNet datasets. More interestingly, they outperform previous state-of-the-art methods when evaluated with standard few-shot learning criteria. Accepted by IJCV. The code is available at https://github.com/Sha-Lab/aCASTLE |
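To make the classifier-synthesis idea concrete, the following is a minimal NumPy sketch of the synthesis step described in the abstract: a tail class's few support embeddings are averaged into a prototype, which attends over a shared (learned) neural dictionary to produce a calibrated classifier weight that is then used jointly with the head classifiers. The dictionary sizes, attention form, and cosine-similarity classification here are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 64, 32                      # embedding dim, dictionary size (assumed)
keys = rng.normal(size=(k, d))     # neural dictionary keys (learned in practice)
values = rng.normal(size=(k, d))   # neural dictionary values (learned in practice)

def synthesize_classifier(support_embeddings):
    """Synthesize a classifier weight for one tail class from its few-shot
    support embeddings: average into a prototype, attend over the shared
    dictionary, and add the retrieved residual. A rough sketch of the
    synthesis step, not the paper's exact model."""
    proto = support_embeddings.mean(axis=0)          # class prototype
    logits = keys @ proto / np.sqrt(d)               # scaled attention scores
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()                               # softmax over dictionary slots
    w = proto + attn @ values                        # prototype + retrieved residual
    return w / np.linalg.norm(w)                     # unit norm for cosine classification

# 5-shot support set for one new (tail) class
support = rng.normal(size=(5, d))
w_tail = synthesize_classifier(support)

# Joint classification over head (many-shot) and synthesized tail classifiers,
# as GFSL requires: one score vector spanning both head and tail classes.
head_w = rng.normal(size=(10, d))
head_w /= np.linalg.norm(head_w, axis=1, keepdims=True)
all_w = np.vstack([head_w, w_tail])
query = rng.normal(size=(d,))
scores = all_w @ (query / np.linalg.norm(query))     # cosine similarities
pred = int(scores.argmax())                          # joint head+tail prediction
```

ACASTLE's backward transfer would additionally update `head_w` conditioned on the tail support set; the sketch above keeps the head classifiers fixed, as in the base CASTLE formulation.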
Database: | OpenAIRE |
External link: |