Open-set long-tailed recognition via orthogonal prototype learning and false rejection correction.
Author: | Deng B; School of Computer and Information Engineering, Henan University, 475004, Kaifeng, China. Electronic address: bqdeng@henu.edu.cn., Kamel A; School of Computer and Information Engineering, Henan University, 475004, Kaifeng, China. Electronic address: kameldz40@gmail.com., Zhang C; School of Computer and Information Engineering, Henan University, 475004, Kaifeng, China. Electronic address: cszhang@ieee.org. |
---|---|
Language: | English |
Source: | Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2024 Oct 11; Vol. 181, pp. 106789. Date of Electronic Publication: 2024 Oct 11. |
DOI: | 10.1016/j.neunet.2024.106789 |
Abstract: | Learning from data with long-tailed and open-ended distributions is highly challenging. In this work, we propose OLPR, a new dual-stream Open-set Long-tailed recognition framework based on orthogonal Prototype learning and false Rejection correction. It consists of a Probabilistic Prediction Learning (PPL) branch and a Distance Metric Learning (DML) branch. The former generates prediction probabilities for image classification. The latter learns orthogonal prototypes for each class using three distance losses: an orthogonal prototype loss among all prototypes, a balanced Softmin-distance-based cross-entropy loss between each prototype and its corresponding input samples, and an adversarial loss that makes the open-set space more compact. Furthermore, for open-set learning, instead of merely relying on binary decisions, we propose an Iterative Clustering Module (ICM) to categorize similar open-set samples and correct falsely rejected closed-set samples simultaneously. If a sample is detected as a false rejection, i.e., a sample from a known class is incorrectly identified as belonging to the unknown classes, it is re-classified to the closest known (closed-set) class. We conduct extensive experiments on the ImageNet-LT, Places-LT, and CIFAR-10/100-LT benchmark datasets, as well as a new long-tailed open-ended dataset that we build. Experimental results demonstrate that OLPR improves over the best competitors by up to 2.2% in overall classification accuracy in closed-set settings and by up to 4% in F-measure in open-set settings. Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (Copyright © 2024. Published by Elsevier Ltd.) |
Database: | MEDLINE |
External link: |
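
To make the abstract above more concrete, the following is a minimal, hypothetical sketch of two of the distance losses it mentions: an orthogonal prototype loss among class prototypes and a Softmin-distance-based cross-entropy. The class and function names, the inverse-frequency weighting standing in for the "balanced" term, and all hyperparameters are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of two distance losses described in the abstract.
# OrthogonalPrototypes, softmin_ce, and the inverse-frequency weighting
# are assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OrthogonalPrototypes(nn.Module):
    """Learnable per-class prototypes with an orthogonality penalty."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

    def orthogonality_loss(self) -> torch.Tensor:
        # Push normalized prototypes toward mutual orthogonality by
        # penalizing off-diagonal entries of their Gram matrix.
        p = F.normalize(self.prototypes, dim=1)             # (C, D)
        gram = p @ p.t()                                     # (C, C)
        off_diag = gram - torch.eye(gram.size(0), device=gram.device)
        return off_diag.pow(2).mean()

    def softmin_ce(self, feats, labels, class_counts=None, tau=1.0):
        # Softmin over Euclidean distances to the prototypes: closer
        # prototypes receive higher probability. Optional inverse-frequency
        # class weights stand in for the "balanced" term (an assumption).
        dists = torch.cdist(feats, self.prototypes)          # (B, C)
        logits = -dists / tau                                # softmin == softmax(-d)
        weight = None
        if class_counts is not None:
            weight = class_counts.sum() / (len(class_counts) * class_counts)
        return F.cross_entropy(logits, labels, weight=weight)


# Usage sketch: combine the two losses for one mini-batch of embeddings.
if __name__ == "__main__":
    proto = OrthogonalPrototypes(num_classes=10, feat_dim=128)
    feats = torch.randn(32, 128)                     # embeddings from a DML-style branch
    labels = torch.randint(0, 10, (32,))
    counts = torch.randint(5, 500, (10,)).float()    # long-tailed class counts
    loss = proto.softmin_ce(feats, labels, counts) + 0.1 * proto.orthogonality_loss()
    loss.backward()
```

The adversarial loss that compacts the open-set space and the Iterative Clustering Module for correcting false rejections are not sketched here, as the abstract does not give enough detail to reconstruct them without speculation.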