CAMRI Loss: Improving the Recall of a Specific Class without Sacrificing Accuracy
Author: Daiki Nishiyama, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
Year of publication: 2023
Subject:
Source: IEICE Transactions on Information and Systems, pp. 523-537
ISSN: 1745-1361; 0916-8532
Description: In real-world applications of multi-class classification models, misclassification of an important class (e.g., a stop sign) can be far more harmful than misclassification of other classes (e.g., a speed limit sign). In this paper, we propose a loss function that improves the recall of an important class while maintaining the same level of accuracy as standard cross-entropy loss. To achieve this, the important class must be separated from the others more strongly than the remaining classes are from each other. Existing methods that add a class-sensitive penalty to cross-entropy loss do not improve this separation, whereas the approach that adds a margin to the angle between the feature vectors and the corresponding weight vectors of the last fully connected layer does. We therefore propose Class-sensitive Additive Angular Margin Loss (CAMRI Loss), which improves the separation of the important class by applying the angular margin only to that class. By penalizing the angle, CAMRI loss is expected to reduce the variance of the angles between features and the weight vector of the important class relative to the other classes, creating a margin around the important class in feature space. Moreover, because the penalty is concentrated on the important class alone, the separation of the other classes is hardly sacrificed. Experiments on CIFAR-10, GTSRB, and AwA2 showed that the proposed method improves recall by up to 9% over cross-entropy loss without sacrificing accuracy. Presented at the 2022 International Joint Conference on Neural Networks (IJCNN 2022). (An illustrative sketch of the class-selective angular margin appears after this record.)
Database: OpenAIRE
External link:
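Below is a minimal, hypothetical PyTorch sketch of the idea described in the abstract: an ArcFace-style additive angular margin that is applied only when the ground-truth label is the designated important class, with cross-entropy computed on the resulting scaled cosine logits. The class name `CAMRIStyleMargin` and the parameters `margin`, `scale`, and `important_class`, as well as the normalization and scaling choices, are assumptions made for illustration, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CAMRIStyleMargin(nn.Module):
    """Illustrative sketch: replaces the last fully connected layer plus loss.

    Adds an additive angular margin to the ground-truth logit, but only for
    samples whose label is the designated important class (assumption:
    ArcFace-style normalization and scaling; the paper's exact form may differ).
    """

    def __init__(self, in_features: int, num_classes: int,
                 important_class: int, margin: float = 0.2, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, in_features))
        nn.init.xavier_uniform_(self.weight)
        self.important_class = important_class
        self.margin = margin
        self.scale = scale

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine of the angle between L2-normalized features and class weights.
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))

        # Add the margin to the ground-truth angle only when that ground
        # truth is the important class; all other angles are left untouched.
        rows = torch.arange(features.size(0), device=features.device)
        extra = torch.zeros_like(theta)
        extra[rows, labels] = self.margin * (labels == self.important_class).float()

        logits = self.scale * torch.cos(theta + extra)
        return F.cross_entropy(logits, labels)
```

In training, such a module would take the penultimate-layer features of the backbone network and the integer class labels; with `margin=0` it reduces to softmax cross-entropy on scaled cosine logits, which is the baseline the abstract compares against.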