TC-Net: Detecting Noisy Labels Via Transform Consistency.

Author: Yi, Rumeng; Huang, Yaping
Source: IEEE Transactions on Multimedia; 2022, Vol. 24, p. 4328-4341, 14 p.
Abstract: Distinguishing mislabeled samples is crucial for learning with noisy labels. Previous methods such as "Co-teaching" and "JoCoR" introduce two different networks to select clean samples out of the noisy ones and use only these clean samples to train the deep models. Unlike these methods, which require training two networks simultaneously, we propose a simple and effective method that identifies clean samples using only a single network. We observe that clean samples tend to yield consistent predictions for the original images and their transformed versions, while noisy samples usually suffer from inconsistent predictions. Motivated by this observation, we propose a noisy label detection approach, named Transform Consistency Network (TC-Net), which constrains the transform consistency (i.e., category consistency and visual attention consistency) between the original and transformed images during network training. We can then select small-loss samples to update the network parameters. Furthermore, to mitigate the negative influence of noisy labels, we design a classification loss that combines off-line hard labels and on-line soft labels to provide more reliable supervision for training a robust model. We conduct comprehensive experiments on the CIFAR-10, CIFAR-100 and Clothing1M datasets. Compared with clean-sample-selection baselines, we achieve state-of-the-art performance; in most cases, our proposed method outperforms the baselines by a large margin. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
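
Note: The following is a minimal illustrative sketch, not the authors' released code, of the core idea the abstract describes: scoring each sample by how much the network's prediction changes under a label-preserving transform (here a horizontal flip is assumed), then keeping the most consistent fraction in the spirit of small-loss selection. The symmetric-KL score, the `keep_ratio` parameter, and the helper names are assumptions for illustration; TC-Net additionally enforces visual attention consistency and a hard/soft-label classification loss, both omitted here.

    import torch
    import torch.nn.functional as F

    def consistency_scores(model: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
        # Per-sample inconsistency between predictions on the original images
        # and their horizontally flipped views; a smaller score means more
        # consistent predictions, hence a more likely clean label.
        model.eval()
        with torch.no_grad():
            p = F.softmax(model(images), dim=1)
            q = F.softmax(model(torch.flip(images, dims=[3])), dim=1)  # flip width (NCHW)
            # Symmetric KL divergence as the inconsistency measure (our choice;
            # the paper's visual attention consistency term is not modeled here).
            score = (F.kl_div(q.log(), p, reduction="none").sum(dim=1)
                     + F.kl_div(p.log(), q, reduction="none").sum(dim=1))
        return score

    def select_clean_indices(model: torch.nn.Module, images: torch.Tensor,
                             keep_ratio: float = 0.7) -> torch.Tensor:
        # Keep the keep_ratio fraction of the batch with the smallest
        # inconsistency, mirroring the small-loss selection step in the abstract.
        scores = consistency_scores(model, images)
        k = max(1, int(keep_ratio * scores.numel()))
        return torch.topk(scores, k, largest=False).indices

In a training loop, such a selector would be applied per batch, and only the returned indices would contribute to the parameter update; the single-network design is what distinguishes this family of methods from two-network schemes like Co-teaching and JoCoR.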