Author: |
Wang, Quan, Mao, Zhendong, Gao, Jie, Zhang, Yongdong |
Source: |
ACM Transactions on Information Systems; Nov2024, Vol. 42 Issue 6, p1-34, 34p |
Abstract: |
The article focuses on a training regime called Progressive Self-Distillation (PSD), which enhances document-level relation extraction (RE) by leveraging soft labels generated from an RE model's own predictions over time. Topics include the limitations of hard-label training for capturing nuanced relationships in the no-relation (NR) class and correlations among other relation classes, and how PSD uses self-knowledge distillation to soften labels progressively during training. |
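To make the abstract's core idea concrete, the following is a minimal PyTorch sketch of self-knowledge distillation with a progressively increasing soft-label weight. It is not the paper's implementation: the function names (psd_loss, alpha_at), the linear ramp schedule, the temperature value, and the choice of previous-epoch logits as the "teacher" are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def psd_loss(logits, prev_logits, hard_labels, alpha, temperature=2.0):
    """Blend hard-label cross-entropy with a self-distillation term.

    logits:      current model outputs, shape (batch, num_classes)
    prev_logits: the model's own earlier predictions (e.g. from the
                 previous epoch), same shape; treated as a fixed teacher
    hard_labels: gold class indices, shape (batch,)
    alpha:       weight of the soft term, ramped up over training so
                 that labels soften progressively (assumed schedule)
    """
    # Standard hard-label cross-entropy term.
    ce = F.cross_entropy(logits, hard_labels)

    # KL divergence to the model's own temperature-softened earlier
    # predictions (the self-distillation term).
    soft_targets = F.softmax(prev_logits.detach() / temperature, dim=-1)
    log_probs = F.log_softmax(logits / temperature, dim=-1)
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

    return (1.0 - alpha) * ce + alpha * kd

def alpha_at(epoch, num_epochs, alpha_max=0.5):
    # Hypothetical linear ramp: rely more on self-distilled soft
    # labels as training progresses.
    return alpha_max * epoch / max(1, num_epochs - 1)
```

Under this reading, early epochs train mostly on hard labels, while later epochs increasingly match the model's own softened past predictions, which can encode graded confidence for the NR class and correlations among relation classes. |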
Database: |
Complementary Index |
External link: |
|