Author:
Rishabh Bhardwaj, Tushar Vaidya, Soujanya Poria
Language:
English
Year of publication:
2022
Source:
Journal of King Saud University: Computer and Information Sciences, Vol 34, Iss 10, Pp 10434-10443 (2022)
Document type:
article
ISSN:
1319-1578
DOI:
10.1016/j.jksuci.2022.10.031
Description:
Loss functions are essential for computing the divergence of a model's predicted distribution from the ground truth. Such functions play a vital role in machine learning algorithms as they steer the learning process. The most common loss functions in natural language processing (NLP), such as the Kullback–Leibler (KL) and Jensen–Shannon (JS) divergences, do not base their computations on the properties of label coordinates. Label coordinates can encode inter-label relationships: in sentiment classification, for example, strongly positive sentiment is closer to positive sentiment than to strongly negative sentiment. Incorporating such information into the computation of the probability divergence can improve the model's learning dynamics. In this work, we study an under-explored loss function in NLP, Wasserstein Optimal Transport (OT), which takes label coordinates into account and thus allows the learning algorithm to incorporate inter-label relations. However, OT-based losses have seen limited application because of the challenge of defining high-quality label coordinates. We explore the current limitations of learning with OT and provide an algorithm that jointly learns label coordinates with the model parameters. We show the efficacy of OT on several text classification tasks, such as sentiment analysis and emotion recognition in conversation. We also discuss the limitations of the approach. The source code for this work is publicly available at: https://github.com/declare-lab/NLP-OT.
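The following is a minimal sketch of the idea the abstract describes: an entropic-regularized OT (Sinkhorn) loss whose ground-cost matrix is derived from learnable label coordinates, so the inter-label geometry is trained jointly with the classifier. It is an illustrative reconstruction, not the authors' implementation (see https://github.com/declare-lab/NLP-OT for that); the class name SinkhornOTLoss and parameters label_dim, eps, and n_iters are assumptions made here for clarity.

```python
# Illustrative sketch only; not the implementation from the paper or its repo.
import torch
import torch.nn as nn

class SinkhornOTLoss(nn.Module):
    def __init__(self, num_labels: int, label_dim: int = 8,
                 eps: float = 0.1, n_iters: int = 50):
        super().__init__()
        # Learnable label coordinates, optimized jointly with model parameters.
        self.label_coords = nn.Parameter(torch.randn(num_labels, label_dim))
        self.eps = eps          # entropic regularization strength (assumed value)
        self.n_iters = n_iters  # number of Sinkhorn iterations (assumed value)

    def cost_matrix(self) -> torch.Tensor:
        # Ground cost: pairwise squared Euclidean distance between label coordinates.
        return torch.cdist(self.label_coords, self.label_coords, p=2) ** 2

    def forward(self, pred_probs: torch.Tensor, true_probs: torch.Tensor):
        # pred_probs, true_probs: (batch, num_labels); each row sums to 1.
        C = self.cost_matrix()                # (L, L)
        K = torch.exp(-C / self.eps)          # Gibbs kernel
        u = torch.ones_like(pred_probs)
        for _ in range(self.n_iters):         # Sinkhorn fixed-point updates
            v = true_probs / (u @ K + 1e-9)
            u = pred_probs / (v @ K.T + 1e-9)
        # Transport plan P = diag(u) K diag(v); loss = <P, C> per example.
        P = u.unsqueeze(2) * K.unsqueeze(0) * v.unsqueeze(1)
        return (P * C.unsqueeze(0)).sum(dim=(1, 2)).mean()
```

In use, such a loss could replace a KL/cross-entropy term: with probs = softmax(logits) and smoothed one-hot targets, SinkhornOTLoss(num_labels=5)(probs, targets) penalizes mass moved between distant labels (e.g., strongly positive to strongly negative) more than mass moved between nearby ones.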
Database:
Directory of Open Access Journals