Author:
Naim, Omar; Asher, Nicholas
Publication year:
2024
Subject:

Source:
Proceedings of ECAI 2024, Frontiers in Artificial Intelligence and Applications, pp. 1035-1042
Document type:
Working Paper
DOI:
10.3233/FAIA240594
Description:
This paper explores the much-discussed possible explanatory link between attention weights (AW) in transformer models and predicted output. Contrary to intuition and early research on attention, more recent work has offered formal arguments and empirical evidence that AW are not explanatorily relevant. We show that those formal arguments are incorrect. We introduce and effectively compute efficient attention, which isolates the effective components of attention matrices in tasks and models where AW play an explanatory role. We show that efficient attention has a causal role (it provides minimally necessary and sufficient conditions) for predicting model output in NLP tasks requiring contextual information, and we show, contrary to [7], that efficient attention matrices are probability distributions and are effectively calculable. They should therefore play an important part in explaining the behavior of attention-based models. We support our method with empirical experiments illustrating various properties of efficient attention, using several metrics on four datasets.
Database:
arXiv
External link:
