Automatic Evaluation of Interpretability Methods in Text Categorization.

Author: Rogov, A., Loukachevitch, N.
Source: Journal of Mathematical Sciences; Oct 2024, Vol. 285, Issue 2, p201-209, 9p
Abstract: Neural networks play an ever larger role in everyday life, and their complexity continues to grow. A model may show quite decent performance when tested on collected test data, yet produce completely unexpected results when used in real-life conditions. To determine the cause of such errors, it is important to understand how the model makes its decisions. In this work, we consider various methods of interpreting the BERT model in classification tasks, and we also consider a method for evaluating interpretation methods using fastText and GloVe vector representations. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
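
The abstract names the approach only at a high level; the Python fragment below is a minimal illustrative sketch, not the authors' pipeline. It computes gradient-times-input token attributions for a BERT sentence classifier and then scores the top-attributed tokens by their cosine similarity to a class "anchor" word in a static embedding space such as fastText or GloVe. The checkpoint name, the anchor-word choice, and the top-k agreement score are assumptions made for illustration.

import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "textattack/bert-base-uncased-SST-2"  # assumed example checkpoint, not from the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def gradient_x_input(text, target_class):
    # One simple interpretation method: gradient-times-input attributions
    # over the token embeddings of a BERT classifier.
    enc = tokenizer(text, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
    embeds.requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    logits[0, target_class].backward()
    scores = (embeds.grad * embeds).sum(dim=-1).squeeze(0)  # per-token attribution
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return tokens, scores.detach().numpy()

def embedding_agreement(tokens, scores, static_vectors, anchor_vector, top_k=3):
    # Toy automatic evaluation: average cosine similarity between the top-k
    # attributed tokens and a class "anchor" vector taken from a static
    # embedding space (fastText or GloVe), passed in here as a plain dict.
    sims = []
    for i in np.argsort(-scores):
        vec = static_vectors.get(tokens[i].lstrip("#"))
        if vec is None:
            continue  # skip subwords/punctuation missing from the static vocabulary
        cos = np.dot(vec, anchor_vector) / (
            np.linalg.norm(vec) * np.linalg.norm(anchor_vector) + 1e-9)
        sims.append(float(cos))
        if len(sims) == top_k:
            break
    return float(np.mean(sims)) if sims else 0.0

For example, for a sentiment classifier one might use the GloVe vector of "good" as the positive-class anchor; a higher agreement score then suggests that the interpretation method highlights tokens semantically related to the predicted class.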