Attention Visualizer Package: Revealing Word Importance for Deeper Insight into Encoder-Only Transformer Models
Authors: Falaki, Ala Alam; Gras, Robin
Year of publication: 2023
Document type: Working Paper
Description: This report introduces the Attention Visualizer package, which is designed to visually illustrate the importance of individual words in encoder-only transformer-based models. In contrast to other methods that center on tokens and self-attention scores, our approach examines whole words and their impact on the final embedding representation. Libraries such as this play a crucial role in improving the interpretability and explainability of neural networks, illuminating their internal mechanisms and offering a better understanding of how they operate and how they can be improved. The code and examples are available at the following GitHub repository: https://github.com/AlaFalaki/AttentionVisualizer. Comment: 12 pages, 15 figures
Database: arXiv
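The package's own API is documented in the repository linked above. As a rough, hypothetical illustration of the word-level idea the abstract describes, the sketch below scores each word by how much its removal shifts an encoder-only model's sentence embedding. The model choice (`bert-base-uncased`), mean pooling, the leave-one-word-out occlusion scheme, and the helper names `sentence_embedding` and `word_importance` are all assumptions made for this sketch, not the package's actual method.

```python
# Hedged sketch: word importance via leave-one-word-out occlusion.
# Assumes the Hugging Face transformers library and an encoder-only
# checkpoint; this is NOT the AttentionVisualizer package's API.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any encoder-only model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pool the last hidden state into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def word_importance(text: str) -> list[tuple[str, float]]:
    """Score each word by how far its removal moves the sentence embedding."""
    full = sentence_embedding(text)
    words = text.split()
    scores = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        emb = sentence_embedding(ablated)
        # Larger cosine distance -> the word mattered more to the embedding.
        dist = 1.0 - torch.cosine_similarity(full, emb, dim=0).item()
        scores.append((word, dist))
    return scores

for word, score in word_importance("The movie was surprisingly good."):
    print(f"{word:>12s}  {score:.4f}")
```

Cosine distance from the full-sentence embedding is only one possible saliency measure; gradient- or attention-based scores are common alternatives.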