Updater-Extractor Architecture for Inductive World State Representations
| Author: | Moskvichev, Arseny; Liu, James A. |
| --- | --- |
| Year of publication: | 2021 |
| Subject: | |
| Document type: | Working Paper |
| Description: | Developing NLP models traditionally involves two stages: training and application. The retention of information acquired after training (i.e., at application time) is limited architecturally by the size of the model's context window (for transformers) or by the practical difficulties of processing long sequences (for RNNs). In this paper, we propose a novel transformer-based Updater-Extractor architecture and a training procedure that together can process sequences of arbitrary length and refine the model's knowledge of the world based on linguistic inputs. We explicitly train the model to incorporate incoming information into its world state representation, obtaining strong inductive generalization and the ability to handle extremely long-range dependencies. We prove a lemma that gives a theoretical basis for our approach; the result also sheds light on the success and failure modes of models trained with variants of Truncated Back-Propagation Through Time (such as Transformer XL). Empirically, we investigate the model's performance on three different tasks, demonstrating its promise. This preprint is a work in progress. So far, we have focused on easily interpretable tasks, leaving the application of the proposed ideas to practical NLP settings for future work. Comment: 15 pages (12 pages of main content, 3 pages of references and appendix), 4 figures |
| Database: | arXiv |
| External link: | |
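
The description above characterizes the Updater-Extractor pattern only in prose. Below is a minimal, hypothetical PyTorch sketch of that pattern: an updater that folds each incoming chunk of tokens into a fixed-size world-state representation, and an extractor that answers queries from that state alone. All module names, layer sizes, and the slot-based state layout are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an Updater-Extractor loop; module names,
# shapes, and hyperparameters are assumptions, not the authors' code.
import torch
import torch.nn as nn

class Updater(nn.Module):
    """Folds a new chunk of input into a fixed-size world state."""
    def __init__(self, d_model=64, n_state=8, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.n_state = n_state

    def forward(self, state, chunk):
        # state: (batch, n_state, d_model); chunk: (batch, chunk_len, d_model).
        # Concatenate state slots with the new tokens, run the encoder,
        # and keep only the updated state slots as the new world state.
        x = torch.cat([state, chunk], dim=1)
        return self.encoder(x)[:, :self.n_state, :]

class Extractor(nn.Module):
    """Answers a query using only the current world state."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.readout = nn.Linear(d_model, d_model)

    def forward(self, state, query):
        # query: (batch, 1, d_model) attends over the state slots.
        answer, _ = self.attn(query, state, state)
        return self.readout(answer)

# Usage: stream arbitrarily many chunks through the updater, then query.
d_model, n_state = 64, 8
updater, extractor = Updater(d_model, n_state), Extractor(d_model)
state = torch.zeros(1, n_state, d_model)           # initial world state
for chunk in (torch.randn(1, 16, d_model) for _ in range(100)):
    state = updater(state, chunk)                  # constant-size memory
answer = extractor(state, torch.randn(1, 1, d_model))
```

Because the state has a fixed number of slots, memory cost stays constant however many chunks are streamed in, which is what would let such an approach handle sequences of arbitrary length.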