LAMBERT: Layout-Aware (Language) Modeling for information extraction

Author: Garncarek, Łukasz; Powalski, Rafał; Stanisławek, Tomasz; Topolski, Bartosz; Halama, Piotr; Turski, Michał; Graliński, Filip
Year of publication: 2020
Source: In: Lladós J., Lopresti D., Uchida S. (eds) Document Analysis and Recognition - ICDAR 2021. ICDAR 2021. Lecture Notes in Computer Science, vol 12821. Springer, Cham
Document type: Working Paper
DOI: 10.1007/978-3-030-86549-8_34
Description: We introduce a simple new approach to the problem of understanding documents in which a non-trivial layout influences the local semantics. To this end, we modify the Transformer encoder architecture so that it can use layout features obtained from an OCR system, without the need to re-learn language semantics from scratch. We only augment the model's input with the coordinates of token bounding boxes, thus avoiding the use of raw images. This leads to a layout-aware language model which can then be fine-tuned on downstream tasks. The model is evaluated on an end-to-end information extraction task using four publicly available datasets: Kleister NDA, Kleister Charity, SROIE, and CORD. We show that our model achieves superior performance on datasets consisting of visually rich documents, while also outperforming the baseline RoBERTa on documents with a flat layout (NDA \(F_{1}\) increases from 78.50 to 80.42). Our solution ranked first on the public leaderboard for Key Information Extraction on the SROIE dataset, improving the SOTA \(F_{1}\)-score from 97.81 to 98.17. (A minimal sketch of the input-augmentation idea described here appears after this record.)
Comment: accepted to ICDAR 2021
Database: arXiv
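
Note: the following is a minimal illustrative sketch, not the authors' released implementation, of the core idea in the description: project OCR bounding-box coordinates into the embedding space and add them to the token embeddings of a pretrained RoBERTa, so layout enters the model without raw images. It assumes PyTorch and Hugging Face Transformers; the names LayoutAwareEmbedding and bbox_proj are hypothetical, introduced here for illustration.

```python
# Sketch: augment a pretrained Transformer's input with bounding-box
# coordinates (the paper's idea), keeping the pretrained language semantics.
# Class/attribute names are illustrative, not from the paper's code.
import torch
import torch.nn as nn
from transformers import RobertaModel

class LayoutAwareEmbedding(nn.Module):
    """Adds a linear projection of normalized bounding boxes to token embeddings."""

    def __init__(self, base: RobertaModel):
        super().__init__()
        self.word_embeddings = base.embeddings.word_embeddings
        hidden = base.config.hidden_size
        # Project (x0, y0, x1, y1) box coordinates into the embedding space.
        self.bbox_proj = nn.Linear(4, hidden)

    def forward(self, input_ids: torch.Tensor, bboxes: torch.Tensor) -> torch.Tensor:
        # input_ids: (batch, seq_len); bboxes: (batch, seq_len, 4), normalized to [0, 1]
        return self.word_embeddings(input_ids) + self.bbox_proj(bboxes)

# Usage: feed the combined embeddings to the pretrained encoder via
# inputs_embeds, so language semantics are reused rather than relearned,
# and only the layout projection must be trained from scratch.
base = RobertaModel.from_pretrained("roberta-base")
embed = LayoutAwareEmbedding(base)

input_ids = torch.randint(0, base.config.vocab_size, (1, 16))
bboxes = torch.rand(1, 16, 4)  # stand-in for OCR boxes, scaled by page size
outputs = base(inputs_embeds=embed(input_ids, bboxes))
print(outputs.last_hidden_state.shape)  # torch.Size([1, 16, 768])
```

The fine-tuning for downstream extraction tasks mentioned in the description would then proceed as with any pretrained encoder, with a task head on top of the layout-aware representations.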