How Low is Too Low? A Computational Perspective on Extremely Low-Resource Languages
| Author | Niko Schenk, Himanshu Choudhary, Rachit Bansal, Ravneet Punia, Jacob L. Dahl, Émilie Pagé-Perron |
|---|---|
| Year of publication | 2021 |
| Subject | FOS: Computer and information sciences; Computer Science - Computation and Language (cs.CL); Computer science; Natural language processing; Machine translation; Deep learning; Information extraction; Named-entity recognition; Pipeline (software); Domain (software engineering); Sumerian; Artificial intelligence; Interpretability |
| Source | ACL (student) |
| Description | Despite recent advances in attention-based deep learning architectures across most Natural Language Processing tasks, their application remains limited in low-resource settings because of the lack of pre-trained models for such languages. In this study, we make the first attempt to investigate the challenges of adapting these techniques to an extremely low-resource language -- Sumerian cuneiform -- one of the world's oldest written languages, attested from at least the beginning of the 3rd millennium BC. Specifically, we introduce the first cross-lingual information extraction pipeline for Sumerian, which includes part-of-speech tagging, named entity recognition, and machine translation. We further curate InterpretLR, an interpretability toolkit for low-resource NLP, and use it alongside human attributions to make sense of the models. We emphasize human evaluations to gauge all our techniques. Notably, most components of our pipeline can be generalised to any other language to obtain an interpretable execution of the techniques, especially in a low-resource setting. We publicly release all software, model checkpoints, and a novel dataset with domain-specific pre-processing to promote further research. Comment: ACL SRW 2021 |
| Database | OpenAIRE |
| External link | |