Language Models as Knowledge Bases?
Authors: | Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Yuxiang Wu, Anton Bakhtin, Fabio Petroni, Alexander H. Miller |
---|---|
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences; Computation and Language (cs.CL); Natural language processing; Language model; Question answering; Artificial intelligence; Computer science; Training set; Recall; Oracle; Schema |
Source: | EMNLP/IJCNLP (1); Scopus-Elsevier |
DOI: | 10.18653/v1/d19-1250 |
Description: | Recent progress in pretraining language models on large textual corpora has led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements (see the cloze-probing sketch after this record). Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA. Comment: accepted at EMNLP 2019 |
Database: | OpenAIRE |
External link: |
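
As a concrete illustration of the "fill-in-the-blank" cloze querying described above, here is a minimal sketch that probes a pretrained masked language model without any fine-tuning. It assumes the Hugging Face `transformers` library and the `bert-base-cased` checkpoint; both, along with the example sentence, are illustrative choices and not the authors' own setup, which is implemented in the LAMA repository linked in the description.

```python
# Minimal sketch of cloze-style factual probing with a pretrained masked LM,
# assuming the Hugging Face `transformers` library (not the authors' LAMA code;
# see https://github.com/facebookresearch/LAMA for their implementation).
from transformers import pipeline

# Load a fill-mask pipeline backed by a pretrained BERT model (no fine-tuning).
fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Query relational knowledge as a "fill-in-the-blank" cloze statement.
for prediction in fill_mask("Dante was born in [MASK]."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```

The ranked completions and their probabilities give a rough, manual version of what the paper's probe measures at scale: how much relational knowledge the model recalls purely from pretraining.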