Improving Candidate Generation for Low-resource Cross-lingual Entity Linking
Authors: Shruti Rijhwani, John Wieting, Jaime G. Carbonell, Graham Neubig, Shuyan Zhou
Language: English
Year of publication: 2020
Subject: Computer and information sciences; Computation and Language (cs.CL); entity linking; cross-lingual entity linking; low-resource languages; knowledge base; computational linguistics; natural language processing; artificial intelligence
Source: Transactions of the Association for Computational Linguistics, Vol 8, Pp 109-124 (2020)
Description: Cross-lingual entity linking (XEL) is the task of finding referents in a target-language knowledge base (KB) for mentions extracted from source-language texts. The first step of (X)EL is candidate generation, which retrieves a list of plausible candidate entities from the target-language KB for each mention. Approaches based on resources from Wikipedia have proven successful for relatively high-resource languages (HRL), but these do not extend well to low-resource languages (LRL) with few, if any, Wikipedia pages. Recently, transfer learning methods have been shown to reduce the demand for resources in the LRL by utilizing resources in closely related languages, but performance still lags far behind that of their high-resource counterparts. In this paper, we first assess the problems faced by current entity candidate generation methods for low-resource XEL, then propose three improvements that (1) reduce the disconnect between entity mentions and KB entries, and (2) improve the robustness of the model to low-resource scenarios. The methods are simple but effective: we evaluate our approach on seven XEL datasets and find that it yields an average gain of 16.9% in Top-30 gold candidate recall compared to state-of-the-art baselines. Our improved model also yields an average gain of 7.9% in in-KB accuracy of end-to-end XEL. Accepted to TACL 2020.
Database: OpenAIRE
External link:
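To make the candidate-generation setting described in the abstract concrete, the following is a minimal illustrative sketch (not the paper's method): it ranks KB entity names against a mention by character n-gram overlap and measures Top-k gold candidate recall, the metric the abstract reports. All data and function names here are hypothetical examples.

```python
from collections import Counter


def char_ngrams(text, n=3):
    """Character n-grams of a lowercased string, with boundary markers."""
    padded = f"#{text.lower()}#"
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))


def ngram_similarity(a, b, n=3):
    """Dice coefficient over character n-gram multisets."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    overlap = sum((ga & gb).values())
    total = sum(ga.values()) + sum(gb.values())
    return 2 * overlap / total if total else 0.0


def generate_candidates(mention, kb_entities, top_k=30):
    """Retrieve the top-k KB entity names by surface similarity to the mention."""
    ranked = sorted(kb_entities,
                    key=lambda e: ngram_similarity(mention, e),
                    reverse=True)
    return ranked[:top_k]


def top_k_recall(candidate_lists, gold_entities):
    """Fraction of mentions whose gold entity appears in its candidate list."""
    hits = sum(1 for cands, gold in zip(candidate_lists, gold_entities)
               if gold in cands)
    return hits / len(gold_entities)


# Toy KB and misspelled mentions (hypothetical data, for illustration only).
kb = ["Barack Obama", "Michelle Obama", "Osama bin Laden", "Oman"]
mentions = ["Barak Obama", "Michele Obama"]
gold = ["Barack Obama", "Michelle Obama"]

candidates = [generate_candidates(m, kb, top_k=2) for m in mentions]
print(top_k_recall(candidates, gold))  # → 1.0
```

The paper's contribution lies precisely where such surface-matching heuristics break down: in low-resource languages, mention strings and KB entries often differ in script or spelling conventions, so the disconnect the abstract describes cannot be closed by string similarity alone.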