Continual knowledge infusion into pre-trained biomedical language models.
Author: Jha K (Department of Computer Science, University of Virginia, Charlottesville, VA 22903, USA); Zhang A (Department of Computer Science, University of Virginia, Charlottesville, VA 22903, USA)
Language: English
Source: Bioinformatics (Oxford, England), 2022 Jan 03; Vol. 38 (2), pp. 494-502.
DOI: 10.1093/bioinformatics/btab671
Abstract:
Motivation: Biomedical language models produce meaningful concept representations that are useful for a variety of biomedical natural language processing (bioNLP) applications such as named entity recognition, relationship extraction and question answering. Recent research has shown that contextualized language models (e.g. BioBERT, BioELMo) possess tremendous representational power and achieve impressive accuracy gains. However, these models still cannot learn high-quality representations for concepts with little contextual information (i.e. rare words). Infusing complementary information from knowledge bases (KBs) is likely to help when the corpus-specific information is insufficient to learn robust representations. Moreover, as the biomedical domain contains numerous KBs, it is imperative to develop approaches that can integrate the KBs in a continual fashion.
Results: We propose a new representation learning approach that progressively fuses the semantic information from multiple KBs into pre-trained biomedical language models. Since most KBs in the biomedical domain are expressed as parent-child hierarchies, we model the hierarchical KBs and propose a new knowledge modeling strategy that encodes their topological properties at a granular level. Moreover, the proposed continual learning technique efficiently updates the concept representations to accommodate new knowledge while preserving the memory efficiency of contextualized language models. Altogether, the proposed approach generates knowledge-powered embeddings with high fidelity and learning efficiency. Extensive experiments conducted on bioNLP tasks validate the efficacy of the proposed approach and demonstrate its capability to generate robust concept representations.
(© The Author(s) 2021. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.)
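The abstract gives no implementation details, so the snippet below is only a minimal, hypothetical sketch of the generic pattern it describes: deriving a concept vector from the concept's parent-child neighborhood in a hierarchical KB and fusing it with a contextual language-model embedding. All names here (hierarchy_embedding, fuse, alpha, the toy concept IDs) are invented for illustration and do not come from the paper; the random vectors merely stand in for BioBERT-style and KB embeddings.

```python
# Illustrative sketch only, not the authors' method: combine a contextual
# embedding with a vector derived from a concept's KB neighborhood.
import numpy as np

def hierarchy_embedding(concept, parents, children, kb_vectors):
    """Average the KB vectors of a concept and its parents/children,
    a crude stand-in for encoding local parent-child topology."""
    neighbors = [concept] + parents + children
    return np.mean([kb_vectors[c] for c in neighbors], axis=0)

def fuse(contextual_vec, kb_vec, alpha=0.5):
    """Convex combination of the contextual and KB-derived vectors."""
    return alpha * contextual_vec + (1.0 - alpha) * kb_vec

# Toy usage with random 768-d vectors standing in for real embeddings.
rng = np.random.default_rng(0)
concepts = ["C1", "C1_parent", "C1_child"]
kb_vectors = {c: rng.normal(size=768) for c in concepts}

kb_vec = hierarchy_embedding("C1", parents=["C1_parent"],
                             children=["C1_child"], kb_vectors=kb_vectors)
contextual_vec = rng.normal(size=768)  # stand-in for a BioBERT concept embedding
fused = fuse(contextual_vec, kb_vec, alpha=0.7)
print(fused.shape)  # (768,)
```

In a continual setting as described in the abstract, one would presumably repeat such a fusion step as each new KB arrives while keeping the base language model fixed, but the paper itself should be consulted for the actual knowledge modeling and update strategy.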
Database: MEDLINE