RSRNeT: a novel multi-modal network framework for named entity recognition and relation extraction

Authors: Min Wang, Hongbin Chen, Dingcai Shen, Baolei Li, Shiyu Hu
Language: English
Year of publication: 2024
Subject:
Source: PeerJ Computer Science, Vol 10, p e1856 (2024)
Document type: article
ISSN: 2376-5992
DOI: 10.7717/peerj-cs.1856
Description: Named entity recognition (NER) and relation extraction (RE) are two key technologies for knowledge extraction in knowledge graph construction. Uni-modal NER and RE approaches rely solely on text, which leads to limitations such as suboptimal performance and poor handling of polysemous words. With the development of multi-modal learning, multi-modal named entity recognition (MNER) and multi-modal relation extraction (MRE) have been introduced to improve recognition performance. However, existing MNER and MRE methods often degrade when the accompanying image is unrelated to the text. To address this problem, we propose RSRNeT, a novel multi-modal network framework for named entity recognition and relation extraction. In RSRNeT, we design a multi-scale visual feature extraction module based on the ResNeSt network to capture visual features more fully, and a multi-modal feature fusion module based on the RoBERTa network to fuse the modalities more comprehensively while minimizing interference from irrelevant images. Together, these two modules learn superior visual and textual representations and reduce errors caused by irrelevant images. We extensively evaluate our approach against a range of baseline models on MNER and MRE tasks. Experimental results show that our method achieves state-of-the-art recall and F1 scores on three public datasets: Twitter2015, Twitter2017 and MNRE.
Database: Directory of Open Access Journals
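The core idea the abstract attributes to the fusion module, letting visual features contribute less when the image is unrelated to the text, can be sketched in pure Python. This is a minimal illustration, not the paper's actual RoBERTa-based implementation: the function names, the cosine-similarity relevance score, and the scalar sigmoid gate are all illustrative assumptions (real models typically learn the gate from data).

```python
import math

def sigmoid(x):
    """Standard logistic function, squashing a score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    """Dot product of two equal-length vectors given as lists."""
    return sum(x * y for x, y in zip(a, b))

def gated_fusion(text_vec, visual_vec):
    """Fuse a text feature vector with a visual feature vector.

    A relevance score (here: cosine similarity, an illustrative
    choice) is passed through a sigmoid to form a scalar gate g in
    (0, 1); the visual features are scaled by g before being added
    to the text features, so an image whose features do not align
    with the text contributes less to the fused representation.
    """
    norm_t = math.sqrt(dot(text_vec, text_vec))
    norm_v = math.sqrt(dot(visual_vec, visual_vec))
    relevance = dot(text_vec, visual_vec) / (norm_t * norm_v)
    g = sigmoid(relevance)
    return [t + g * v for t, v in zip(text_vec, visual_vec)]
```

With an aligned image the gate is larger (sigmoid of a positive similarity), so the visual vector shifts the fused representation more; with a misaligned image the gate shrinks toward sigmoid of a negative similarity, damping the visual contribution.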