Showing 1 - 10 of 38
for search: '"Guo, Lingbing"'
Author:
Zhang, Yichi, Chen, Zhuo, Guo, Lingbing, Xu, Yajing, Hu, Binbin, Liu, Ziqi, Zhang, Wen, Chen, Huajun
Multi-modal knowledge graph completion (MMKGC) aims to automatically discover new knowledge triples in the given multi-modal knowledge graphs (MMKGs), which is achieved by collaboratively modeling the structural information concealed in massive triples…
External link:
http://arxiv.org/abs/2405.16869
Author:
Zhang, Yichi, Hu, Binbin, Chen, Zhuo, Guo, Lingbing, Liu, Ziqi, Zhang, Zhiqiang, Liang, Lei, Chen, Huajun, Zhang, Wen
Knowledge graphs (KGs) provide reliable external knowledge for a wide variety of AI tasks in the form of structured triples. Knowledge graph pre-training (KGP) aims to pre-train neural networks on large-scale KGs and provide unified interfaces to enhance…
External link:
http://arxiv.org/abs/2405.13085
Author:
Zhang, Yichi, Chen, Zhuo, Guo, Lingbing, Xu, Yajing, Hu, Binbin, Liu, Ziqi, Chen, Huajun, Zhang, Wen
Multi-modal knowledge graphs (MMKG) store structured world knowledge containing rich multi-modal descriptive information. To overcome their inherent incompleteness, multi-modal knowledge graph completion (MMKGC) aims to discover unobserved knowledge…
External link:
http://arxiv.org/abs/2404.09468
Author:
Zhang, Yichi, Chen, Zhuo, Guo, Lingbing, Xu, Yajing, Hu, Binbin, Liu, Ziqi, Zhang, Wen, Chen, Huajun
Multi-modal knowledge graph completion (MMKGC) aims to automatically discover the unobserved factual knowledge from a given multi-modal knowledge graph by collaboratively modeling the triple structure and multi-modal information from entities. However…
External link:
http://arxiv.org/abs/2406.17605
The advancement of Multi-modal Pre-training highlights the necessity for a robust Multi-Modal Knowledge Graph (MMKG) representation learning framework. This framework is crucial for integrating structured knowledge into multi-modal Large Language Models…
External link:
http://arxiv.org/abs/2403.06832
Entity alignment (EA) aims to identify entities across different knowledge graphs that represent the same real-world objects. Recent embedding-based EA methods have achieved state-of-the-art performance in EA yet faced interpretability challenges as…
External link:
http://arxiv.org/abs/2402.11000
Author:
Chen, Zhuo, Zhang, Yichi, Fang, Yin, Geng, Yuxia, Guo, Lingbing, Chen, Xiang, Li, Qian, Zhang, Wen, Chen, Jiaoyan, Zhu, Yushan, Li, Jiaqi, Liu, Xiaoze, Pan, Jeff Z., Zhang, Ningyu, Chen, Huajun
Knowledge Graphs (KGs) play a pivotal role in advancing various AI applications, with the semantic web community's exploration into multi-modal dimensions unlocking new avenues for innovation. In this survey, we carefully review over 300 articles…
External link:
http://arxiv.org/abs/2402.05391
Published in:
Data Intelligence, Vol 1, Iss 3, Pp 289-308 (2019)
Knowledge graph (KG) completion aims at filling in the missing facts in a KG, where a fact is typically represented as a triple in the form of (head, relation, tail). Traditional KG completion methods require two-thirds of a triple to be provided (e.g., head…
External link:
https://doaj.org/article/65efd4ad54fc42bb9396a6fafdfbee17
Large language model (LLM) based knowledge graph completion (KGC) aims to predict the missing triples in the KGs with LLMs. However, research on LLM-based KGC fails to sufficiently harness LLMs' inference proficiencies, overlooking critical structural…
External link:
http://arxiv.org/abs/2310.06671
The objective of Entity Alignment (EA) is to identify equivalent entity pairs from multiple Knowledge Graphs (KGs) and create a more comprehensive and unified KG. The majority of EA methods have primarily focused on the structural modality of KGs…
External link:
http://arxiv.org/abs/2310.05364