Showing 1 - 10 of 121 for search: '"Xu, Chengjin"'
Automatic chart understanding is crucial for content comprehension and document parsing. Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in chart understanding through domain-specific alignment and fine-tuning. However, …
External link:
http://arxiv.org/abs/2409.03277
Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in processing and generating content across multiple data modalities. However, a significant drawback of MLLMs is their reliance on static training data, leading to ou…
External link:
http://arxiv.org/abs/2407.21439
Author:
Ma, Shengjie, Xu, Chengjin, Jiang, Xuhui, Li, Muzhi, Qu, Huaren, Yang, Cehao, Mao, Jiaxin, Guo, Jian
Retrieval-augmented generation (RAG) has enhanced large language models (LLMs) by using knowledge retrieval to address knowledge gaps. However, existing RAG approaches often fail to ensure the depth and completeness of the information retrieved, which …
External link:
http://arxiv.org/abs/2407.10805
Artificial intelligence is making significant strides in the finance industry, revolutionizing how data is processed and interpreted. Among these technologies, large language models (LLMs) have demonstrated substantial potential to transform financial …
External link:
http://arxiv.org/abs/2407.00365
Knowledge Graphs (KGs) are foundational structures in many AI applications, representing entities and their interrelations through triples. However, triple-based KGs lack the contextual information of relational knowledge, like temporal dynamics and …
External link:
http://arxiv.org/abs/2406.11160
Author:
Luo, Yi, Lin, Zhenghao, Zhang, Yuhao, Sun, Jiashuo, Lin, Chen, Xu, Chengjin, Su, Xiangdong, Shen, Yelong, Guo, Jian, Gong, Yeyun
Large Language Models (LLMs) exhibit impressive capabilities but also present risks such as biased content generation and privacy issues. One of the current alignment techniques includes principle-driven integration, but it faces challenges arising f…
External link:
http://arxiv.org/abs/2403.11838
Author:
Jiang, Xuhui, Shen, Yinghan, Shi, Zhichao, Xu, Chengjin, Li, Wei, Li, Zixuan, Guo, Jian, Shen, Huawei, Wang, Yuanzhuo
Entity Alignment (EA) is vital for integrating diverse knowledge graph (KG) data, playing a crucial role in data-driven AI applications. Traditional EA methods primarily rely on comparing entity embeddings, but their effectiveness is constrained by t…
External link:
http://arxiv.org/abs/2402.15048
Hallucinations in large language models (LLMs) are always seen as limitations. However, could they also be a source of creativity? This survey explores this possibility, suggesting that hallucinations may contribute to LLM application by fostering cr…
External link:
http://arxiv.org/abs/2402.06647
Multimodal Large Language Models (MLLMs) have shown impressive capabilities in image understanding and generation. However, current benchmarks fail to accurately evaluate the chart comprehension of MLLMs due to limited chart types and inappropriate m…
External link:
http://arxiv.org/abs/2312.15915
Author:
Jiang, Xuhui, Xu, Chengjin, Shen, Yinghan, Sun, Xun, Tang, Lumingyuan, Wang, Saizhuo, Chen, Zhongwu, Wang, Yuanzhuo, Guo, Jian
Knowledge graphs (KGs) are structured representations of diversified knowledge. They are widely used in various intelligent applications. In this article, we provide a comprehensive survey on the evolution of various types of knowledge graphs (i.e., …
External link:
http://arxiv.org/abs/2310.04835