Showing 1 - 10
of 742
for search: '"Chen, Huajun"'
Author:
Zhang, Wen, Jin, Long, Zhu, Yushan, Chen, Jiaoyan, Huang, Zhiwei, Wang, Junjie, Hua, Yin, Liang, Lei, Chen, Huajun
Natural language question answering (QA) over structured data sources such as tables and knowledge graphs (KGs) has been widely investigated, for example with Large Language Models (LLMs). The main solutions include question-to-formal-query parsing…
External link:
http://arxiv.org/abs/2406.18916
Author:
Zhou, Wangchunshu, Ou, Yixin, Ding, Shengwei, Li, Long, Wu, Jialong, Wang, Tiannan, Chen, Jiamin, Wang, Shuai, Xu, Xiaohua, Zhang, Ningyu, Chen, Huajun, Jiang, Yuchen Eleanor
The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing "language agents", which are complex large language model (LLM) pipelines involving both prompting techniques and tool usage methods. While languag…
External link:
http://arxiv.org/abs/2406.18532
Author:
Zhang, Wen, Xu, Yajing, Ye, Peng, Huang, Zhiwei, Xu, Zezhong, Chen, Jiaoyan, Pan, Jeff Z., Chen, Huajun
Knowledge graph (KG) completion aims to find missing triples in a KG. Some tasks, such as link prediction and instance completion, have been proposed for KG completion. They are triple-level tasks with some elements in a missing triple given to p…
External link:
http://arxiv.org/abs/2406.18166
Author:
Wang, Junjie, Chen, Mingyang, Hu, Binbin, Yang, Dan, Liu, Ziqi, Shen, Yue, Wei, Peng, Zhang, Zhiqiang, Gu, Jinjie, Zhou, Jun, Pan, Jeff Z., Zhang, Wen, Chen, Huajun
Improving the performance of large language models (LLMs) in complex question-answering (QA) scenarios has always been a research focal point. Recent studies have attempted to enhance LLMs' performance by combining step-wise planning with external re…
External link:
http://arxiv.org/abs/2406.14282
Instruction-based image editing has made great progress in using natural human language to manipulate the visual content of images. However, existing models are limited by the quality of the dataset and cannot accurately localize editing regions in…
External link:
http://arxiv.org/abs/2406.09973
Author:
Feng, Kehua, Ding, Keyan, Wang, Weijie, Zhuang, Xiang, Wang, Zeyuan, Qin, Ming, Zhao, Yu, Yao, Jianhua, Zhang, Qiang, Chen, Huajun
The burgeoning utilization of Large Language Models (LLMs) in scientific research necessitates advanced benchmarks capable of comprehensively evaluating their understanding and application of scientific knowledge. To address this need, we introduce t…
External link:
http://arxiv.org/abs/2406.09098
The remarkable capabilities of modern large language models are rooted in their vast repositories of knowledge encoded within their parameters, enabling them to perceive the world and engage in reasoning. The inner workings of how these models store…
External link:
http://arxiv.org/abs/2405.17969
Author:
Zhang, Yichi, Chen, Zhuo, Guo, Lingbing, Xu, Yajing, Hu, Binbin, Liu, Ziqi, Zhang, Wen, Chen, Huajun
Multi-modal knowledge graph completion (MMKGC) aims to automatically discover new knowledge triples in given multi-modal knowledge graphs (MMKGs), which is achieved by collaboratively modeling the structural information concealed in massive triples…
External link:
http://arxiv.org/abs/2405.16869
Author:
Qiao, Shuofei, Fang, Runnan, Zhang, Ningyu, Zhu, Yuqi, Chen, Xiang, Deng, Shumin, Jiang, Yong, Xie, Pengjun, Huang, Fei, Chen, Huajun
Recent endeavors towards directly using large language models (LLMs) as agent models to execute interactive planning tasks have shown commendable results. Despite their achievements, however, they still struggle with brainless trial-and-error in glob…
External link:
http://arxiv.org/abs/2405.14205
Author:
Wang, Peng, Li, Zexi, Zhang, Ningyu, Xu, Ziwen, Yao, Yunzhi, Jiang, Yong, Xie, Pengjun, Huang, Fei, Chen, Huajun
Large language models (LLMs) need knowledge updates to keep pace with ever-growing world facts and to correct hallucinated responses, motivating methods for lifelong model editing. Where the updated knowledge resides in memories is a fundamental ques…
External link:
http://arxiv.org/abs/2405.14768