Knowledge Mechanisms in Large Language Models: A Survey and Perspective
Author: Wang, Mengru; Yao, Yunzhi; Xu, Ziwen; Qiao, Shuofei; Deng, Shumin; Wang, Peng; Chen, Xiang; Gu, Jia-Chen; Jiang, Yong; Xie, Pengjun; Huang, Fei; Chen, Huajun; Zhang, Ningyu
Year of publication: 2024
Subject:
Document type: Working Paper
Description: Understanding knowledge mechanisms in Large Language Models (LLMs) is crucial for advancing towards trustworthy AGI. This paper reviews analyses of knowledge mechanisms through a novel taxonomy comprising knowledge utilization and knowledge evolution. Knowledge utilization examines the mechanisms of memorization, comprehension and application, and creation. Knowledge evolution focuses on the dynamic progression of knowledge within individual LLMs and across groups of LLMs. Moreover, we discuss what knowledge LLMs have learned, the reasons for the fragility of parametric knowledge, and the potential "dark knowledge" (a hypothesis) that will be challenging to address. We hope this work helps readers understand knowledge in LLMs and provides insights for future research. Comment: EMNLP 2024 Findings; 39 pages (v3)
Database: arXiv
External link: