Showing 1 - 10 of 17 for search: '"Ying, Jiahao"'
Author:
Tang, Wei; Cao, Yixin; Deng, Yang; Ying, Jiahao; Wang, Bo; Yang, Yizhe; Zhao, Yuyue; Zhang, Qi; Huang, Xuanjing; Jiang, Yugang; Liao, Yong
Knowledge utilization is a critical aspect of LLMs, and understanding how they adapt to evolving knowledge is essential for their effective deployment. However, existing benchmarks are predominantly static, failing to capture the evolving nature of…
External link:
http://arxiv.org/abs/2412.13582
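The abstract is cut off before the benchmark itself is described, so the following is only a hedged illustration of what evaluating "evolving" knowledge can look like: split time-stamped facts around an assumed training cutoff and compare a model's accuracy on facts it plausibly saw during training versus facts that changed afterward. The call_llm stub, the cutoff date, and the evaluation loop are assumptions for the sketch, not the paper's actual data or protocol.

from datetime import date

def call_llm(question: str) -> str:
    """Placeholder for any question-answering model endpoint."""
    return "stub answer"

# Two versions of the same fact, valid from different dates.
facts = [
    {"q": "Who is the UK prime minister?", "a": "Rishi Sunak",
     "valid_from": date(2022, 10, 25)},
    {"q": "Who is the UK prime minister?", "a": "Keir Starmer",
     "valid_from": date(2024, 7, 5)},
]

cutoff = date(2023, 1, 1)  # assumed training-data cutoff of the tested model
splits = {
    "static (pre-cutoff)": [f for f in facts if f["valid_from"] < cutoff],
    "evolved (post-cutoff)": [f for f in facts if f["valid_from"] >= cutoff],
}
for name, items in splits.items():
    correct = sum(call_llm(f["q"]).strip() == f["a"] for f in items)
    print(name, f"{correct}/{len(items)}")

A gap between the two splits would indicate the model answers from stale parametric memory, which is exactly the failure mode a static benchmark cannot surface.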
Large Language Models (LLMs) are versatile and demonstrate impressive generalization ability by mining and learning information from extensive unlabeled text. However, they still exhibit reasoning mistakes, often stemming from knowledge deficiencies…
External link:
http://arxiv.org/abs/2408.11431
Author:
Ying, Jiahao; Lin, Mingbao; Cao, Yixin; Tang, Wei; Wang, Bo; Sun, Qianru; Huang, Xuanjing; Yan, Shuicheng
This paper introduces the innovative "LLMs-as-Instructors" framework, which leverages the advanced Large Language Models (LLMs) to autonomously enhance the training of smaller target models. Inspired by the theory of "Learning from Errors", this framework…
External link:
http://arxiv.org/abs/2407.00497
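A minimal sketch of a "learning from errors" loop in the spirit of this abstract: evaluate the small target model, collect its mistakes, and ask a larger instructor model to synthesize new training examples aimed at those mistakes. The call_target and call_instructor stubs, the prompt wording, and the data shapes are assumptions, not the paper's actual protocol.

def call_target(question: str) -> str:
    """Placeholder for the smaller target model being trained."""
    return "stub answer"

def call_instructor(prompt: str) -> str:
    """Placeholder for the large instructor model."""
    return "stub synthesized example"

def collect_errors(eval_set: list[dict]) -> list[dict]:
    """Run the target model and keep the items it answers incorrectly."""
    errors = []
    for item in eval_set:
        prediction = call_target(item["question"])
        if prediction.strip() != item["answer"].strip():
            errors.append({**item, "prediction": prediction})
    return errors

def synthesize_training_data(errors: list[dict]) -> list[str]:
    """Ask the instructor model for a new exercise targeting each error."""
    samples = []
    for err in errors:
        prompt = (
            "A student model answered the following question incorrectly.\n"
            f"Question: {err['question']}\n"
            f"Student answer: {err['prediction']}\n"
            f"Correct answer: {err['answer']}\n"
            "Write one new training example that targets the same weakness."
        )
        samples.append(call_instructor(prompt))
    return samples

eval_set = [{"question": "2 + 2 * 3 = ?", "answer": "8"}]
new_data = synthesize_training_data(collect_errors(eval_set))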
While large language models (LLMs) have made notable advancements in natural language processing, they continue to struggle with processing extensive text. Memory mechanisms offer a flexible solution for managing long contexts, utilizing techniques…
External link:
http://arxiv.org/abs/2406.13167
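The abstract is truncated before the specific techniques are named, so as a dependency-free toy illustration only: one common memory design chunks the long input, stores the chunks, and retrieves the most relevant ones at query time. Plain word-overlap scoring stands in for whatever retrieval the paper actually surveys.

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a long text into fixed-size word windows to store as memory."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(memory: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(memory,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

document = "..."  # stands in for a text exceeding the context window
memory = chunk(document)
context = "\n".join(retrieve(memory, "what does the contract say about fees"))
# `context` would then be prepended to the prompt in place of the full text.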
Retrieval-Augmented Generation (RAG) is an effective solution for supplementing large language models (LLMs) with necessary knowledge. To address the bottleneck of retriever performance, the "generate-then-read" pipeline has been proposed to replace the retrieval stage…
External link:
http://arxiv.org/abs/2406.03963
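A hedged sketch of the "generate-then-read" idea named in the abstract: instead of retrieving a document, a generator LLM writes a background passage, and a reader then answers from that passage. The call_llm stub and both prompt templates are assumptions rather than the paper's exact formulations.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat/completion endpoint."""
    return "stub output"

def generate_then_read(question: str) -> str:
    # Step 1: generate a contextual document instead of retrieving one.
    passage = call_llm(
        f"Write a short background passage that helps answer:\n{question}"
    )
    # Step 2: read the generated passage to produce the final answer.
    return call_llm(
        f"Passage:\n{passage}\n\nUsing only the passage, answer:\n{question}"
    )

print(generate_then_read("Who proposed the transformer architecture?"))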
Author:
Ying, Jiahao; Cao, Yixin; Bai, Yushi; Sun, Qianru; Wang, Bo; Tang, Wei; Ding, Zhaojun; Yang, Yizhe; Huang, Xuanjing; Yan, Shuicheng
Large language models (LLMs) have achieved impressive performance across various natural language benchmarks, prompting a continual need to curate more difficult datasets for larger LLMs, which is costly and time-consuming. In this paper, we propose…
External link:
http://arxiv.org/abs/2402.11894
This study investigates the behaviors of Large Language Models (LLMs) when faced with prompts that conflict with their internal memory. This not only helps in understanding LLMs' decision mechanisms but also benefits real-world applications, such as…
External link:
http://arxiv.org/abs/2309.17415
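A minimal probe, under assumptions, of the behavior this abstract studies: ask the model a question closed-book, then ask again with an in-prompt claim that contradicts its parametric knowledge, and compare the two answers. The call_llm stub, the prompt wording, and the example fact are illustrative, not taken from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat/completion endpoint."""
    return "stub output"

def probe_conflict(question: str, counter_claim: str) -> dict:
    closed_book = call_llm(f"Answer briefly: {question}")
    conflicted = call_llm(
        f"Context: {counter_claim}\n"
        f"Answer briefly using the context: {question}"
    )
    return {
        "closed_book": closed_book,
        "with_conflicting_context": conflicted,
        "followed_context": closed_book.strip() != conflicted.strip(),
    }

result = probe_conflict(
    "What is the capital of Australia?",
    "The capital of Australia is Sydney.",  # deliberately false claim
)

Run over many such pairs, the fraction of answers that flip toward the injected claim gives a rough measure of whether the model defers to the prompt or to its internal memory.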
Author:
Bai, Yushi; Ying, Jiahao; Cao, Yixin; Lv, Xin; He, Yuze; Wang, Xiaozhi; Yu, Jifan; Zeng, Kaisheng; Xiao, Yijia; Lyu, Haozhe; Zhang, Jiayin; Li, Juanzi; Hou, Lei
Numerous benchmarks have been established to assess the performance of foundation models on open-ended question answering, which serves as a comprehensive test of a model's ability to understand and generate language in a manner similar to humans…
External link:
http://arxiv.org/abs/2306.04181
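The snippet cuts off before this paper's method is described, so as a hedged illustration only: one common design for grading open-ended answers is to ask a strong LLM to act as the grader. The call_judge stub, the rubric wording, and the 1-5 scale are assumptions for the sketch, not necessarily the paper's setup.

def call_judge(prompt: str) -> str:
    """Placeholder for a strong grader model; returns a digit as text."""
    return "4"

def grade_answer(question: str, answer: str) -> int:
    """Score a free-form answer on a 1-5 scale via an LLM grader."""
    prompt = (
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Rate the answer's correctness and completeness from 1 (poor) to "
        "5 (excellent). Reply with a single digit."
    )
    return int(call_judge(prompt).strip()[0])

score = grade_answer("Explain overfitting.", "Model memorizes training noise.")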
Published in:
In International Immunopharmacology, November 2023, Volume 124, Part A
Published in:
In Surfaces and Interfaces, February 2023, Volume 36