Showing 1 - 4 of 4 for search: '"Heng, Yuzhao"'
The recent emergence of Large Language Models (LLMs) has heralded a new era of human-AI interaction. These sophisticated models, exemplified by ChatGPT and its successors, have exhibited remarkable capabilities in language understanding. However, as…
External link:
http://arxiv.org/abs/2407.18078
Authors:
Deng, Chunyuan, Zhao, Yilun, Heng, Yuzhao, Li, Yitong, Cao, Jiannan, Tang, Xiangru, Cohan, Arman
Data contamination has garnered increased attention in the era of large language models (LLMs) due to the reliance on extensive internet-derived training corpora. The issue of training corpus overlap with evaluation benchmarks--referred to as contami…
External link:
http://arxiv.org/abs/2406.14644
Although Large Language Models (LLMs) exhibit remarkable adaptability across domains, these models often fall short in structured knowledge extraction tasks such as named entity recognition (NER). This paper explores an innovative, cost-efficient str…
External link:
http://arxiv.org/abs/2403.11103
Authors:
Clarke, Christopher, Heng, Yuzhao, Kang, Yiping, Flautner, Krisztian, Tang, Lingjia, Mars, Jason
Conventional approaches to text classification typically assume the existence of a fixed set of predefined labels to which a given text can be classified. However, in real-world applications, there exists an infinite label space for describing a give…
External link:
http://arxiv.org/abs/2305.16521