Showing 1 - 10 of 4,053 for search: '"Yao, Feng"'
Author:
Xiao, Chaojun, Zhang, Zhengyan, Song, Chenyang, Jiang, Dazhi, Yao, Feng, Han, Xu, Wang, Xiaozhi, Wang, Shuo, Huang, Yufei, Lin, Guanyu, Chen, Yingfa, Zhao, Weilin, Tu, Yuge, Zhong, Zexuan, Zhang, Ao, Si, Chenglei, Moo, Khai Hao, Zhao, Chenyang, Chen, Huimin, Lin, Yankai, Liu, Zhiyuan, Shang, Jingbo, Sun, Maosong
Advancements in LLMs have recently unveiled challenges tied to computational efficiency and continual scalability due to their requirements of huge parameters, making the applications and evolution of these models on devices with limited computation…
External link:
http://arxiv.org/abs/2409.02877
Author:
Zhou, Yijie, Gong, Shufeng, Yao, Feng, Chen, Hanzhang, Yu, Song, Liu, Pengxi, Zhang, Yanfeng, Yu, Ge, Yu, Jeffrey Xu
Enhancing the efficiency of iterative computation on graphs has garnered considerable attention in both industry and academia. Nonetheless, the majority of efforts focus on expediting iterative computation by minimizing the running time per iteration…
External link:
http://arxiv.org/abs/2407.14544
The opacity in developing large language models (LLMs) is raising growing concerns about the potential contamination of public benchmarks in the pre-training data. Existing contamination detection methods are typically based on the text overlap between…
External link:
http://arxiv.org/abs/2406.13236
Controlling the attribute intensity of text generation is crucial across scenarios (e.g., writing conciseness, chatting emotion, and explanation clarity). The remarkable capabilities of large language models (LLMs) have revolutionized text generation…
External link:
http://arxiv.org/abs/2406.04460
Author:
Gao, Xiaochen Kev, Yao, Feng, Zhao, Kewen, He, Beilei, Kumar, Animesh, Krishnan, Vish, Shang, Jingbo
Model scaling is becoming the default choice for many language tasks due to the success of large language models (LLMs). However, it can fall short in specific scenarios where simple customized methods excel. In this paper, we delve into the patent…
External link:
http://arxiv.org/abs/2404.14372
Information extraction (IE) is a fundamental area in natural language processing where prompting large language models (LLMs), even with in-context examples, cannot defeat small LMs tuned on very small IE datasets. We observe that IE tasks, such as…
External link:
http://arxiv.org/abs/2404.00457
Published in:
CIKM 2023
Similar case retrieval (SCR) is a representative legal AI application that plays a pivotal role in promoting judicial fairness. However, existing SCR datasets only focus on the fact description section when judging the similarity between cases, ignoring…
External link:
http://arxiv.org/abs/2310.15602
Author:
Peng, Hao, Wang, Xiaozhi, Yao, Feng, Wang, Zimu, Zhu, Chuzhao, Zeng, Kaisheng, Hou, Lei, Li, Juanzi
Event understanding aims at understanding the content and relationship of events within texts, which covers multiple complicated information extraction tasks: event detection, event argument extraction, and event relation extraction. To facilitate…
External link:
http://arxiv.org/abs/2309.14258