Showing 1 - 4 of 4 for search: '"Tang, Zeyun"'
Benefiting from the revolutionary advances in large language models (LLMs) and foundational vision models, large vision-language models (LVLMs) have also made significant progress. However, current benchmarks focus on tasks that evaluate only a single…
External link:
http://arxiv.org/abs/2410.12564
Authors:
Yang, Hongkang, Lin, Zehao, Wang, Wenjin, Wu, Hao, Li, Zhiyu, Tang, Bo, Wei, Wenqiang, Wang, Jinbo, Tang, Zeyun, Song, Shichao, Xi, Chenyang, Yu, Yu, Chen, Kai, Xiong, Feiyu, Tang, Linpeng, E, Weinan
The training and inference of large language models (LLMs) are together a costly process that transports knowledge from raw data to meaningful computation. Inspired by the memory hierarchy of the human brain, we reduce this cost by equipping LLMs with…
External link:
http://arxiv.org/abs/2407.01178
Authors:
Chen, Yanfang, Chen, Ding, Song, Shichao, Niu, Simin, Wang, Hanyu, Tang, Zeyun, Xiong, Feiyu, Li, Zhiyu
As people increasingly prioritize their health, the speed and breadth of health information dissemination on the internet have also grown. At the same time, the presence of false health information (health rumors) intermingled with genuine content poses…
External link:
http://arxiv.org/abs/2407.00668
Multi-hop reading comprehension across multiple documents has attracted much attention recently. In this paper, we propose a novel approach to tackle this multi-hop reading comprehension problem. Inspired by human reasoning processes, we construct a path…
External link:
http://arxiv.org/abs/2006.06478