Showing 1 - 10 of 407 for search: '"Li Juanzi"'
Author:
Zhang-Li, Daniel, Zhang, Zheyuan, Yu, Jifan, Yin, Joy Lim Jia, Tu, Shangqing, Gong, Linlu, Wang, Haohua, Liu, Zhiyuan, Liu, Huiqin, Hou, Lei, Li, Juanzi
The vast pre-existing slides serve as rich and important materials to carry lecture knowledge. However, effectively leveraging lecture slides to serve students is difficult due to the multi-modal nature of slide content and the heterogeneous teaching…
External link:
http://arxiv.org/abs/2409.07372
Author:
Yu, Jifan, Zhang, Zheyuan, Zhang-Li, Daniel, Tu, Shangqing, Hao, Zhanxin, Li, Rui Miao, Li, Haoxuan, Wang, Yuanchun, Li, Hanming, Gong, Linlu, Cao, Jie, Lin, Jiayin, Zhou, Jinchang, Qin, Fei, Wang, Haohua, Jiang, Jianxiao, Deng, Lijun, Zhan, Yisi, Xiao, Chaojun, Dai, Xusheng, Yan, Xuan, Lin, Nianyi, Zhang, Nan, Ni, Ruixin, Dang, Yang, Hou, Lei, Zhang, Yu, Han, Xu, Li, Manli, Li, Juanzi, Liu, Zhiyuan, Liu, Huiqin, Sun, Maosong
Since the first instances of online education, where courses were uploaded to accessible and shared online platforms, this form of scaling the dissemination of human knowledge to reach a broader audience has sparked extensive discussion and widespread…
External link:
http://arxiv.org/abs/2409.03512
Author:
Zhang, Jiajie, Bai, Yushi, Lv, Xin, Gu, Wanjun, Liu, Danqing, Zou, Minhao, Cao, Shulin, Hou, Lei, Dong, Yuxiao, Feng, Ling, Li, Juanzi
Though current long-context large language models (LLMs) have demonstrated impressive capacities in answering user questions based on extensive text, the lack of citations in their responses makes user verification difficult, leading to concerns about…
External link:
http://arxiv.org/abs/2409.02897
Author:
Hong, Wenyi, Wang, Weihan, Ding, Ming, Yu, Wenmeng, Lv, Qingsong, Wang, Yan, Cheng, Yean, Huang, Shiyu, Ji, Junhui, Xue, Zhao, Zhao, Lei, Yang, Zhuoyi, Gu, Xiaotao, Zhang, Xiaohan, Feng, Guanyu, Yin, Da, Wang, Zihan, Qi, Ji, Song, Xixuan, Zhang, Peng, Liu, Debing, Xu, Bin, Li, Juanzi, Dong, Yuxiao, Tang, Jie
Beginning with VisualGLM and CogVLM, we are continuously exploring VLMs in pursuit of enhanced vision-language fusion, efficient higher-resolution architecture, and broader modalities and applications. Here we propose the CogVLM2 family, a new generation…
External link:
http://arxiv.org/abs/2408.16500
Author:
Bai, Yushi, Zhang, Jiajie, Lv, Xin, Zheng, Linzhi, Zhu, Siqi, Hou, Lei, Dong, Yuxiao, Tang, Jie, Li, Juanzi
Current long context large language models (LLMs) can process inputs up to 100,000 tokens, yet struggle to generate outputs exceeding even a modest length of 2,000 words. Through controlled experiments, we find that the model's effective generation length…
External link:
http://arxiv.org/abs/2408.07055
Future event prediction (FEP) is a long-standing and crucial task in the world, as understanding the evolution of events enables early risk identification, informed decision-making, and strategic planning. Existing work typically treats event prediction…
External link:
http://arxiv.org/abs/2408.06578
Event Factuality Detection (EFD) task determines the factuality of textual events, i.e., classifying whether an event is a fact, possibility, or impossibility, which is essential for faithfully understanding and utilizing event knowledge. However, due…
External link:
http://arxiv.org/abs/2407.15352
Author:
Xin, Amy, Qi, Yunjia, Yao, Zijun, Zhu, Fangwei, Zeng, Kaisheng, Bin, Xu, Hou, Lei, Li, Juanzi
Entity Linking (EL) models are well-trained at mapping mentions to their corresponding entities according to a given context. However, EL models struggle to disambiguate long-tail entities due to their limited training data. Meanwhile, large language…
External link:
http://arxiv.org/abs/2407.04020
Large Language Models (LLMs) have shown significant promise as copilots in various tasks. Local deployment of LLMs on edge devices is necessary when handling privacy-sensitive data or latency-sensitive tasks. The computational constraints of such devices…
External link:
http://arxiv.org/abs/2406.19227
Author:
Zhang, Zheyuan, Zhang-Li, Daniel, Yu, Jifan, Gong, Linlu, Zhou, Jinchang, Liu, Zhiyuan, Hou, Lei, Li, Juanzi
Large language models (LLMs) have been employed in various intelligent educational tasks to assist teaching. While preliminary explorations have focused on independent LLM-empowered agents for specific educational tasks, the potential for LLMs within…
External link:
http://arxiv.org/abs/2406.19226