Showing 1 - 10 of 90 results for search: '"Yin, Yichun"'
Author:
Li, Xiangyang, Dong, Kuicai, Lee, Yi Quan, Xia, Wei, Yin, Yichun, Zhang, Hao, Liu, Yong, Wang, Yasheng, Tang, Ruiming
Despite the substantial success of Information Retrieval (IR) in various NLP tasks, most IR systems predominantly handle queries and corpora in natural language, neglecting the domain of code retrieval. Code retrieval is critically important yet remains …
External link:
http://arxiv.org/abs/2407.02883
Author:
Pan, Yu, Yuan, Ye, Yin, Yichun, Shi, Jiaxin, Xu, Zenglin, Zhang, Ming, Shang, Lifeng, Jiang, Xin, Liu, Qun
The rapid progress of Transformers in artificial intelligence has come at the cost of increased resource consumption and greenhouse gas emissions due to growing model sizes. Prior work suggests using pretrained small models to improve training efficiency …
External link:
http://arxiv.org/abs/2401.09192
Training large models from scratch usually costs a substantial amount of resources. Towards this problem, recent studies such as bert2BERT and LiGO have reused small pretrained models to initialize a large model (termed the "target model"), leading …
External link:
http://arxiv.org/abs/2310.10699
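To make the idea of reusing a small pretrained model concrete, here is a minimal Net2Net-style width-expansion sketch in Python: extra output units of a grown layer are initialized as copies of existing pretrained units. This only illustrates the general initialization idea behind methods like bert2BERT and LiGO, not either paper's actual algorithm; `widen_linear` and all sizes are hypothetical.

```python
import torch
import torch.nn as nn

def widen_linear(small: nn.Linear, new_out: int) -> nn.Linear:
    """Grow a pretrained linear layer to `new_out` output units by
    copying rows of its weight matrix (Net2Net-style). For an exactly
    function-preserving expansion, the next layer's incoming weights
    to duplicated units must also be rescaled (omitted here)."""
    out, inp = small.out_features, small.in_features
    assert new_out >= out, "target layer must be at least as wide"
    big = nn.Linear(inp, new_out, bias=small.bias is not None)
    # each new unit is a copy of a (randomly chosen) pretrained unit
    mapping = torch.cat([torch.arange(out),
                         torch.randint(0, out, (new_out - out,))])
    with torch.no_grad():
        big.weight.copy_(small.weight[mapping])
        if small.bias is not None:
            big.bias.copy_(small.bias[mapping])
    return big

# usage: initialize a 256-unit target layer from a 128-unit pretrained one
pretrained = nn.Linear(64, 128)
target = widen_linear(pretrained, 256)
```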
Author:
Xiong, Jing, Shen, Jianhao, Yuan, Ye, Wang, Haiming, Yin, Yichun, Liu, Zhengying, Li, Lin, Guo, Zhijiang, Cao, Qingxing, Huang, Yinya, Zheng, Chuanyang, Liang, Xiaodan, Zhang, Ming, Liu, Qun
Automated theorem proving (ATP) has become an appealing domain for exploring the reasoning ability of the recent successful generative language models. However, current ATP benchmarks mainly focus on symbolic inference, but rarely involve the understanding …
External link:
http://arxiv.org/abs/2310.10180
Author:
Xiong, Jing, Li, Zixuan, Zheng, Chuanyang, Guo, Zhijiang, Yin, Yichun, Xie, Enze, Yang, Zhicheng, Cao, Qingxing, Wang, Haiming, Han, Xiongwei, Tang, Jing, Li, Chengming, Liang, Xiaodan
Recent advances in natural language processing, primarily propelled by Large Language Models (LLMs), have showcased their remarkable capabilities grounded in in-context learning. A promising avenue for guiding LLMs in intricate reasoning tasks involves …
External link:
http://arxiv.org/abs/2310.02954
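A common concrete instantiation of exemplar-based in-context learning is to retrieve the training examples most similar to the query and paste them into the prompt. The sketch below shows only that retrieve-then-prompt baseline, not this paper's specific method; all names and the prompt format are illustrative.

```python
import numpy as np

def build_fewshot_prompt(query_vec: np.ndarray,
                         exemplar_vecs: np.ndarray,
                         exemplars: list[tuple[str, str]],
                         question: str, k: int = 4) -> str:
    """Select the k exemplars whose embeddings have the highest cosine
    similarity to the query embedding, then format a few-shot prompt.
    The embeddings can come from any sentence encoder."""
    sims = exemplar_vecs @ query_vec
    sims = sims / (np.linalg.norm(exemplar_vecs, axis=1)
                   * np.linalg.norm(query_vec) + 1e-9)
    top = np.argsort(-sims)[:k]
    shots = "\n\n".join(f"Q: {exemplars[i][0]}\nA: {exemplars[i][1]}"
                        for i in top)
    return f"{shots}\n\nQ: {question}\nA:"
```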
Author:
Liu, Chengwu, Shen, Jianhao, Xin, Huajian, Liu, Zhengying, Yuan, Ye, Wang, Haiming, Ju, Wei, Zheng, Chuanyang, Yin, Yichun, Li, Lin, Zhang, Ming, Liu, Qun
We present FIMO, an innovative dataset comprising formal mathematical problem statements sourced from the International Mathematical Olympiad (IMO) Shortlisted Problems. Designed to facilitate advanced automated theorem proving at the IMO level, FIMO …
External link:
http://arxiv.org/abs/2309.04295
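For readers who have not seen formal mathematics, the toy Lean 4 statement below (using Mathlib) illustrates what a formalized statement and proof look like; it is far simpler than FIMO's IMO-level statements and is not taken from the dataset.

```lean
import Mathlib

-- Toy example only: the sum of two real squares is nonnegative.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 :=
  add_nonneg (sq_nonneg a) (sq_nonneg b)
```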
Author:
Li, Siheng, Yang, Cheng, Yin, Yichun, Zhu, Xinyu, Cheng, Zesen, Shang, Lifeng, Jiang, Xin, Liu, Qun, Yang, Yujiu
Information-seeking conversation, which aims to help users gather information through conversation, has achieved great progress in recent years. However, the research is still stymied by the scarcity of training data. To alleviate this problem, we pr…
External link:
http://arxiv.org/abs/2308.06507
Author:
Li, Siheng, Yin, Yichun, Yang, Cheng, Jiang, Wangjie, Li, Yiwei, Cheng, Zesen, Shang, Lifeng, Jiang, Xin, Liu, Qun, Yang, Yujiu
Hot news is one of the most popular topics in daily conversations. However, news grounded conversation has long been stymied by the lack of well-designed task definition and scarce data. In this paper, we propose a novel task, Proactive News Grounded Conversation …
External link:
http://arxiv.org/abs/2308.06501
Author:
Wan, Zhongwei, Yin, Yichun, Zhang, Wei, Shi, Jiaxin, Shang, Lifeng, Chen, Guangyong, Jiang, Xin, Liu, Qun
Recently, domain-specific PLMs have been proposed to boost the task performance of specific domains (e.g., biomedical and computer science) by continuing to pre-train general PLMs with domain-specific corpora. However, this Domain-Adaptive Pre-Training …
External link:
http://arxiv.org/abs/2212.03613
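The Domain-Adaptive Pre-Training recipe this snippet refers to is simply continued masked-language-model pretraining on a domain corpus. A minimal Hugging Face sketch of that baseline (the corpus path, checkpoint, and hyperparameters are illustrative, and this is the plain DAPT setup the abstract critiques, not the method the paper proposes):

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# one domain document per line; "biomed_corpus.txt" is a placeholder path
ds = load_dataset("text", data_files={"train": "biomed_corpus.txt"})["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-ckpt",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()  # continued pretraining on the domain corpus
```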
Recently, prompt tuning (PT) has gained increasing attention as a parameter-efficient way of tuning pre-trained language models (PLMs). Despite extensively reducing the number of tunable parameters and achieving satisfying performance, PT is training …
External link:
http://arxiv.org/abs/2211.06840
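Vanilla prompt tuning, the starting point of this snippet, freezes the PLM and trains only a short sequence of prepended "soft prompt" vectors. A minimal PyTorch sketch of that baseline (not the paper's improvement; the class and all sizes are illustrative):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prompt tuning: freeze the pretrained embedding layer and encoder,
    train only `prompt_len` prepended prompt vectors."""
    def __init__(self, embed: nn.Embedding, encoder: nn.Module,
                 prompt_len: int = 20):
        super().__init__()
        self.embed, self.encoder = embed, encoder
        for p in self.parameters():          # freeze all pretrained weights
            p.requires_grad_(False)
        self.prompt = nn.Parameter(0.02 * torch.randn(prompt_len,
                                                       embed.embedding_dim))

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.embed(input_ids)                       # (batch, seq, dim)
        prompt = self.prompt.expand(tok.size(0), -1, -1)  # share across batch
        return self.encoder(torch.cat([prompt, tok], dim=1))

# usage: only the prompt_len * dim prompt parameters are ever updated
dim = 64
model = SoftPrompt(
    nn.Embedding(1000, dim),
    nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, 4, batch_first=True),
                          num_layers=2))
optimizer = torch.optim.Adam([model.prompt], lr=1e-3)
```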