Showing 1 - 10 of 1,279 for search: '"LI Xiaoya"'
Author:
Wang, Shuhe, Zhang, Shengyu, Zhang, Jie, Hu, Runyi, Li, Xiaoya, Zhang, Tianwei, Li, Jiwei, Wu, Fei, Wang, Guoyin, Hovy, Eduard
This paper surveys research in the rapidly growing field of enhancing large language models (LLMs) with reinforcement learning (RL), a technique that enables LLMs to improve their performance by receiving feedback in the form of rewards based on the …
External link:
http://arxiv.org/abs/2412.10400
Author:
Wang, Shuhe, Cao, Beiming, Zhang, Shengyu, Li, Xiaoya, Li, Jiwei, Wu, Fei, Wang, Guoyin, Hovy, Eduard
Due to the lack of a large collection of high-quality labeled sentence pairs with textual similarity scores, existing approaches for Semantic Textual Similarity (STS) mostly rely on unsupervised techniques or training signals that are only partially …
External link:
http://arxiv.org/abs/2312.05603
Author:
Sun, Xiaofei, Li, Xiaoya, Zhang, Shengyu, Wang, Shuhe, Wu, Fei, Li, Jiwei, Zhang, Tianwei, Wang, Guoyin
A standard paradigm for sentiment analysis is to rely on a single LLM and make the decision in a single round under the framework of in-context learning. This framework suffers the key disadvantage that the single-turn output generated by a single …
External link:
http://arxiv.org/abs/2311.01876
Author:
Zhang, Shengyu, Dong, Linfeng, Li, Xiaoya, Zhang, Sen, Sun, Xiaofei, Wang, Shuhe, Li, Jiwei, Hu, Runyi, Zhang, Tianwei, Wu, Fei, Wang, Guoyin
This paper surveys research works in the quickly advancing field of instruction tuning (IT), which can also be referred to as supervised fine-tuning (SFT)\footnote{In this paper, unless specified otherwise, supervised fine-tuning (SFT) and instruction …
External link:
http://arxiv.org/abs/2308.10792
Author:
Sun, Xiaofei, Dong, Linfeng, Li, Xiaoya, Wan, Zhen, Wang, Shuhe, Zhang, Tianwei, Li, Jiwei, Cheng, Fei, Lyu, Lingjuan, Wu, Fei, Wang, Guoyin
Despite the success of ChatGPT, its performance on most NLP tasks is still well below the supervised baselines. In this work, we looked into the causes, and discovered that its subpar performance was caused by the following factors: (1) token limit …
External link:
http://arxiv.org/abs/2306.09719
Author:
Wang, Chuan, Liang, Yihan, Hu, Ronghao, He, Kai, Gao, Guilong, Yan, Xin, Yao, Dong, Wang, Tao, Li, Xiaoya, Tian, Jinshou, Zhu, Wenjun, Lv, Meng
Microscale imaging of mesoscale bulk materials under dynamic compression is important for understanding their properties. In this work, we study the effects of the depth of field (DoF) and field of view (FoV) of the optical lens and extract the scatt…
External link:
http://arxiv.org/abs/2306.08948
Millimeter wave (mmWave)-based unmanned aerial vehicle (UAV) communication is a promising candidate for future communications due to its flexibility and sufficient bandwidth. However, random fluctuations in the position of hovering UAVs will lead to …
External link:
http://arxiv.org/abs/2306.06405
Author:
Cui, Jinchuan, Li, Xiaoya
The airplane refueling problem is a nonlinear combinatorial optimization problem, and its equivalent problem, the $n$-vehicle exploration problem, has been proved to be NP-complete (arXiv:2304.03965v1, The $n$-vehicle exploration problem is NP-complete). In …
External link:
http://arxiv.org/abs/2305.12478
Despite the remarkable success of large-scale Language Models (LLMs) such as GPT-3, they still significantly underperform fine-tuned models on the task of text classification. This is due to (1) the lack of reasoning ability in addressing …
External link:
http://arxiv.org/abs/2305.08377
Author:
Wang, Shuhe, Sun, Xiaofei, Li, Xiaoya, Ouyang, Rongbin, Wu, Fei, Zhang, Tianwei, Li, Jiwei, Wang, Guoyin
Despite the fact that large-scale Language Models (LLMs) have achieved SOTA performance on a variety of NLP tasks, their performance on NER is still significantly below supervised baselines. This is due to the gap between the two tasks, NER and LLMs …
External link:
http://arxiv.org/abs/2304.10428