Showing 1 - 10
of 161
for search: '"Wang, Daling"'
Author:
Wang, Ming, Liu, Yuanzhong, Liang, Xiaoyu, Huang, Yijie, Wang, Daling, Yang, Xiaocui, Shen, Sijia, Feng, Shi, Zhang, Xiaoming, Guan, Chaofeng, Zhang, Yifei
LLMs have demonstrated commendable performance across diverse domains. Nevertheless, formulating high-quality prompts to assist them in their work poses a challenge for non-AI experts. Existing research in prompt engineering suggests somewhat scattered…
External link:
http://arxiv.org/abs/2409.13449
Multi-hop Question Answering (QA) necessitates complex reasoning by integrating multiple pieces of information to resolve intricate questions. However, existing QA systems encounter challenges such as outdated information and context window length limits…
External link:
http://arxiv.org/abs/2408.11875
Published in:
ECAI2024
Although large language models (LLMs) show amazing capabilities, various exciting applications discovered for LLMs fall short in low-resource languages. Besides, most existing methods depend on large-scale dialogue corpora, and thus building…
External link:
http://arxiv.org/abs/2408.08724
Competitive debate is a complex task of computational argumentation. Large Language Models (LLMs) suffer from hallucinations and lack competitiveness in this field. To address these challenges, we introduce Agent for Debate (Agent4Debate), a dynamic…
External link:
http://arxiv.org/abs/2408.04472
Author:
Zhang, Yiqun, Yang, Xiaocui, Xu, Xingle, Gao, Zeran, Huang, Yijie, Mu, Shiyi, Feng, Shi, Wang, Daling, Zhang, Yifei, Song, Kaisong, Yu, Ge
Affective Computing (AC), integrating knowledge from computer science, psychology, and cognitive science, aims to enable machines to recognize, interpret, and simulate human emotions. To create more value, AC can be applied to diverse scenarios, including…
External link:
http://arxiv.org/abs/2408.04638
Author:
Liu, Yongkang, Nie, Ercong, Feng, Shi, Hua, Zheng, Ding, Zifeng, Wang, Daling, Zhang, Yifei, Schütze, Hinrich
Published in:
2024 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
Current state-of-the-art dialogue systems heavily rely on extensive training datasets. However, challenges arise in domains where domain-specific training datasets are insufficient or entirely absent. To tackle this challenge, we propose a novel data…
External link:
http://arxiv.org/abs/2406.09881
Author:
Yang, Xiaocui, Wu, Wenfang, Feng, Shi, Wang, Ming, Wang, Daling, Li, Yang, Sun, Qi, Zhang, Yifei, Fu, Xiaoming, Poria, Soujanya
The rising popularity of multimodal large language models (MLLMs) has sparked a significant increase in research dedicated to evaluating these models. However, current evaluation studies predominantly concentrate on the ability of models to comprehend…
External link:
http://arxiv.org/abs/2405.07229
Author:
Wang, Zihan, Kong, Fanheng, Feng, Shi, Wang, Ming, Yang, Xiaocui, Zhao, Han, Wang, Daling, Zhang, Yifei
In the realm of time series forecasting (TSF), it is imperative for models to adeptly discern and distill hidden patterns within historical time series data to forecast future states. Transformer-based models exhibit formidable efficacy in TSF, primarily…
External link:
http://arxiv.org/abs/2403.11144
Author:
Wang, Ming, Liu, Yuanzhong, Liang, Xiaoyu, Li, Songlian, Huang, Yijie, Zhang, Xiaoming, Shen, Sijia, Guan, Chaofeng, Wang, Daling, Feng, Shi, Zhang, Huaiwen, Zhang, Yifei, Zheng, Minghui, Zhang, Chi
LLMs have demonstrated commendable performance across diverse domains. Nevertheless, formulating high-quality prompts to instruct LLMs proficiently poses a challenge for non-AI experts. Existing research in prompt engineering suggests somewhat scattered…
External link:
http://arxiv.org/abs/2402.16929
Author:
Liu, Yongkang, Zhang, Yiqun, Li, Qian, Liu, Tong, Feng, Shi, Wang, Daling, Zhang, Yifei, Schütze, Hinrich
Full-parameter fine-tuning has become the go-to choice for adapting language models (LMs) to downstream tasks due to its excellent performance. As LMs grow in size, fine-tuning the full parameters of LMs requires a prohibitively large amount of GPU memory…
External link:
http://arxiv.org/abs/2401.15207