Showing 1 - 10 of 15,576
for search: '"Jiang, Feng"'
Published in:
Nature Communications 15, 8573 (2024)
The idea that a material can exhibit negative compressibility is highly consequential for research and applications. As new forms for this effect are discovered, it is important to examine the range of possible mechanisms and ways to design them into …
External link:
http://arxiv.org/abs/2410.07489
Author:
Zhang, Chen, Chong, Dading, Jiang, Feng, Tang, Chengguang, Gao, Anningzhe, Tang, Guohua, Li, Haizhou
In natural human-to-human conversations, participants often receive feedback signals from one another based on their follow-up reactions. These reactions can include verbal responses, facial expressions, changes in emotional state, and other non-verbal …
External link:
http://arxiv.org/abs/2409.13948
Brain decoding that classifies cognitive states using the functional fluctuations of the brain can provide insightful information for understanding the brain mechanisms of cognitive functions. Among the common procedures of decoding the brain cognitive …
External link:
http://arxiv.org/abs/2407.08174
Author:
Xie, Wenya, Xiao, Qingying, Zheng, Yu, Wang, Xidong, Chen, Junying, Ji, Ke, Gao, Anningzhe, Wan, Xiang, Jiang, Feng, Wang, Benyou
The recent success of Large Language Models (LLMs) has had a significant impact on the healthcare field, providing patients with medical advice, diagnostic information, and more. However, due to a lack of professional medical knowledge, patients are …
External link:
http://arxiv.org/abs/2406.18034
Data selection for fine-tuning Large Language Models (LLMs) aims to select a high-quality subset from a given candidate dataset to train a Pending Fine-tune Model (PFM) into a Selective-Enhanced Model (SEM). It can improve the model performance and …
External link:
http://arxiv.org/abs/2406.14115
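The selection task described in this record can be illustrated with a minimal sketch: score each candidate training example and keep the top-k subset. The scoring heuristic below (length-weighted lexical variety) is a hypothetical stand-in for illustration only, not the criterion used in the paper.

```python
# Hypothetical sketch of quality-based data selection for fine-tuning:
# score each candidate example, then keep the top-k subset.
# quality_score is a toy proxy, not the paper's selection criterion.

def quality_score(example: dict) -> float:
    """Toy proxy: longer responses with more varied vocabulary rank higher."""
    words = example["response"].split()
    if not words:
        return 0.0
    unique_ratio = len(set(words)) / len(words)
    return len(words) * unique_ratio

def select_subset(candidates: list[dict], k: int) -> list[dict]:
    """Return the k highest-scoring candidate examples."""
    return sorted(candidates, key=quality_score, reverse=True)[:k]

candidates = [
    {"prompt": "a", "response": "yes"},
    {"prompt": "b", "response": "the answer depends on several factors"},
    {"prompt": "c", "response": "no no no no"},
]
subset = select_subset(candidates, k=1)
```

Any real pipeline would replace the proxy score with a learned or model-based quality signal; the structure (score, rank, truncate) stays the same.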
Author:
Zhang, Chen, Tang, Chengguang, Chong, Dading, Shi, Ke, Tang, Guohua, Jiang, Feng, Li, Haizhou
Mainstream approaches to aligning large language models (LLMs) heavily rely on human preference data, particularly when models require periodic updates. The standard process for iterative alignment of LLMs involves collecting new human feedback for …
External link:
http://arxiv.org/abs/2405.20215
The advancement of large language models (LLMs) has propelled the development of dialogue systems. Unlike the popular ChatGPT-like assistant model, which only satisfies the user's preferences, task-oriented dialogue systems have also faced new requirements …
External link:
http://arxiv.org/abs/2405.19799
No-Reference Image Quality Assessment (NR-IQA) aims at estimating image quality in accordance with subjective human perception. However, most methods focus on exploring increasingly complex networks to improve the final performance, accompanied by limited …
External link:
http://arxiv.org/abs/2404.17170
Convolutional Neural Network (CNN) and Transformer have attracted much attention recently for video post-processing (VPP). However, the interaction between CNN and Transformer in existing VPP methods is not fully explored, leading to inefficient communication …
External link:
http://arxiv.org/abs/2404.14709
$R^3$: 'This is My SQL, Are You With Me?' A Consensus-Based Multi-Agent System for Text-to-SQL Tasks
Author:
Xia, Hanchen, Jiang, Feng, Deng, Naihao, Wang, Cunxiang, Zhao, Guojiang, Mihalcea, Rada, Zhang, Yue
Large Language Models (LLMs) have demonstrated strong performance on various tasks. To unleash their power on the Text-to-SQL task, we propose $R^3$ (Review-Rebuttal-Revision), a consensus-based multi-agent system for Text-to-SQL tasks. $R^3$ outperforms …
External link:
http://arxiv.org/abs/2402.14851
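The general shape of a consensus-based multi-agent loop in the spirit of Review-Rebuttal-Revision can be sketched as follows. The writer and reviewer agents here are stubs invented for illustration; they are not the paper's actual prompts, models, or voting rule.

```python
# Hypothetical sketch of a review-rebuttal-revision consensus loop:
# a writer agent drafts SQL, reviewer agents vote, and the draft is
# revised with their feedback until a majority accepts or rounds run out.
# All agents below are stubs for illustration, not the paper's method.
from collections import Counter

def consensus_loop(write, reviewers, question, max_rounds=3):
    draft = write(question, feedback=None)
    for _ in range(max_rounds):
        votes = [review(question, draft) for review in reviewers]
        verdict, count = Counter(votes).most_common(1)[0]
        if verdict == "accept" and count > len(reviewers) // 2:
            return draft  # majority consensus reached
        draft = write(question, feedback=votes)  # revise using feedback
    return draft  # round limit hit; return the latest draft

# Stub agents (hypothetical): a real system would call LLMs here.
def write(question, feedback):
    if feedback is None:
        return "SELECT name FROM users"
    return "SELECT name FROM users WHERE active = 1"

def review(question, draft):
    return "accept" if "WHERE" in draft else "revise"

sql = consensus_loop(write, [review, review, review], "names of active users?")
```

The loop structure (draft, vote, revise) is the reusable part; swapping the stubs for LLM calls and a real SQL-validity check would be the natural next step.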