Showing 1 - 10
of 45,968
for search: '"Wu, Wei"'
Large language models (LLMs) have become an integral tool for users from various backgrounds. LLMs, trained on vast corpora, reflect the linguistic and cultural nuances embedded in their pre-training data. However, the values and perspectives inherent…
External link:
http://arxiv.org/abs/2410.11647
Published in:
2024 KDD Cup Workshop for Retrieval Augmented Generation at the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Hallucination is a key roadblock for applications of Large Language Models (LLMs), particularly for enterprise applications that are sensitive to information accuracy. To address this issue, two general approaches have been explored: Retrieval-Augmen…
External link:
http://arxiv.org/abs/2410.09699
Author:
Wu, Wei, Zheng, Kecheng, Ma, Shuailei, Lu, Fan, Guo, Yuxin, Zhang, Yifei, Chen, Wei, Guo, Qingpei, Shen, Yujun, Zha, Zheng-Jun
Understanding long text is in great demand in practice but beyond the reach of most language-image pre-training (LIP) models. In this work, we empirically confirm that the key reason for this issue is that the training images are usually pair…
External link:
http://arxiv.org/abs/2410.05249
Author:
Wu, Wei, Wang, Chao, Chen, Liyi, Yin, Mingze, Zhu, Yiheng, Fu, Kun, Ye, Jieping, Xiong, Hui, Wang, Zheng
Proteins, as essential biomolecules, play a central role in biological processes, including metabolic reactions and DNA replication. Accurate prediction of their properties and functions is crucial in biological applications. Recent development of pr…
External link:
http://arxiv.org/abs/2410.03553
Modeling the nonlinear dynamics of neuronal populations represents a key pursuit in computational neuroscience. Recent research has increasingly focused on jointly modeling neural activity and behavior to unravel their interconnections. Despite signi…
External link:
http://arxiv.org/abs/2410.13872
Recently, retrieval-based language models (RLMs) have received much attention. However, most of them leverage a pre-trained retriever with fixed parameters, which may not adapt well to causal language models. In this work, we propose Grouped Cross-At…
External link:
http://arxiv.org/abs/2410.01651
Tables are ubiquitous across various domains for concisely representing structured information. Empowering large language models (LLMs) to reason over tabular data represents an actively explored direction. However, since typical LLMs only support on…
External link:
http://arxiv.org/abs/2409.19700
Author:
Zhu, Zehao, Sun, Wei, Jia, Jun, Wu, Wei, Deng, Sibin, Li, Kai, Chen, Ying, Min, Xiongkuo, Wang, Jia, Zhai, Guangtao
In recent years, live video streaming has gained widespread popularity across various social media platforms. Quality of experience (QoE), which reflects end-users' satisfaction and overall experience, plays a critical role for media service provider…
External link:
http://arxiv.org/abs/2409.17596
Despite the remarkable success of large language models (LLMs) on traditional natural language processing tasks, their planning ability remains a critical bottleneck in tackling complex multi-step reasoning tasks. Existing approaches mainly rely on p…
External link:
http://arxiv.org/abs/2409.12452
Current end-to-end autonomous driving methods resort to unifying modular designs for various tasks (e.g., perception, prediction, and planning). Although optimized in a planning-oriented spirit with a fully differentiable framework, existing end-to-end…
External link:
http://arxiv.org/abs/2409.09777