Showing 1 - 10 of 1,019 results for the search: '"GAO Shen"'
Published in:
IEEE Access, Vol 9, Pp 54938-54950 (2021)
In this paper, we propose two deep-learning based uplink channel estimation approaches that can utilize not only high-resolution-ADC-quantized but also low-resolution-ADC-quantized received pilot signals to improve estimation performance for mixed an…
External link:
https://doaj.org/article/6e605e0c864147779eb6513aa46d906b
In hierarchical cognitive radio networks, edge or cloud servers utilize the data collected by edge devices for modulation classification, which, however, is faced with problems of computation load, transmission overhead, and data privacy. In this…
External link:
http://arxiv.org/abs/2407.20772
Author:
Huang, Chengrui, Shi, Zhengliang, Wen, Yuntao, Chen, Xiuying, Han, Peng, Gao, Shen, Shang, Shuo
Tool learning methods have enhanced the ability of large language models (LLMs) to interact with real-world applications. Many existing works fine-tune LLMs or design prompts to enable LLMs to select appropriate tools and correctly invoke them to mee…
External link:
http://arxiv.org/abs/2407.03007
Most economic theories typically assume that financial market participants are fully rational individuals and use mathematical models to simulate human behavior in financial markets. However, human behavior is often not entirely rational and is chall…
External link:
http://arxiv.org/abs/2406.19966
Multi-Hop Question Answering (MHQA) tasks present a significant challenge for large language models (LLMs) due to the intensive knowledge required. Current solutions, like Retrieval-Augmented Generation, typically retrieve potential documents from an…
External link:
http://arxiv.org/abs/2406.14891
Nowadays, neural text generation has made tremendous progress in abstractive summarization tasks. However, most of the existing summarization models take in the whole document all at once, which sometimes cannot meet the needs in practice. Practicall…
External link:
http://arxiv.org/abs/2406.05361
Author:
Chen, Xiuying, Li, Mingzhe, Gao, Shen, Cheng, Xin, Zhu, Qingqing, Yan, Rui, Gao, Xin, Zhang, Xiangliang
A proficient summarization model should exhibit both flexibility -- the capacity to handle a range of in-domain summarization tasks -- and adaptability -- the competence to acquire new knowledge and adjust to unseen out-of-domain tasks. Unlike large la…
External link:
http://arxiv.org/abs/2406.05360
Author:
Shi, Zhengliang, Gao, Shen, Chen, Xiuyi, Feng, Yue, Yan, Lingyong, Shi, Haibo, Yin, Dawei, Chen, Zhumin, Verberne, Suzan, Ren, Zhaochun
Augmenting large language models (LLMs) with external tools has emerged as a promising approach to extend their utility, empowering them to solve practical tasks. Existing work typically empowers LLMs as tool users with a manually designed workflow, …
External link:
http://arxiv.org/abs/2405.16533
Recommendation systems play a crucial role in various domains, suggesting items based on user behavior. However, the lack of transparency in presenting recommendations can lead to user confusion. In this paper, we introduce Data-level Recommendation E…
External link:
http://arxiv.org/abs/2404.06311
Large language model agents have demonstrated remarkable advancements across various complex tasks. Recent works focus on optimizing the agent team or employing self-reflection to iteratively solve complex tasks. Since these agents are all based on t…
External link:
http://arxiv.org/abs/2404.05569