Showing 1 - 10 of 9,295
for the search: '"TANG, BO"'
Influence maximization (IM) is a classic problem that aims to identify a small group of critical individuals, known as seeds, who can influence the largest number of users in a social network through word-of-mouth. This problem finds important applic…
External link:
http://arxiv.org/abs/2410.16603
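The seed-selection problem described in the snippet above can be sketched with the standard greedy heuristic under the independent-cascade model. This is a generic textbook baseline, not the method of the linked paper; the graph layout, propagation probability `p`, and `trials` count are all illustrative choices:

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One independent-cascade simulation; returns the number of activated nodes.

    graph: dict mapping node -> list of successor nodes.
    Each newly activated node gets one chance to activate each neighbor
    with probability p.
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, trials=200, seed=0):
    """Pick k seeds greedily by Monte Carlo estimates of marginal spread."""
    rng = random.Random(seed)  # fixed seed for reproducible estimates
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best, best_gain = None, -1.0
        for cand in nodes - set(seeds):
            gain = sum(simulate_ic(graph, seeds + [cand], p, rng)
                       for _ in range(trials)) / trials
            if gain > best_gain:
                best, best_gain = cand, gain
        seeds.append(best)
    return seeds
```

On a small star graph, the hub is picked first because its expected spread dominates every leaf's:

```python
star = {0: [1, 2, 3, 4, 5], 1: [], 2: [], 3: [], 4: [], 5: []}
greedy_im(star, 1, p=0.5)  # -> [0]
```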
Retrieval-Augmented Generation (RAG), while serving as a viable complement to large language models (LLMs), often overlooks the crucial aspect of text chunking within its pipeline, which impacts the quality of knowledge-intensive tasks. This paper in…
External link:
http://arxiv.org/abs/2410.12788
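The text-chunking step that the snippet above says RAG pipelines often overlook is commonly implemented, in its simplest form, as a fixed-size sliding window with overlap. The sketch below is that generic baseline, not the linked paper's method; `max_chars` and `overlap` are arbitrary illustrative values:

```python
def chunk_text(text, max_chars=200, overlap=40):
    """Split text into overlapping fixed-size chunks for retrieval.

    Prefers to break at the last whitespace inside the window so words
    stay intact; consecutive chunks share `overlap` characters so that
    context spanning a boundary is still retrievable.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        if end < len(text):
            # back off to the last space inside the window, if any
            space = text.rfind(" ", start, end)
            if space > start:
                end = space
        chunks.append(text[start:end].strip())
        if end == len(text):
            break
        start = max(end - overlap, start + 1)
    return chunks
```

Naive fixed windows like this are exactly what chunking-focused work tries to improve on, since they can split a sentence or fact across two chunks.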
Mixed-integer non-linear programs (MINLPs) arise in various domains, such as energy systems and transportation, but are notoriously difficult to solve. Recent advances in machine learning have led to remarkable successes in optimization tasks, an are…
External link:
http://arxiv.org/abs/2410.11061
Author:
Zheng, Zifan, Wang, Yezhaohui, Huang, Yuxin, Song, Shichao, Yang, Mingchuan, Tang, Bo, Xiong, Feiyu, Li, Zhiyu
Since the advent of ChatGPT, Large Language Models (LLMs) have excelled in various tasks but remain black-box systems. Consequently, the reasoning bottlenecks of LLMs are mainly influenced by their internal architecture. As a result, many research…
External link:
http://arxiv.org/abs/2409.03752
Author:
Wu, Yiquan, Tang, Bo, Xi, Chenyang, Yu, Yu, Wang, Pengyu, Liu, Yifei, Kuang, Kun, Deng, Haiying, Li, Zhiyu, Xiong, Feiyu, Hu, Jie, Cheng, Peng, Wang, Zhonghao, Wang, Yi, Luo, Yi, Yang, Mingchuan
Commentary provides readers with a deep understanding of events by presenting diverse arguments and evidence. However, creating commentary is a time-consuming task, even for skilled commentators. Large language models (LLMs) have simplified the proce…
External link:
http://arxiv.org/abs/2408.11609
Author:
Yang, Hongkang, Lin, Zehao, Wang, Wenjin, Wu, Hao, Li, Zhiyu, Tang, Bo, Wei, Wenqiang, Wang, Jinbo, Tang, Zeyun, Song, Shichao, Xi, Chenyang, Yu, Yu, Chen, Kai, Xiong, Feiyu, Tang, Linpeng, E, Weinan
The training and inference of large language models (LLMs) are together a costly process that transports knowledge from raw data to meaningful computation. Inspired by the memory hierarchy of the human brain, we reduce this cost by equipping LLMs wit…
External link:
http://arxiv.org/abs/2407.01178
Author:
Zhu, Junyi, Liu, Shuochen, Yu, Yu, Tang, Bo, Yan, Yibo, Li, Zhiyu, Xiong, Feiyu, Xu, Tong, Blaschko, Matthew B.
Large language models (LLMs) excel in generating coherent text, but they often struggle with context awareness, leading to inaccuracies in tasks requiring faithful adherence to provided information. We introduce FastMem, a novel method designed to en…
External link:
http://arxiv.org/abs/2406.16069
Offline reinforcement learning (RL) can learn optimal policies from pre-collected offline datasets without interacting with the environment, but the sampled actions of the agent cannot often cover the action distribution under a given state, resultin…
External link:
http://arxiv.org/abs/2406.09089
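The coverage problem described in the snippet above, where the dataset's actions do not span the action distribution of a state, can be illustrated with a tiny tabular sketch: fitted Q-iteration on a fixed dataset that penalizes actions never observed in a state instead of trusting their Q-values. This is a generic conservatism trick (in the spirit of pessimistic offline RL), not the linked paper's method; the penalty and learning rate are illustrative:

```python
from collections import defaultdict

def fitted_q_offline(dataset, actions, gamma=0.9, penalty=1.0, iters=50):
    """Tabular fitted Q-iteration on a fixed (s, a, r, s') dataset.

    Conservatism: when computing the bootstrap value of a state, actions
    never seen in that state in the dataset are penalized, so the greedy
    policy stays in-distribution. s' is None for terminal transitions.
    """
    Q = defaultdict(float)
    seen = defaultdict(set)
    for s, a, r, s2 in dataset:
        seen[s].add(a)

    def value(s):
        # out-of-distribution actions get Q - penalty instead of Q
        return max(Q[(s, a)] if a in seen[s] else Q[(s, a)] - penalty
                   for a in actions)

    for _ in range(iters):
        for s, a, r, s2 in dataset:
            target = r + (gamma * value(s2) if s2 is not None else 0.0)
            Q[(s, a)] += 0.5 * (target - Q[(s, a)])  # lr = 0.5
    return Q, seen
```

With a two-transition dataset, the Q-values converge to the discounted returns of the logged actions, while the never-logged action keeps its pessimistic default and is not selected by the greedy policy.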
The rapid advancement of 5G networks and the upcoming transition to 6G necessitate the use of the Open Radio Access Network (O-RAN) architecture to enable greater flexibility, interoperability, and innovation. This shift towards 6G and O-RAN requires…
External link:
http://arxiv.org/abs/2405.19480
Author:
Liang, Xun, Niu, Simin, Li, Zhiyu, Zhang, Sensen, Song, Shichao, Wang, Hanyu, Yang, Jiawei, Xiong, Feiyu, Tang, Bo, Xi, Chenyang
Retrieval-Augmented Generation (RAG) offers a cost-effective approach to injecting real-time knowledge into large language models (LLMs). Nevertheless, constructing and validating high-quality knowledge repositories require considerable effort. We pr…
External link:
http://arxiv.org/abs/2405.16933