Showing 1 - 10 of 315 for search: '"Chen, Siheng"'
Author:
Huang, Xun, Wang, Jinlong, Xia, Qiming, Chen, Siheng, Yang, Bisheng, Wang, Cheng, Wen, Chenglu
Current Vehicle-to-Everything (V2X) systems have significantly enhanced 3D object detection using LiDAR and camera data. However, these methods suffer from performance degradation in adverse weather conditions. The weather-robust 4D radar provides Dop
External link:
http://arxiv.org/abs/2411.08402
Author:
Hu, Yue, Cai, Yuzhu, Du, Yaxin, Zhu, Xinyu, Liu, Xiangrui, Yu, Zijie, Hou, Yuchen, Tang, Shuo, Chen, Siheng
LLM-driven multi-agent collaboration (MAC) systems have demonstrated impressive capabilities in automatic software development at the function level. However, their heavy reliance on human design limits their adaptability to the diverse demands of re
External link:
http://arxiv.org/abs/2410.16946
Author:
Tang, Shuo, Pang, Xianghe, Liu, Zexi, Tang, Bohan, Ye, Rui, Dong, Xiaowen, Wang, Yanfeng, Chen, Siheng
Post-training is essential for enabling large language models (LLMs) to follow human instructions. Inspired by the recent success of using LLMs to simulate human society, we leverage multi-agent simulation to automatically generate diverse text-based
External link:
http://arxiv.org/abs/2410.14251
By leveraging massively distributed data, federated learning (FL) enables collaborative instruction tuning of large language models (LLMs) in a privacy-preserving way. While FL effectively expands the data quantity, the issue of data quality remains
External link:
http://arxiv.org/abs/2410.11540
The success of large language models (LLMs) has led many parties to fine-tune LLMs on their own private data. However, this practice raises privacy concerns due to the memorization of LLMs. Existing solutions, such as utilizing synthetic data for
External link:
http://arxiv.org/abs/2410.05725
Federated Domain-specific Instruction Tuning (FedDIT) utilizes limited cross-client private data together with server-side public data for instruction augmentation, ultimately boosting model performance within specific domains. To date, the factors a
External link:
http://arxiv.org/abs/2409.20135
Federated instruction tuning enables multiple clients to collaboratively fine-tune a shared large language model (LLM) that can follow humans' instructions without directly sharing raw data. However, existing literature impractically requires that al
External link:
http://arxiv.org/abs/2409.07136
Collaborative perception has garnered considerable attention due to its capacity to address several inherent challenges in single-agent perception, including occlusion and out-of-range issues. However, existing collaborative perception systems heavil
External link:
http://arxiv.org/abs/2406.12712
Federated learning (FL) enables multiple parties to collaboratively fine-tune a large language model (LLM) without the need for direct data sharing. Ideally, by training on decentralized data that is aligned with human preferences and safety principl
External link:
http://arxiv.org/abs/2406.10630
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities for tasks such as social networks and medical data analysis. Despite their successes, GNNs are vulnerable to adver
External link:
http://arxiv.org/abs/2406.07917