Showing 1 - 10 of 22,749 results for search: '"Yaxin, An"'
Author:
Kurdak, Deniz, Banner, Patrick R., Li, Yaxin, Muleady, Sean R., Gorshkov, Alexey V., Rolston, S. L., Porto, J. V.
Experimental control over the strength and angular dependence of interactions between atoms is a key capability for advancing quantum technologies. Here, we use microwave dressing to manipulate and enhance Rydberg-Rydberg interactions in an atomic en
External link:
http://arxiv.org/abs/2411.08236
The advanced role-playing capabilities of Large Language Models (LLMs) have paved the way for developing Role-Playing Agents (RPAs). However, existing benchmarks, such as HPD, which incorporates manually scored character relationships into the contex
External link:
http://arxiv.org/abs/2411.07965
With the rapid growth of digital information, personalized recommendation systems have become an indispensable part of Internet services, especially in the fields of e-commerce, social media, and online entertainment. However, traditional collaborati
External link:
http://arxiv.org/abs/2411.06374
In this paper, a discrete reconfigurable intelligent surface (RIS)-assisted spatial shift keying (SSK) multiple-input multiple-output (MIMO) scheme is investigated, in which a direct link between the transmitter and the receiver is considered. To imp
External link:
http://arxiv.org/abs/2411.00373
Author:
Hu, Yue, Cai, Yuzhu, Du, Yaxin, Zhu, Xinyu, Liu, Xiangrui, Yu, Zijie, Hou, Yuchen, Tang, Shuo, Chen, Siheng
LLM-driven multi-agent collaboration (MAC) systems have demonstrated impressive capabilities in automatic software development at the function level. However, their heavy reliance on human design limits their adaptability to the diverse demands of re
External link:
http://arxiv.org/abs/2410.16946
Despite the significant progress in multimodal large language models (MLLMs), their high computational cost remains a barrier to real-world deployment. Inspired by the mixture of depths (MoDs) in natural language processing, we aim to address this li
External link:
http://arxiv.org/abs/2410.13859
By leveraging massively distributed data, federated learning (FL) enables collaborative instruction tuning of large language models (LLMs) in a privacy-preserving way. While FL effectively expands the data quantity, the issue of data quality remains
External link:
http://arxiv.org/abs/2410.11540
Federated Domain-specific Instruction Tuning (FedDIT) utilizes limited cross-client private data together with server-side public data for instruction augmentation, ultimately boosting model performance within specific domains. To date, the factors a
External link:
http://arxiv.org/abs/2409.20135
Automated red teaming is an effective method for identifying misaligned behaviors in large language models (LLMs). Existing approaches, however, often focus primarily on improving attack success rates while overlooking the need for comprehensive test
External link:
http://arxiv.org/abs/2409.16783
Author:
Zhu, Minjie, Zhu, Yichen, Li, Jinming, Wen, Junjie, Xu, Zhiyuan, Liu, Ning, Cheng, Ran, Shen, Chaomin, Peng, Yaxin, Feng, Feifei, Tang, Jian
Diffusion Policy is a powerful technique for learning end-to-end visuomotor robot control. It is expected that Diffusion Policy possesses scalability, a key attribute for deep neural networks, typically suggesting that increasing model size woul
External link:
http://arxiv.org/abs/2409.14411