Showing 1 - 7
of 7
for search: '"Ling, Zixuan"'
Author:
Lv, Changze, Gu, Yufei, Guo, Zhengkang, Xu, Zhibo, Wu, Yixin, Zhang, Feiran, Shi, Tianyuan, Wang, Zhenghua, Yin, Ruicheng, Shang, Yu, Zhong, Siqi, Wang, Xiaohua, Wu, Muling, Liu, Wenhao, Li, Tianlong, Zhu, Jianhao, Zhang, Cenyuan, Ling, Zixuan, Zheng, Xiaoqing
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning, which uses a gradient descent method to update network weights by minimizing the discrepancy between actual and desired outputs. Despite its pivotal role...
External link:
http://arxiv.org/abs/2406.16062
Author:
Zhu, JianHao, Lv, Changze, Wang, Xiaohua, Wu, Muling, Liu, Wenhao, Li, Tianlong, Ling, Zixuan, Zhang, Cenyuan, Zheng, Xiaoqing, Huang, Xuanjing
Conventional federated learning primarily aims to secure the privacy of data distributed across multiple edge devices, with the global model dispatched to edge devices for parameter updates during the learning process. However, the development of lar...
External link:
http://arxiv.org/abs/2406.10976
Author:
Zhang, Cenyuan, Zheng, Xiaoqing, Yin, Ruicheng, Geng, Shujie, Xu, Jianhan, Gao, Xuan, Lv, Changze, Ling, Zixuan, Huang, Xuanjing, Cao, Miao, Feng, Jianfeng
Deciphering natural language from brain activity through non-invasive devices remains a formidable challenge. Previous non-invasive decoders either require multiple experiments with identical stimuli to pinpoint cortical regions and enhance signal-to...
External link:
http://arxiv.org/abs/2403.11183
Author:
Wu, Muling, Liu, Wenhao, Wang, Xiaohua, Li, Tianlong, Lv, Changze, Ling, Zixuan, Zhu, Jianhao, Zhang, Cenyuan, Zheng, Xiaoqing, Huang, Xuanjing
Parameter Efficient Fine-Tuning (PEFT) techniques have drawn significant attention due to their ability to yield competitive results while updating only a small portion of the adjustable parameters. However, existing PEFT methods pose challenges in h...
External link:
http://arxiv.org/abs/2402.15179
Author:
Liu, Wenhao, Wang, Xiaohua, Wu, Muling, Li, Tianlong, Lv, Changze, Ling, Zixuan, Zhu, Jianhao, Zhang, Cenyuan, Zheng, Xiaoqing, Huang, Xuanjing
Aligning large language models (LLMs) with human preferences is crucial for enhancing their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness. Existing methods for achieving this alignment often involve employi...
External link:
http://arxiv.org/abs/2312.15997
Tailoring Personality Traits in Large Language Models via Unsupervisedly-Built Personalized Lexicons
Author:
Li, Tianlong, Dou, Shihan, Lv, Changze, Liu, Wenhao, Xu, Jianhan, Wu, Muling, Ling, Zixuan, Zheng, Xiaoqing, Huang, Xuanjing
Personality plays a pivotal role in shaping human expression patterns, thus regulating the personality of large language models (LLMs) holds significant potential in enhancing the user experience of LLMs. Previous methods either relied on fine-tuning...
External link:
http://arxiv.org/abs/2310.16582
Author:
Lv, Changze, Li, Tianlong, Xu, Jianhan, Gu, Chenxi, Ling, Zixuan, Zhang, Cenyuan, Zheng, Xiaoqing, Huang, Xuanjing
Spiking neural networks (SNNs) offer a promising avenue to implement deep neural networks in a more energy-efficient way. However, the network architectures of existing SNNs for language tasks are still simplistic and relatively shallow, and deep arc...
External link:
http://arxiv.org/abs/2308.15122