Showing 1 - 10 of 14 for search: '"Lv, Changze"'
Author:
Dou, Shihan, Zhang, Jiazheng, Zang, Jianxiang, Tao, Yunbo, Jia, Haoxiang, Liu, Shichun, Yang, Yuming, Wu, Shenxi, Zhang, Shaoqing, Wu, Muling, Lv, Changze, Xiong, Limao, Zhan, Wenyu, Zhang, Lin, Weng, Rongxiang, Wang, Jingang, Cai, Xunliang, Wu, Yueming, Wen, Ming, Zheng, Rui, Ji, Tao, Cao, Yixin, Gui, Tao, Qiu, Xipeng, Zhang, Qi, Huang, Xuanjing
We introduce MPLSandbox, an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler and analysis tools for Large Language Models (LLMs). It can automatically identify the programming language…
External link:
http://arxiv.org/abs/2410.23074
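The abstract sketches a pipeline that detects a program's language and routes it to a matching compiler or analyzer. A minimal illustration of that general idea follows; this is not the MPLSandbox API, and the checker table and helper name are invented for the example.

    # Minimal sketch of a "detect language, run checker, return feedback" loop.
    # Not the MPLSandbox API; the checker table and helper name are illustrative.
    import subprocess
    from pathlib import Path

    CHECKERS = {
        ".py": ["python", "-m", "py_compile"],   # language identified here only
        ".c":  ["gcc", "-fsyntax-only"],         # by file extension, for brevity
    }

    def unified_feedback(source_file: str) -> str:
        """Pick a checker by extension and return its diagnostics as one string."""
        cmd = CHECKERS.get(Path(source_file).suffix)
        if cmd is None:
            return "unsupported language"
        result = subprocess.run(cmd + [source_file], capture_output=True, text=True)
        return result.stderr.strip() or "ok"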
Author:
Wang, Xiaohua, Wang, Zhenghua, Gao, Xuan, Zhang, Feiran, Wu, Yixin, Xu, Zhibo, Shi, Tianyuan, Wang, Zhengyuan, Li, Shizheng, Qian, Qi, Yin, Ruicheng, Lv, Changze, Zheng, Xiaoqing, Huang, Xuanjing
Retrieval-augmented generation (RAG) techniques have proven to be effective in integrating up-to-date information, mitigating hallucinations, and enhancing response quality, particularly in specialized domains. While many RAG approaches have been proposed…
External link:
http://arxiv.org/abs/2407.01219
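For orientation, the retrieve-then-generate pattern this snippet refers to reduces to a few lines; the overlap scorer and the generate callback below are placeholder stand-ins, not the approaches compared in the paper.

    # Bare-bones RAG loop: retrieve supporting passages, prepend them to the
    # prompt, then generate. Scoring and generation are illustrative stand-ins.
    def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
        score = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(corpus, key=score, reverse=True)[:k]

    def rag_answer(query: str, corpus: list[str], generate) -> str:
        context = "\n".join(retrieve(query, corpus))
        return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")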
Spiking neural networks (SNNs) offer a promising pathway to implement deep neural networks (DNNs) in a more energy-efficient manner since their neurons are sparsely activated and inferences are event-driven. However, there have been very few works that…
External link:
http://arxiv.org/abs/2406.19230
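The sparse, event-driven behavior the abstract mentions comes from spiking units such as the textbook leaky integrate-and-fire neuron sketched below; the decay and threshold constants are illustrative, and this is not the paper's specific model.

    # One leaky integrate-and-fire (LIF) step: leak the membrane potential,
    # integrate the input, and emit a binary spike only on a threshold crossing.
    def lif_step(v: float, current: float, decay: float = 0.9, threshold: float = 1.0):
        v = decay * v + current
        if v >= threshold:
            return 1, 0.0        # fire a spike and reset the membrane
        return 0, v              # otherwise stay silent (sparse activity)

    v, spikes = 0.0, []
    for current in [0.3, 0.5, 0.6, 0.1]:   # example inputs over four timesteps
        s, v = lif_step(v, current)
        spikes.append(s)                   # spikes == [0, 0, 1, 0]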
Author:
Lv, Changze, Gu, Yufei, Guo, Zhengkang, Xu, Zhibo, Wu, Yixin, Zhang, Feiran, Shi, Tianyuan, Wang, Zhenghua, Yin, Ruicheng, Shang, Yu, Zhong, Siqi, Wang, Xiaohua, Wu, Muling, Liu, Wenhao, Li, Tianlong, Zhu, Jianhao, Zhang, Cenyuan, Ling, Zixuan, Zheng, Xiaoqing
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning, which uses a gradient descent method to update network weights by minimizing the discrepancy between actual and desired outputs. Despite its pivotal role…
External link:
http://arxiv.org/abs/2406.16062
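The snippet's description of backpropagation, gradient descent shrinking the gap between actual and desired outputs, reduces for a single linear neuron to the update below; this is the textbook baseline the paper departs from, not the paper's alternative.

    # Gradient-descent update for y = w*x + b under squared error 0.5*(y - t)^2.
    def sgd_step(w: float, b: float, x: float, target: float, lr: float = 0.1):
        error = (w * x + b) - target   # discrepancy: actual minus desired output
        w -= lr * error * x            # dL/dw = error * x
        b -= lr * error                # dL/db = error
        return w, b

    w, b = 0.0, 0.0
    for _ in range(50):                # repeated steps drive the error toward zero
        w, b = sgd_step(w, b, x=2.0, target=1.0)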
Author:
Zhu, JianHao, Lv, Changze, Wang, Xiaohua, Wu, Muling, Liu, Wenhao, Li, Tianlong, Ling, Zixuan, Zhang, Cenyuan, Zheng, Xiaoqing, Huang, Xuanjing
Conventional federated learning primarily aims to secure the privacy of data distributed across multiple edge devices, with the global model dispatched to edge devices for parameter updates during the learning process. However, the development of large…
External link:
http://arxiv.org/abs/2406.10976
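The dispatch-update-aggregate loop the abstract describes is, in its plainest form, federated averaging. The sketch below assumes a client object exposing a train method and uses unweighted averaging; both are illustrative choices, not the paper's protocol.

    # One round of plain federated averaging (FedAvg-style, unweighted).
    def federated_round(global_weights: list[float], clients) -> list[float]:
        updates = []
        for client in clients:
            local = list(global_weights)          # dispatch a copy to the edge device
            updates.append(client.train(local))   # local update on private data
        n = len(updates)
        return [sum(ws) / n for ws in zip(*updates)]  # server aggregates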
Spiking neural networks (SNNs) represent a promising approach to developing artificial neural networks that are both energy-efficient and biologically plausible. However, applying SNNs to sequential tasks, such as text classification and time-series…
External link:
http://arxiv.org/abs/2405.14362
Author:
Zhang, Cenyuan, Zheng, Xiaoqing, Yin, Ruicheng, Geng, Shujie, Xu, Jianhan, Gao, Xuan, Lv, Changze, Ling, Zixuan, Huang, Xuanjing, Cao, Miao, Feng, Jianfeng
Deciphering natural language from brain activity through non-invasive devices remains a formidable challenge. Previous non-invasive decoders either require multiple experiments with identical stimuli to pinpoint cortical regions and enhance signal-to-noise…
External link:
http://arxiv.org/abs/2403.11183
Author:
Wu, Muling, Liu, Wenhao, Wang, Xiaohua, Li, Tianlong, Lv, Changze, Ling, Zixuan, Zhu, Jianhao, Zhang, Cenyuan, Zheng, Xiaoqing, Huang, Xuanjing
Parameter Efficient Fine-Tuning (PEFT) techniques have drawn significant attention due to their ability to yield competitive results while updating only a small portion of the adjustable parameters. However, existing PEFT methods pose challenges in h…
External link:
http://arxiv.org/abs/2402.15179
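"Updating only a small portion of the adjustable parameters" is easiest to see in a LoRA-style sketch: freeze the large pretrained weight and train a small low-rank correction. The shapes and rank below are illustrative, and this is a generic PEFT baseline, not the method the paper proposes.

    # Freeze the d*d pretrained weight; train only the 2*d*r adapter factors.
    import numpy as np

    d, r = 512, 4
    W = np.random.randn(d, d)            # pretrained weight, kept frozen
    A = np.random.randn(d, r) * 0.01     # trainable low-rank factor
    B = np.zeros((r, d))                 # starts at zero so the base is unchanged

    def forward(x: np.ndarray) -> np.ndarray:
        return x @ W + x @ A @ B         # gradients would flow only into A and B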
Spiking neural networks (SNNs), inspired by the spiking behavior of biological neurons, provide a unique pathway for capturing the intricacies of temporal data. However, applying SNNs to time-series forecasting is challenging due to difficulties in e…
External link:
http://arxiv.org/abs/2402.01533
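One standard hurdle the snippet alludes to is turning a continuous series into spike events at all. A common generic scheme is delta (threshold-crossing) encoding, sketched below; it is not necessarily the encoding this paper adopts.

    # Delta encoding: emit a +1/-1 spike when the signal moves more than
    # `threshold` away from the last emitted level, and 0 otherwise.
    def delta_encode(series: list[float], threshold: float = 0.5) -> list[int]:
        spikes, level = [], series[0]
        for x in series[1:]:
            if x - level >= threshold:
                spikes.append(1); level = x
            elif level - x >= threshold:
                spikes.append(-1); level = x
            else:
                spikes.append(0)
        return spikes   # delta_encode([0, 0.2, 1.0, 1.1, 0.3]) -> [0, 1, 0, -1]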
Author:
Li, Tianlong, Dou, Shihan, Liu, Wenhao, Wu, Muling, Lv, Changze, Zheng, Rui, Zheng, Xiaoqing, Huang, Xuanjing
The recent surge in jailbreaking methods has revealed the vulnerability of Large Language Models (LLMs) to malicious inputs. While earlier research has primarily concentrated on increasing the success rates of jailbreaking attacks, the underlying mechanisms…
External link:
http://arxiv.org/abs/2401.06824