Showing 1 - 10 of 231 for search: '"Yang, Jiaxi"'
Author:
Zhang, Lei, Li, Yunshui, Li, Jiaming, Xia, Xiaobo, Yang, Jiaxi, Luo, Run, Wang, Minzheng, Chen, Longze, Liu, Junhao, Yang, Min
Some recently developed code large language models (Code LLMs) have been pre-trained on repository-level code data (Repo-Code LLMs), enabling these models to recognize repository structures and utilize cross-file information for code completion. However…
External link:
http://arxiv.org/abs/2406.18294
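To illustrate what repository-level context means in practice, here is a minimal Python sketch of cross-file prompting, assuming a plain directory of .py files; it shows only the generic pattern, not the method studied in the paper, and `build_repo_prompt` with its naive character-budget truncation is hypothetical.

```python
from pathlib import Path

def build_repo_prompt(repo_root: str, target_file: str, budget_chars: int = 8000) -> str:
    """Prepend path-labelled sibling files so a Code LLM can resolve cross-file symbols."""
    parts = []
    for path in sorted(Path(repo_root).rglob("*.py")):
        if str(path) == target_file:
            continue  # the file being completed goes last
        parts.append(f"# file: {path}\n{path.read_text(errors='ignore')}")
    context = "\n\n".join(parts)[:budget_chars]  # naive truncation to fit the context window
    target = Path(target_file).read_text(errors="ignore")
    return f"{context}\n\n# complete this file: {target_file}\n{target}"
```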
Author:
Li, Yunshui, Hui, Binyuan, Xia, Xiaobo, Yang, Jiaxi, Yang, Min, Zhang, Lei, Si, Shuzheng, Chen, Ling-Hao, Liu, Junhao, Liu, Tongliang, Huang, Fei, Li, Yongbin
Contemporary practices in instruction tuning often hinge on enlarging data scaling without a clear strategy for ensuring data quality, inadvertently introducing noise that may compromise model performance. To address this challenge, we introduce…
External link:
http://arxiv.org/abs/2312.10302
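The abstract contrasts data scaling with data quality. Below is a minimal sketch of quality-first selection over (instruction, response) pairs; the `quality_score` heuristic is a hypothetical stand-in for whatever scorer a method defines, not the selection recipe this paper introduces.

```python
def quality_score(pair) -> float:
    """Hypothetical heuristic: prefer substantive, non-empty responses."""
    instruction, response = pair
    return float(min(len(response.split()), 200)) if response.strip() else 0.0

def select_instruction_data(pairs, keep_ratio=0.2):
    """Keep only the highest-scoring fraction of (instruction, response) pairs."""
    scored = sorted(pairs, key=quality_score, reverse=True)
    return scored[: max(1, int(len(scored) * keep_ratio))]
```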
Author:
Zhang, Lei, Li, Yunshui, Liu, Ziqiang, Yang, Jiaxi, Liu, Junhao, Chen, Longze, Luo, Run, Yang, Min
With the advancement of large language models (LLMs) and the expansion of their context windows, existing long-context benchmarks fall short in effectively evaluating the models' comprehension and reasoning abilities in extended texts. Moreover, conventional…
External link:
http://arxiv.org/abs/2312.09542
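One common probe for long-context comprehension (not the benchmark this paper proposes) is the needle-in-a-haystack test: bury a single fact deep in filler text and ask the model to retrieve it. A minimal sketch:

```python
def make_probe(needle: str, filler_sentence: str, depth: int, total: int) -> str:
    """Hide one fact at a chosen depth inside a long run of filler sentences."""
    sentences = [filler_sentence] * total
    sentences.insert(depth, needle)
    return " ".join(sentences)

context = make_probe(
    needle="The vault code is 4821.",
    filler_sentence="The sky was grey over the harbour.",
    depth=900, total=1000,
)
prompt = f"{context}\n\nQuestion: What is the vault code?"  # a capable model answers 4821
```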
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten. The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature. This…
External link:
http://arxiv.org/abs/2310.19218
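For intuition, one naive baseline for selective forgetting in FL (not this paper's approach) is to rebuild the global model from stored client updates while excluding the client to be forgotten; the sketch below assumes the server retains each client's last update.

```python
import numpy as np

def aggregate_without(client_updates: dict, forget_id: str) -> np.ndarray:
    """FedAvg-style mean over all clients except the one exercising its right to be forgotten."""
    kept = [u for cid, u in client_updates.items() if cid != forget_id]
    return np.mean(kept, axis=0)

updates = {"alice": np.array([1.0, 2.0]), "bob": np.array([3.0, 4.0]),
           "carol": np.array([5.0, 6.0])}
new_global = aggregate_without(updates, forget_id="bob")  # -> [3., 4.]
```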
TranDRL: A Transformer-Driven Deep Reinforcement Learning Enabled Prescriptive Maintenance Framework
Industrial systems demand reliable predictive maintenance strategies to enhance operational efficiency and reduce downtime. This paper introduces an integrated framework that leverages the capabilities of Transformer-based neural networks and…
External link:
http://arxiv.org/abs/2309.16935
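A minimal PyTorch sketch of the pattern named in the title, assuming a window of multivariate sensor readings: a Transformer encoder summarizes the sequence and a small head scores maintenance actions for an RL policy. The sizes and action set are hypothetical, not the TranDRL configuration.

```python
import torch
import torch.nn as nn

class MaintenancePolicy(nn.Module):
    def __init__(self, n_sensors=8, d_model=32, n_actions=3):
        super().__init__()
        self.embed = nn.Linear(n_sensors, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_actions)  # e.g. {run, inspect, repair}

    def forward(self, readings):                   # readings: (batch, time, n_sensors)
        h = self.encoder(self.embed(readings))
        return self.head(h[:, -1])                 # action scores from the final step

scores = MaintenancePolicy()(torch.randn(1, 50, 8))  # one 50-step sensor window
```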
Due to drawbacks of Federated Learning (FL) such as the vulnerability of a single central server, centralized federated learning is shifting to decentralized federated learning, a paradigm that takes advantage of blockchain. A key enabler for a…
External link:
http://arxiv.org/abs/2309.15348
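The tamper-evidence idea behind pairing blockchain with decentralized FL can be sketched with a toy hash chain: each peer appends the hash of its model update, so later modification of any recorded update breaks the chain. This illustrates the general mechanism only, not any specific protocol.

```python
import hashlib, json

def append_block(chain: list, client_id: str, update_bytes: bytes) -> None:
    """Record a hash of a peer's model update, linked to the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"client": client_id,
              "update": hashlib.sha256(update_bytes).hexdigest(),
              "prev": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

chain = []
append_block(chain, "peer-1", b"serialized model delta 1")
append_block(chain, "peer-2", b"serialized model delta 2")
```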
Auction-based Federated Learning (AFL) enables open collaboration among self-interested data consumers and data owners. Existing AFL approaches commonly assume a sellers' market, in which the service clients as sellers are treated as…
External link:
http://arxiv.org/abs/2309.05063
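A reverse auction is the simplest AFL setting to sketch: data owners ask a price for joining a training round and the data consumer picks the cheapest bids within a budget. The greedy `select_winners` below is a hypothetical baseline, not the mechanism studied in the paper.

```python
def select_winners(bids: dict, budget: float) -> list:
    """bids maps owner id -> asking price; returns the winning owner ids."""
    winners, spent = [], 0.0
    for owner, price in sorted(bids.items(), key=lambda kv: kv[1]):
        if spent + price > budget:
            break  # cheapest-first greedy fill of the budget
        winners.append(owner)
        spent += price
    return winners

print(select_winners({"a": 3.0, "b": 1.0, "c": 2.5}, budget=4.0))  # ['b', 'c']
```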
Author:
Yang, Jiaxi, Hui, Binyuan, Yang, Min, Wang, Bailin, Li, Bowen, Li, Binhua, Huang, Fei, Li, Yongbin
Despite the advancements in in-context learning (ICL) for large language models (LLMs), current research centers on specific prompt engineering, such as demonstration selection, with the expectation that a single iteration of demonstration processing…
External link:
http://arxiv.org/abs/2305.13016
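The single-pass demonstration selection the abstract alludes to can be sketched as ranking a pool of examples by similarity to the query; the bag-of-words scorer here is a hypothetical stand-in, and the paper's point is precisely that one such pass may not suffice.

```python
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Crude lexical overlap score between two strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((ca & cb).values()) / max(1, min(sum(ca.values()), sum(cb.values())))

def select_demos(pool: list, query: str, k: int = 3) -> list:
    """Pick the k demonstrations most similar to the query (single pass)."""
    return sorted(pool, key=lambda d: similarity(d["input"], query), reverse=True)[:k]

pool = [{"input": "add two numbers", "output": "a + b"},
        {"input": "reverse a string", "output": "s[::-1]"}]
demos = select_demos(pool, query="add a pair of numbers", k=1)
```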
In recent years, there has been a significant increase in attention towards designing incentive mechanisms for federated learning (FL). Numerous existing studies attempt to design solutions using various approaches (e.g., game theory, reinforcement learning…
External link:
http://arxiv.org/abs/2305.04081
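As a reference point for what an incentive mechanism computes, here is the simplest scheme in this design space, a contribution-proportional reward split; the game-theoretic and RL-based designs this literature covers are far more involved.

```python
def proportional_rewards(contributions: dict, total_reward: float) -> dict:
    """Split a fixed reward pool in proportion to each client's measured contribution."""
    total = sum(contributions.values())
    if total == 0:
        return {cid: 0.0 for cid in contributions}
    return {cid: total_reward * c / total for cid, c in contributions.items()}

print(proportional_rewards({"c1": 2.0, "c2": 6.0}, total_reward=100.0))
# {'c1': 25.0, 'c2': 75.0}
```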
Author:
Li, Jinyang, Hui, Binyuan, Qu, Ge, Yang, Jiaxi, Li, Binhua, Li, Bowen, Wang, Bailin, Qin, Bowen, Cao, Rongyu, Geng, Ruiying, Huo, Nan, Zhou, Xuanhe, Ma, Chenhao, Li, Guoliang, Chang, Kevin C. C., Huang, Fei, Cheng, Reynold, Li, Yongbin
Text-to-SQL parsing, which aims at converting natural language instructions into executable SQL queries, has gained increasing attention in recent years. In particular, Codex and ChatGPT have shown impressive results on this task. However, most of the prevalent…
External link:
http://arxiv.org/abs/2305.03111
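The general recipe behind LLM-based text-to-SQL is easy to sketch, assuming nothing about this paper's setup: serialize the schema as DDL, append the question, and let the model complete a SQL query. `text_to_sql_prompt` is a hypothetical helper.

```python
def text_to_sql_prompt(schema: dict, question: str) -> str:
    """Serialize table definitions, then pose the question for SQL completion."""
    tables = "\n".join(
        f"CREATE TABLE {name} ({', '.join(cols)});" for name, cols in schema.items()
    )
    return (f"{tables}\n\n-- Translate the question into one SQL query.\n"
            f"-- Question: {question}\nSELECT")

prompt = text_to_sql_prompt(
    {"singers": ["id INTEGER", "name TEXT", "country TEXT"]},
    "How many singers are from France?",
)
# expected completion: " COUNT(*) FROM singers WHERE country = 'France';"
```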