Showing 1 - 10 of 96 results for search: '"Jiang Fengqing"'
Instruction tuning has been widely adopted to ensure large language models (LLMs) follow user instructions effectively. The resulting instruction-following capabilities of LLMs heavily rely on the instruction datasets used for tuning. Recently, synthetic…
External link: http://arxiv.org/abs/2411.07133
Author: Li, Yuetai; Xu, Zhangchen; Jiang, Fengqing; Niu, Luyao; Sahabandu, Dinuka; Ramasubramanian, Bhaskar; Poovendran, Radha
The remarkable performance of large language models (LLMs) in generation tasks has enabled practitioners to leverage publicly available models to power custom applications, such as chatbots and virtual assistants. However, the data used to train or fine-tune…
External link: http://arxiv.org/abs/2406.12257
Large language models (LLMs) are expected to follow instructions from users and engage in conversations. Techniques to enhance LLMs' instruction-following capabilities typically fine-tune them using data structured according to a predefined chat template…
External link: http://arxiv.org/abs/2406.12935
Author: Xu, Zhangchen; Jiang, Fengqing; Niu, Luyao; Deng, Yuntian; Poovendran, Radha; Choi, Yejin; Lin, Bill Yuchen
High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs…
External link: http://arxiv.org/abs/2406.08464
In Federated Learning (FL), a set of clients collaboratively train a machine learning model (called global model) without sharing their local training data. The local training data of clients is typically non-i.i.d. and heterogeneous, resulting in varying…
External link: http://arxiv.org/abs/2405.20975
Author: Jiang, Fengqing; Xu, Zhangchen; Niu, Luyao; Xiang, Zhen; Ramasubramanian, Bhaskar; Li, Bo; Poovendran, Radha
Safety is critical to the usage of large language models (LLMs). Multiple techniques such as data filtering and supervised fine-tuning have been developed to strengthen LLM safety. However, currently known techniques presume that corpora used for safety…
External link: http://arxiv.org/abs/2402.11753
Author: Xu, Zhangchen; Jiang, Fengqing; Niu, Luyao; Jia, Jinyuan; Lin, Bill Yuchen; Poovendran, Radha
As large language models (LLMs) become increasingly integrated into real-world applications such as code generation and chatbot assistance, extensive efforts have been made to align LLM behavior with human values, including safety. Jailbreak attacks…
External link: http://arxiv.org/abs/2402.08983
Author: Xiang, Zhen; Jiang, Fengqing; Xiong, Zidi; Ramasubramanian, Bhaskar; Poovendran, Radha; Li, Bo
Large language models (LLMs) are shown to benefit from chain-of-thought (COT) prompting, particularly when tackling tasks that require systematic reasoning processes. On the other hand, COT prompting also poses new vulnerabilities in the form of backdoor…
External link: http://arxiv.org/abs/2401.12242
Federated learning (FL) enables multiple participants to train a global machine learning model without sharing their private training data. Peer-to-peer (P2P) FL advances existing centralized FL paradigms by eliminating the server that aggregates local…
External link: http://arxiv.org/abs/2401.05562
Author: Rajabi, Arezoo; Asokraj, Surudhi; Jiang, Fengqing; Niu, Luyao; Ramasubramanian, Bhaskar; Ritcey, Jim; Poovendran, Radha
Machine learning models that use deep neural networks (DNNs) are vulnerable to backdoor attacks. An adversary carrying out a backdoor attack embeds a predefined perturbation called a trigger into a small subset of input samples and trains the DNN such…
External link: http://arxiv.org/abs/2308.15673