Showing 1 - 10 of 7,999 results for search: '"Ma,Ming"'
Author:
Du, Yanrui; Zhao, Sendong; Cao, Jiawei; Ma, Ming; Zhao, Danyang; Fan, Fenglei; Liu, Ting; Qin, Bing
Instruction Fine-Tuning (IFT) has become an essential method for adapting base Large Language Models (LLMs) into variants for professional and private use. However, researchers have raised concerns over a significant decrease in LLMs' security follow…
External link:
http://arxiv.org/abs/2410.04524
Accurate channel estimation in orthogonal time frequency space (OTFS) systems with massive multiple-input multiple-output (MIMO) configurations is challenging due to high-dimensional sparse representation (SR). Existing methods often face performance…
External link:
http://arxiv.org/abs/2408.12239
Interfacial hydration structures are crucial in wide-ranging applications, including batteries, colloids, and lubrication. Multivalent ions like Mg2+ and La3+ play irreplaceable roles in these applications, which is hypothesized to stem from their unique int…
External link:
http://arxiv.org/abs/2406.18827
Large Language Models (LLMs) have demonstrated remarkable abilities, one of the most important being In-Context Learning (ICL). With ICL, LLMs can derive the underlying rule from a few demonstrations and provide answers that comply with the rule. Pre…
External link:
http://arxiv.org/abs/2406.16007
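The snippet above refers to In-Context Learning, where a model infers a rule from a few demonstrations. Below is a minimal, hypothetical Python sketch of the few-shot prompting pattern involved; the antonym task, the demonstrations, and the send_to_llm placeholder are invented for illustration and are not taken from the listed paper.

    # Minimal sketch of few-shot In-Context Learning (ICL):
    # the model is shown a few input -> output demonstrations and is
    # expected to infer the underlying rule for a new query.
    # The demonstrations and send_to_llm() are hypothetical placeholders.

    demonstrations = [
        ("cold", "hot"),      # implicit rule: map a word to its antonym
        ("small", "large"),
        ("dark", "light"),
    ]

    def build_icl_prompt(query: str) -> str:
        """Concatenate the demonstrations followed by the new query."""
        lines = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
        lines.append(f"Input: {query}\nOutput:")
        return "\n\n".join(lines)

    def send_to_llm(prompt: str) -> str:
        # Placeholder for an actual LLM call (API or local model).
        raise NotImplementedError

    if __name__ == "__main__":
        # The model receiving this prompt is expected to continue with "late".
        print(build_icl_prompt("early"))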
Author:
Du, Yanrui; Zhao, Sendong; Zhao, Danyang; Ma, Ming; Chen, Yuhan; Huo, Liangyu; Yang, Qing; Xu, Dongliang; Qin, Bing
Large Language Models (LLMs) are increasingly deployed in various applications. As their usage grows, concerns regarding their safety are rising, especially about maintaining harmless responses when faced with malicious instructions. Many defense strate…
External link:
http://arxiv.org/abs/2405.14488
Extensive work has been devoted to improving the safety mechanisms of Large Language Models (LLMs). However, LLMs still tend to generate harmful responses when faced with malicious instructions, a phenomenon referred to as a "Jailbreak Attack". In our r…
External link:
http://arxiv.org/abs/2312.04127
Uplift modeling has shown very promising results in online marketing. However, most existing works are prone to robustness issues in some practical applications. In this paper, we first present a possible explanation for the above phenomenon.
External link:
http://arxiv.org/abs/2310.04693
Extensive studies have been devoted to privatizing general-domain Large Language Models (LLMs) as Domain-Specific LLMs by feeding them domain-specific data. However, these privatization efforts have often ignored a critical aspect: Dual Logic Ability, which i…
External link:
http://arxiv.org/abs/2309.04198