Showing 1 - 9 of 9 for search: '"Qi, Yuang"'
Recent research in provably secure neural linguistic steganography has overlooked a crucial aspect: the sender must detokenize stegotexts to avoid raising suspicion from the eavesdropper. The segmentation ambiguity problem, which arises when using la…
External link:
http://arxiv.org/abs/2403.17524
The rapid development of large language models (LLMs) has yielded impressive success in various downstream tasks. However, the vast potential and remarkable capabilities of LLMs also raise new security and privacy concerns if they are exploited for n…
External link:
http://arxiv.org/abs/2312.09669
Author:
Tong, Meng, Chen, Kejiang, Zhang, Jie, Qi, Yuang, Zhang, Weiming, Yu, Nenghai, Zhang, Tianwei, Zhang, Zhikun
Large language models (LLMs), like ChatGPT, have greatly simplified text generation tasks. However, they have also raised concerns about privacy risks such as data leakage and unauthorized data collection. Existing solutions for privacy-preserving in…
External link:
http://arxiv.org/abs/2310.12214
Author:
Yu, Xiao, Qi, Yuang, Chen, Kejiang, Chen, Guoqiang, Yang, Xi, Zhu, Pengyuan, Shang, Xiuwei, Zhang, Weiming, Yu, Nenghai
Large language models (LLMs) have the potential to generate texts that pose risks of misuse, such as plagiarism, planting fake reviews on e-commerce platforms, or creating inflammatory false tweets. Consequently, detecting whether a text is generated…
External link:
http://arxiv.org/abs/2305.12519
Author:
Yang, Xi, Chen, Kejiang, Zhang, Weiming, Liu, Chang, Qi, Yuang, Zhang, Jie, Fang, Han, Yu, Nenghai
LLMs now exhibit human-like skills in various fields, leading to worries about misuse. Thus, detecting generated text is crucial. However, passive detection methods are stuck in domain specificity and limited adversarial robustness. To achieve reliab…
External link:
http://arxiv.org/abs/2305.08883
Academic article
This result cannot be displayed to users who are not logged in. Login is required to view this result.
Academic article
This result cannot be displayed to users who are not logged in. Login is required to view this result.
Author:
Yu, Xiao, Qi, Yuang, Chen, Kejiang, Chen, Guoqiang, Yang, Xi, Zhu, Pengyuan, Zhang, Weiming, Yu, Nenghai
Large Language Models (LLMs) can generate texts that carry the risk of various misuses, including plagiarism, planting fake reviews on e-commerce platforms, or creating fake social media postings that can sway election results. Detecting whether a te…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::02a3758097c3d5d683f62453bfb8dd7a
Published in:
Zhongguo yi liao qi xie za zhi = Chinese journal of medical instrumentation. 32(6)
The digital music editor software "Cool Edit Pro 2.0" is used to design a virtual hearing testing system. This system has the following advantages. First, its signal frequency can be set at will. Second, its dynamic range of signal intensity can reach up…