Showing 1 - 10 of 266 for search: '"Xiang Chong"'
Published in:
Open Medicine, Vol 18, Iss 1, Pp 93-112 (2023)
Intracranial aneurysm (IA) is a type of cerebrovascular disease that mainly occurs in the circle of Willis. Abnormalities in RNA methylation at the N6-methyladenosine (m6A) site have been associated with numerous types of human diseases. WTAP recruit…
External link:
https://doaj.org/article/18c18b49c60146829602974d04b2f424
Achieving a balance between accuracy and efficiency is a critical challenge in facial landmark detection (FLD). This paper introduces the Parallel Optimal Position Search (POPoS), a high-precision encoding-decoding framework designed to address the f…
External link:
http://arxiv.org/abs/2410.09583
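The abstract is cut off before the method details, but the accuracy/efficiency tension it names is a standard one in heatmap-based landmark detection: recovering a precise coordinate from a low-resolution heatmap. As a generic illustration only (not the POPoS algorithm itself), here is a minimal sub-pixel decoder based on the heatmap-weighted centroid, a common soft-argmax-style compromise that is cheaper than upsampling plus dense search and more precise than an integer argmax:

```python
import numpy as np

def decode_landmark(heatmap: np.ndarray) -> tuple[float, float]:
    """Sub-pixel landmark decoding via the heatmap-weighted centroid
    (soft-argmax style): cheaper than upsampling + hard argmax,
    more precise than an integer argmax at low heatmap resolution."""
    w = np.clip(heatmap, 0, None)
    w = w / w.sum()
    ys, xs = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
    return float((w * xs).sum()), float((w * ys).sum())

# Toy usage: a Gaussian bump centered at sub-pixel position (12.3, 7.8)
ys, xs = np.mgrid[0:32, 0:32]
hm = np.exp(-((xs - 12.3) ** 2 + (ys - 7.8) ** 2) / 4.0)
print(decode_landmark(hm))  # ≈ (12.3, 7.8), recovered from a 32x32 grid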
Author:
Wu, Tong; Zhang, Shujian; Song, Kaiqiang; Xu, Silei; Zhao, Sanqiang; Agrawal, Ravi; Indurthi, Sathish Reddy; Xiang, Chong; Mittal, Prateek; Zhou, Wenxuan
Large Language Models (LLMs) are susceptible to security and safety threats, such as prompt injection, prompt extraction, and harmful requests. One major cause of these vulnerabilities is the lack of an instruction hierarchy. Modern LLM architectures…
External link:
http://arxiv.org/abs/2410.09102
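The truncated abstract attributes these vulnerabilities to the lack of an instruction hierarchy. One way to encode such a hierarchy, sketched below under the assumption of a PyTorch-style model (the role names, vocabulary size, and dimensions are illustrative, not necessarily the paper's exact design), is to add a learned per-role segment embedding to every token embedding, so the model can distinguish privileged instructions from untrusted data:

```python
import torch
import torch.nn as nn

# Roles ordered by privilege: system > user > data (retrieved/tool output)
ROLES = {"system": 0, "user": 1, "data": 2}

class HierarchyAwareEmbedding(nn.Module):
    """Token embedding plus a learned per-role segment embedding, giving
    the model an explicit signal about each token's privilege level."""
    def __init__(self, vocab_size: int, d_model: int, n_roles: int = len(ROLES)):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.seg = nn.Embedding(n_roles, d_model)

    def forward(self, token_ids: torch.Tensor, role_ids: torch.Tensor) -> torch.Tensor:
        return self.tok(token_ids) + self.seg(role_ids)

emb = HierarchyAwareEmbedding(vocab_size=32000, d_model=64)
tokens = torch.randint(0, 32000, (1, 6))
roles = torch.tensor([[0, 0, 1, 1, 2, 2]])  # system, system, user, user, data, data
print(emb(tokens, roles).shape)  # torch.Size([1, 6, 64])
```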
Retrieval-augmented generation (RAG) has been shown to be vulnerable to retrieval corruption attacks: an attacker can inject malicious passages into retrieval results to induce inaccurate responses. In this paper, we propose RobustRAG as the first defense…
External link:
http://arxiv.org/abs/2405.15556
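The entry previews RobustRAG as a defense against retrieval corruption. A minimal sketch of the isolate-then-aggregate idea, with `llm_answer` as a hypothetical stand-in for one LLM call grounded in a single passage: answer from each retrieved passage in isolation, then keep only an answer that a quorum of passages agrees on, so an attacker controlling k passages contributes at most k votes:

```python
from collections import Counter

def llm_answer(query: str, passage: str) -> str:
    """Hypothetical stand-in for one LLM call grounded in a single passage."""
    raise NotImplementedError

def robust_rag_vote(query: str, passages: list[str], min_votes: int) -> str | None:
    """Isolate-then-aggregate: each passage yields an answer independently,
    so k corrupted passages can contribute at most k votes to any answer."""
    votes = Counter(llm_answer(query, p) for p in passages)
    answer, count = votes.most_common(1)[0]
    return answer if count >= min_votes else None  # abstain when no consensus
```

Setting min_votes above the number of passages an attacker can plausibly corrupt is what makes the aggregation robust, at the cost of abstaining when the clean passages disagree.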
Current research on defending against adversarial examples focuses primarily on achieving robustness against a single attack type such as $\ell_2$ or $\ell_{\infty}$-bounded attacks. However, the space of possible perturbations is much larger than co…
External link:
http://arxiv.org/abs/2405.01349
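To make the union-of-threat-models setting concrete, here is a hedged single-step sketch (assuming image tensors in [0, 1] with shape (N, C, H, W) and a PyTorch classifier; real training would use multi-step PGD): craft one perturbation per threat model and train on whichever is worse for the current model, so robustness is optimized against the union rather than a single norm ball:

```python
import torch
import torch.nn.functional as F

def worst_case_batch(model, x, y, eps_inf=8 / 255, eps_2=1.0):
    """One-step attack per threat model; keep whichever hurts more.
    A minimal sketch of training against the *union* of perturbation sets."""
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # L-inf: sign step; L2: per-sample normalized gradient step
    x_inf = (x + eps_inf * grad.sign()).clamp(0, 1)
    g2 = grad / grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
    x_2 = (x + eps_2 * g2).clamp(0, 1)
    with torch.no_grad():
        l_inf = F.cross_entropy(model(x_inf), y, reduction="none")
        l_2 = F.cross_entropy(model(x_2), y, reduction="none")
    pick = (l_inf >= l_2).view(-1, 1, 1, 1)
    return torch.where(pick, x_inf, x_2).detach()
```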
State-of-the-art defenses against adversarial patch attacks can now achieve strong certifiable robustness with a marginal drop in model utility. However, this impressive performance typically comes at the cost of 10-100x more inference-time computati…
External link:
http://arxiv.org/abs/2310.13076
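The 10-100x overhead has a simple source in masking-based certified defenses: every candidate mask costs one extra forward pass. A back-of-the-envelope sketch, assuming a PatchCleanser-style coverage rule (mask size at least patch size + stride - 1, so every possible patch location is fully covered by some mask); the specific numbers are illustrative:

```python
def num_masks(img: int = 224, patch: int = 32, stride: int = 16) -> int:
    """Number of masks needed so that every possible patch location is
    fully covered by at least one mask (first-round masking)."""
    mask = patch + stride - 1                   # mask size guaranteeing coverage
    per_axis = -(-(img - mask) // stride) + 1   # ceiling division
    return per_axis ** 2

# Each mask costs one forward pass in the first round alone, which is
# where the order-of-magnitude inference overhead comes from.
print(num_masks())  # 169 masked forward passes vs. 1 for an undefended model
```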
The bulk of existing research in defending against adversarial examples focuses on defending against a single (typically bounded Lp-norm) attack, but for a practical setting, machine learning (ML) models should be robust to a wide variety of attacks.
External link:
http://arxiv.org/abs/2302.10980
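A natural evaluation metric for this multi-attack setting is union robust accuracy: an input counts as robust only if the model withstands every attack in the suite. A minimal sketch, assuming `model` maps an input to a predicted label and each `atk` maps (model, x, y) to an adversarial example (both interfaces are hypothetical):

```python
def union_robust_accuracy(model, attacks, dataset) -> float:
    """Robustness against a *suite* of attacks: an input counts as robust
    only if the model classifies every attacked version correctly."""
    robust = 0
    for x, y in dataset:
        if all(model(atk(model, x, y)) == y for atk in attacks):
            robust += 1
    return robust / len(dataset)
```

Because the attacker picks the best attack per input, union robust accuracy is at most the minimum of the per-attack robust accuracies, which is what makes the multi-attack setting strictly harder.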
Published in:
In Applied Geochemistry, November 2024, 175
Object detectors, which are widely deployed in security-critical systems such as autonomous vehicles, have been found vulnerable to patch hiding attacks. An attacker can use a single physically realizable adversarial patch to make the object detector…
External link:
http://arxiv.org/abs/2202.01811
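As a rough illustration of how masking can counter a patch hiding attack (a hedged sketch in the spirit of masking-based defenses, not the paper's certified algorithm; `detect` is a hypothetical base detector): if one of the masked strips covers the adversarial patch, objects the patch suppressed can reappear in that view, and the union of detections across views recovers them:

```python
import numpy as np

def detect(image: np.ndarray) -> list[tuple[float, float, float, float]]:
    """Hypothetical base object detector returning bounding boxes."""
    raise NotImplementedError

def masked_union_detect(image: np.ndarray, k: int = 4) -> list[tuple]:
    """Run the detector on the full image and on strip-masked variants;
    a strip that covers the patch neutralizes its hiding effect."""
    h, w = image.shape[:2]
    views = [image]
    for i in range(k):
        for axis, size in ((0, h), (1, w)):
            v = image.copy()
            lo, hi = i * size // k, (i + 1) * size // k
            if axis == 0:
                v[lo:hi, :] = 0   # mask one horizontal strip
            else:
                v[:, lo:hi] = 0   # mask one vertical strip
            views.append(v)
    # Union of candidate boxes; in practice these would be pruned/merged (NMS)
    return [b for v in views for b in detect(v)]
```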
Published in:
In Heliyon, 30 September 2024, 10(18)