Showing 1 - 10 of 619 results for the search: '"Zhao, Zhengyu"'
Code Language Models (CLMs) have achieved tremendous progress in source code understanding and generation, leading in recent years to a significant increase in research interest in applying CLMs to real-world software engineering tasks. However …
External link:
http://arxiv.org/abs/2411.07597
Machine learning (ML) has demonstrated significant advancements in Android malware detection (AMD); however, the resilience of ML against realistic evasion attacks remains a major obstacle for AMD. One of the primary factors contributing to this challenge …
External link:
http://arxiv.org/abs/2408.16025
Despite prior safety alignment efforts, mainstream LLMs can still generate harmful and unethical content when subjected to jailbreaking attacks. Existing jailbreaking methods fall into two main categories: template-based and optimization-based methods. …
External link:
http://arxiv.org/abs/2408.11313
Deep generative models have demonstrated impressive performance in various computer vision applications, including image synthesis, video generation, and medical analysis. Despite their significant advancements, these models may be used for malicious …
External link:
http://arxiv.org/abs/2407.10575
Author:
Yang, Yulong; Yang, Xinshan; Li, Shuaidong; Lin, Chenhao; Zhao, Zhengyu; Shen, Chao; Zhang, Tianwei
The rapid progress in the reasoning capability of Multi-modal Large Language Models (MLLMs) has triggered the development of autonomous agent systems on mobile devices. MLLM-based mobile agent systems consist of perception, reasoning, memory, and …
External link:
http://arxiv.org/abs/2407.09295
Recent research in adversarial machine learning has focused on visual perception in Autonomous Driving (AD) and has shown that printed adversarial patches can attack object detectors (a minimal patch-optimization sketch follows this entry). However, it is important to note that AD visual perception encompasses …
External link:
http://arxiv.org/abs/2406.05810
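For orientation, the following is a minimal sketch of the patch-style attack the entry above alludes to: a patch is optimized by gradient descent so that a detector's confidence drops wherever the patch appears. The ToyDetector, the random images, and all hyperparameters are placeholders chosen only so the snippet runs end to end; they are assumptions, not the setup of the cited paper.

import torch
import torch.nn.functional as F

class ToyDetector(torch.nn.Module):
    # Stand-in for a real object detector: one confidence logit per spatial cell.
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, 16, 3, stride=8, padding=1)
        self.head = torch.nn.Conv2d(16, 1, 1)

    def forward(self, x):
        return self.head(F.relu(self.backbone(x)))  # (B, 1, H/8, W/8) logits

def apply_patch(images, patch, top, left):
    # Paste the patch at one location; a physical attack would also model
    # printing, scale, rotation, and lighting, which are omitted here.
    out = images.clone()
    ph, pw = patch.shape[-2:]
    out[:, :, top:top + ph, left:left + pw] = patch
    return out

def optimize_patch(detector, images, patch_size=32, steps=200, lr=0.05):
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # Random placement as a crude stand-in for expectation over transformations.
        top = torch.randint(0, images.shape[-2] - patch_size, (1,)).item()
        left = torch.randint(0, images.shape[-1] - patch_size, (1,)).item()
        patched = apply_patch(images, patch.clamp(0, 1), top, left)
        conf = torch.sigmoid(detector(patched))
        loss = conf.max()   # minimizing this suppresses the strongest detection response
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)

detector = ToyDetector().eval()
images = torch.rand(4, 3, 128, 128)  # placeholder frames, not real driving data
adv_patch = optimize_patch(detector, images)

In practice the toy pieces would be replaced by a pretrained detector and a physically realistic placement and printing model; the optimization loop itself keeps the same structure.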
Autonomous Driving (AD) systems critically depend on visual perception for real-time object detection and multiple object tracking (MOT) to ensure safe driving. However, high latency in these visual perception components can lead to significant safety …
External link:
http://arxiv.org/abs/2406.05800
Published in:
Data Intelligence, Vol. 1, Iss. 2, pp. 187-200 (2019)
Human-computer dialogue has recently attracted extensive attention from both academia and industry as an important branch of artificial intelligence (AI). However, there are few studies on the evaluation of large-scale Chinese human-computer …
External link:
https://doaj.org/article/b56762fbff2d407abdfea065d4934d08
Deep learning-based monocular depth estimation (MDE), extensively applied in autonomous driving, is known to be vulnerable to adversarial attacks. Previous physical attacks against MDE models rely on 2D adversarial patches, so they only affect a small …
External link:
http://arxiv.org/abs/2403.17301
Author:
Yang, Bo; Zhang, Hengwei; Wang, Jindong; Yang, Yulong; Lin, Chenhao; Shen, Chao; Zhao, Zhengyu
Transferable adversarial examples cause practical security risks since they can mislead a target model without any knowledge of its internals. A conventional recipe for maximizing transferability is to keep only the optimal adversarial example from a … (a brief sketch of this recipe follows this entry)
External link:
http://arxiv.org/abs/2402.18370
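As a pointer to what that "conventional recipe" looks like in practice, below is a minimal sketch: an iterative FGSM attack is run on a white-box surrogate model and only the iterate with the highest surrogate loss is kept, in the hope that it also fools an unseen target model. The two linear models, the random data, and the epsilon/step settings are placeholder assumptions that keep the snippet self-contained; they do not reflect the cited paper's experiments.

import torch
import torch.nn.functional as F

def best_iterate_attack(surrogate, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # I-FGSM on the surrogate; return only the iterate with the highest surrogate loss.
    x_adv = x.clone()
    best_adv, best_loss = x.clone(), -float("inf")
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y)
        if loss.item() > best_loss:          # keep only the optimal example so far
            best_loss, best_adv = loss.item(), x_adv.detach().clone()
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the L-inf ball
            x_adv = x_adv.clamp(0, 1)
    with torch.no_grad():                    # also consider the final iterate
        if F.cross_entropy(surrogate(x_adv), y).item() > best_loss:
            best_adv = x_adv.clone()
    return best_adv

# Placeholder surrogate/target models; a real transfer study would use pretrained networks.
surrogate = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
target = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = best_iterate_attack(surrogate, x, y)
transfer_fool_rate = (target(x_adv).argmax(dim=1) != y).float().mean()

Note that the "best" snapshot above is taken per batch rather than per example, which keeps the sketch short; a careful implementation would track the best iterate for each input separately.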