Showing 1 - 10 of 33 results for the search: '"Xiao, Yisong"'
Author:
Xiao, Yisong, Liu, Aishan, Cheng, QianJia, Yin, Zhenfei, Liang, Siyuan, Li, Jiapeng, Shao, Jing, Liu, Xianglong, Tao, Dacheng
Large Vision-Language Models (LVLMs) have been widely adopted in various applications; however, they exhibit significant gender biases. Existing benchmarks primarily evaluate gender bias at the demographic group level, neglecting individual fairness…
External link:
http://arxiv.org/abs/2407.00600
Author:
Zhang, Tianyuan, Wang, Lu, Li, Hainan, Xiao, Yisong, Liang, Siyuan, Liu, Aishan, Liu, Xianglong, Tao, Dacheng
Lane detection (LD) is an essential component of autonomous driving systems, providing fundamental functionalities like adaptive cruise control and automated lane centering. Existing LD benchmarks primarily focus on evaluating common cases, neglecting…
External link:
http://arxiv.org/abs/2406.00934
Author:
Liu, Aishan, Zhang, Xinwei, Xiao, Yisong, Zhou, Yuguang, Liang, Siyuan, Wang, Jiakai, Liu, Xianglong, Cao, Xiaochun, Tao, Dacheng
Pre-trained vision models (PVMs) have become a dominant component due to their exceptional performance when fine-tuned for downstream tasks. However, the presence of backdoors within PVMs poses significant threats. Unfortunately, existing studies pri…
External link:
http://arxiv.org/abs/2312.15172
Quantization has emerged as an essential technique for deploying deep neural networks (DNNs) on devices with limited resources. However, quantized models exhibit vulnerabilities when exposed to various noises in real-world applications. Despite the i…
External link:
http://arxiv.org/abs/2308.02350
Author:
Guo, Jun, Liu, Aishan, Zheng, Xingyu, Liang, Siyuan, Xiao, Yisong, Wu, Yichao, Liu, Xianglong
Despite the broad application of Machine Learning models as a Service (MLaaS), they are vulnerable to model stealing attacks. These attacks can replicate the model functionality by using the black-box query process without any prior knowledge of the…
External link:
http://arxiv.org/abs/2308.00958
Published in:
ISSTA 2023: Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis
Machine learning (ML) systems have achieved remarkable performance across a wide area of applications. However, they frequently exhibit unfair behaviors in sensitive application domains, raising severe fairness concerns. To evaluate and test fairness…
External link:
http://arxiv.org/abs/2305.11602
The transferability of adversarial examples is a crucial aspect of evaluating the robustness of deep learning systems, particularly in black-box scenarios. Although several methods have been proposed to enhance cross-model transferability, little att…
External link:
http://arxiv.org/abs/2304.05402
Adversarial attacks in the physical world can harm the robustness of detection models. Evaluating the robustness of detection models in the physical world can be challenging due to the time-consuming and labor-intensive nature of many experiments. Th…
External link:
http://arxiv.org/abs/2304.05098
Quantization has emerged as an essential technique for deploying deep neural networks (DNNs) on devices with limited resources. However, quantized models exhibit vulnerabilities when exposed to various noises in real-world applications. Despite the i…
External link:
http://arxiv.org/abs/2304.03968