Showing 1 - 10 of 729
for search: '"LIU XIANGLONG"'
Author:
Liang, Siyuan, Gong, Jiajun, Fang, Tianmeng, Liu, Aishan, Wang, Tao, Liu, Xianglong, Cao, Xiaochun, Tao, Dacheng, Chang, Ee-Chien
Website fingerprinting (WF) attacks, which covertly monitor user communications to identify the web pages they visit, pose a serious threat to user privacy. Existing WF defenses attempt to reduce the attacker's accuracy by disrupting unique traffic patterns…
External link:
http://arxiv.org/abs/2412.11471
Author:
Wang, Zining, Guo, Jinyang, Gong, Ruihao, Yong, Yang, Liu, Aishan, Huang, Yushi, Liu, Jiaheng, Liu, Xianglong
With the increased attention to model efficiency, post-training sparsity (PTS) has become more and more prevalent because of its effectiveness and efficiency. However, questions remain about the best practice of PTS algorithms and the sparsification…
External link:
http://arxiv.org/abs/2412.07268
Author:
Hu, Jin, Liu, Xianglong, Wang, Jiakai, Zhang, Junkai, Yang, Xianqi, Qin, Haotong, Ma, Yuqing, Xu, Ke
Physical adversarial examples (PAEs) are regarded as "whistle-blowers" of real-world risks in deep-learning applications. However, current PAE generation studies show limited adaptive attacking ability across diverse and varying scenes. The key challenge…
External link:
http://arxiv.org/abs/2412.08053
Author:
Zheng, Xingyu, Liu, Xianglong, Bian, Yichen, Ma, Xudong, Zhang, Yulun, Wang, Jiakai, Guo, Jinyang, Qin, Haotong
Diffusion models (DMs) have developed significantly and are widely used in various applications due to their excellent generative quality. However, the expensive computation and massive parameters of DMs hinder their practical use in resource-constrained…
External link:
http://arxiv.org/abs/2412.05926
Author:
Wang, Jiakai, Zhang, Pengfei, Tao, Renshuai, Yang, Jian, Liu, Hao, Liu, Xianglong, Wei, Yunchao, Zhao, Yao
The various post-processing methods for deep-learning-based models, such as quantization, pruning, and fine-tuning, play an increasingly important role in artificial intelligence technology, with pre-trained large models as one of the main developments…
External link:
http://arxiv.org/abs/2412.01369
Author:
Xiao, Yisong, Liu, Aishan, Zhang, Xinwei, Zhang, Tianyuan, Li, Tianlin, Liang, Siyuan, Liu, Xianglong, Liu, Yang, Tao, Dacheng
Pre-trained large deep learning models are now serving as the dominant component for downstream middleware users and have revolutionized the learning paradigm, replacing the traditional approach of training from scratch locally. To reduce development…
External link:
http://arxiv.org/abs/2412.00746
Author:
Tao, Renshuai, Wang, Haoyu, Guo, Yuzhe, Chen, Hairong, Zhang, Li, Liu, Xianglong, Wei, Yunchao, Zhao, Yao
To detect prohibited items in challenging categories, human inspectors typically rely on images from two distinct views (vertical and side). Can AI detect prohibited items from dual-view X-ray images in the same way humans do? Existing X-ray datasets…
External link:
http://arxiv.org/abs/2411.18082
Author:
Zhang, Tianyuan, Wang, Lu, Zhang, Xinwei, Zhang, Yitong, Jia, Boyi, Liang, Siyuan, Hu, Shengshan, Fu, Qiang, Liu, Aishan, Liu, Xianglong
Vision-language models (VLMs) have significantly advanced autonomous driving (AD) by enhancing reasoning capabilities. However, these models remain highly vulnerable to adversarial attacks. While existing research has primarily focused on general VLM…
External link:
http://arxiv.org/abs/2411.18275
Author:
Yang, Ge, He, Changyi, Guo, Jinyang, Wu, Jianyu, Ding, Yifu, Liu, Aishan, Qin, Haotong, Ji, Pengliang, Liu, Xianglong
Although large language models (LLMs) have demonstrated strong intelligence, their high demand for computation and storage hinders practical application. To this end, many model compression techniques have been proposed to increase the efficiency…
External link:
http://arxiv.org/abs/2410.21352
Author:
Ying, Zonghao, Liu, Aishan, Liang, Siyuan, Huang, Lei, Guo, Jinyang, Zhou, Wenbo, Liu, Xianglong, Tao, Dacheng
Multimodal Large Language Models (MLLMs) raise strong safety concerns (e.g., generating harmful outputs for users), which motivates the development of safety evaluation benchmarks. However, we observe that existing safety benchmarks for MLLMs…
External link:
http://arxiv.org/abs/2410.18927