Showing 1 - 10 of 21 for search: '"Tao, Renshuai"'
ODDN: Addressing Unpaired Data Challenges in Open-World Deepfake Detection on Online Social Networks
Despite significant advances in deepfake detection, handling varying image quality, especially due to different compressions on online social networks (OSNs), remains challenging. Current methods succeed by leveraging correlations between paired images…
External link:
http://arxiv.org/abs/2410.18687
Authors:
Xue, Yanni, Hao, Haojie, Wang, Jiakai, Sheng, Qiang, Tao, Renshuai, Liang, Yu, Feng, Pu, Liu, Xianglong
While neural machine translation (NMT) models achieve success in our daily lives, they show vulnerability to adversarial attacks. Despite being harmful, these attacks also offer benefits for interpreting and enhancing NMT models, thus drawing increasing…
External link:
http://arxiv.org/abs/2409.05021
Authors:
Tan, Chuangchuang, Tao, Renshuai, Liu, Huan, Gu, Guanghua, Wu, Baoyuan, Zhao, Yao, Wei, Yunchao
This work focuses on AIGC detection to develop universal detectors capable of identifying various types of forged images. Recent studies have found that large pre-trained models, such as CLIP, are effective for generalizable deepfake detection along with…
External link:
http://arxiv.org/abs/2408.09647
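The snippet above only hints at the approach, so here is a minimal, hypothetical sketch of the general idea it builds on: a frozen pre-trained CLIP image encoder with a small linear probe trained on its embeddings to separate real from generated images. The checkpoint name, the probe, and the helper functions are illustrative assumptions, not the paper's actual method.

# Minimal sketch (assumption, not the paper's method): frozen CLIP image
# features with a small linear probe for real-vs-fake classification.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "openai/clip-vit-base-patch32"  # illustrative checkpoint choice
clip = CLIPModel.from_pretrained(name).to(device).eval()
processor = CLIPProcessor.from_pretrained(name)

# Lightweight binary head trained on top of the frozen CLIP embedding.
probe = nn.Linear(clip.config.projection_dim, 2).to(device)

@torch.no_grad()
def clip_features(images):
    # Encode a list of PIL images into L2-normalized CLIP image embeddings.
    inputs = processor(images=images, return_tensors="pt").to(device)
    feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def predict(images):
    # Real/fake logits for a batch of PIL images; the probe would be trained
    # beforehand on embeddings of labeled real and generated images.
    return probe(clip_features(images))

The appeal of this family of detectors is that only the small probe is trained while the CLIP backbone stays fixed.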
Recently, the proliferation of increasingly realistic synthetic images generated by various generative adversarial networks has increased the risk of misuse. Consequently, there is a pressing need to develop a generalizable detector for accurately…
External link:
http://arxiv.org/abs/2403.06803
Authors:
Liu, Aishan, Guo, Jun, Wang, Jiakai, Liang, Siyuan, Tao, Renshuai, Zhou, Wenbo, Liu, Cong, Liu, Xianglong, Tao, Dacheng
Adversarial attacks are valuable for evaluating the robustness of deep learning models. Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture perturbation). However, attacks targeting texture-free X-ray images…
External link:
http://arxiv.org/abs/2302.09491
Authors:
Wang, Jiakai, Yin, Zixin, Hu, Pengfei, Liu, Aishan, Tao, Renshuai, Qin, Haotong, Liu, Xianglong, Tao, Dacheng
To operate in real-world high-stakes environments, deep learning systems have to endure noise that continuously thwarts their robustness. Data-end defense, which improves robustness through operations on input data instead of modifying models…
External link:
http://arxiv.org/abs/2204.06213
Authors:
Tao, Renshuai, Wei, Yanlu, Jiang, Xiangjian, Li, Hainan, Qin, Haotong, Wang, Jiakai, Ma, Yuqing, Zhang, Libo, Liu, Xianglong
Prohibited item detection in X-ray images plays an important role in protecting public safety, yet it often deals with color-monotonous and luster-insufficient objects, resulting in unsatisfactory performance. To date, there have been few studies…
External link:
http://arxiv.org/abs/2108.09917
Learning from multiple annotators aims to induce a high-quality classifier from training instances, each of which is associated with a set of possibly noisy labels provided by multiple annotators under the influence of their varying abilities and…
External link:
http://arxiv.org/abs/2106.15146
Few-shot learning is an interesting and challenging problem that enables machines to learn from only a few samples, as humans do. Existing studies rarely exploit auxiliary information from large amounts of unlabeled data. Self-supervised learning has emerged as…
External link:
http://arxiv.org/abs/2103.05985
Authors:
Zhang, Xiangguo, Qin, Haotong, Ding, Yifu, Gong, Ruihao, Yan, Qinghua, Tao, Renshuai, Li, Yuhang, Yu, Fengwei, Liu, Xianglong
Quantization has emerged as one of the most prevalent approaches to compress and accelerate neural networks. Recently, data-free quantization has been widely studied as a practical and promising solution. It synthesizes data for calibrating the quantized…
External link:
http://arxiv.org/abs/2103.01049
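Since the snippet above stops right at the key idea, here is a rough, hypothetical sketch of what "synthesizing data for calibration" commonly means in data-free quantization: optimizing random inputs so their per-layer BatchNorm statistics match the full-precision model's stored running statistics, then using those inputs as a calibration set. The torchvision model, iteration count, and learning rate are assumptions for illustration, not the paper's recipe.

# Rough sketch (assumptions throughout): synthesize calibration inputs by
# matching BatchNorm statistics of a full-precision model, as commonly done
# in data-free quantization pipelines.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # a pretrained model would be used in practice
for p in model.parameters():
    p.requires_grad_(False)

# Capture batch statistics at every BatchNorm layer during the forward pass.
bn_stats = []
def bn_hook(module, inputs, output):
    x = inputs[0]
    bn_stats.append((x.mean(dim=(0, 2, 3)), x.var(dim=(0, 2, 3)),
                     module.running_mean, module.running_var))

hooks = [m.register_forward_hook(bn_hook)
         for m in model.modules() if isinstance(m, nn.BatchNorm2d)]

# Start from Gaussian noise and optimize it toward the stored BN statistics.
fake_data = torch.randn(16, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([fake_data], lr=0.1)
for step in range(200):                 # iteration count is an assumption
    bn_stats.clear()
    optimizer.zero_grad()
    model(fake_data)
    loss = sum(torch.norm(mu - rm) + torch.norm(var - rv)
               for mu, var, rm, rv in bn_stats)
    loss.backward()
    optimizer.step()

for h in hooks:
    h.remove()
# fake_data can now serve as a calibration set, e.g. to collect activation
# ranges for post-training quantization of the model.

In practice, the synthesized batch would then be passed through the model once more to choose clipping ranges and scale factors for activation quantization.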