Showing 1 - 5 of 5 for the search '"Liang, Kaisheng"'
Deep neural networks exhibit vulnerability to adversarial examples that can transfer across different models. A particularly challenging problem is developing transferable targeted attacks that can mislead models into predicting specific target classes …
External link: http://arxiv.org/abs/2411.15553
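As context for the snippet above, a minimal sketch of what a targeted transfer attack generally looks like: iterative gradient steps on a white-box surrogate that push the prediction toward a chosen target class, in the hope that the perturbation also fools other models. The surrogate model, epsilon budget, step size, and step count below are illustrative assumptions, not the method of the linked paper.

```python
# Generic targeted iterative FGSM on a surrogate model -- a common baseline
# for transferable targeted attacks, NOT the method of arXiv:2411.15553.
import torch
import torch.nn.functional as F
import torchvision.models as models

def targeted_ifgsm(model, x, target, eps=8/255, alpha=2/255, steps=10):
    """Push x toward `target` class under an L-inf budget `eps`."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted attack: descend the loss toward the target class.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around the benign input.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv

surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)       # stand-in for a benign image
target = torch.tensor([207])         # arbitrary ImageNet target class
x_adv = targeted_ifgsm(surrogate, x, target)
```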
Unrestricted adversarial attacks present a serious threat to deep learning models and adversarial defense techniques. They pose severe security problems for deep learning applications because they can effectively bypass defense mechanisms. However, …
External link: http://arxiv.org/abs/2307.12499
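"Unrestricted" here means the perturbation is not confined to a small norm ball around the input. A toy illustration under that assumption: optimizing a global per-channel color transform, which can move pixels arbitrarily far while keeping the image plausible. This is a generic sketch of the unrestricted-attack idea, not the linked paper's technique; the model and hyperparameters are assumptions.

```python
# Toy unrestricted (non-norm-bounded) attack: optimize a global per-channel
# gain and bias so the image is misclassified. Illustrative only, NOT the
# method of arXiv:2307.12499.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)                 # stand-in benign image
with torch.no_grad():
    y = model(x).argmax(dim=1)                 # its current (clean) label

scale = torch.ones(1, 3, 1, 1, requires_grad=True)   # per-channel gain
shift = torch.zeros(1, 3, 1, 1, requires_grad=True)  # per-channel bias
opt = torch.optim.Adam([scale, shift], lr=0.01)

for _ in range(100):
    x_adv = (x * scale + shift).clamp(0, 1)
    # Maximize loss on the clean label => untargeted misclassification.
    loss = -F.cross_entropy(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```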
Author: Liang, Kaisheng; Xiao, Bin
Adversarial attacks can mislead deep neural networks (DNNs) by adding imperceptible perturbations to benign examples. The attack transferability enables adversarial examples to attack black-box DNNs with unknown architectures or parameters, which poses …
External link: http://arxiv.org/abs/2304.11579
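The transfer setting this snippet describes can be sketched in a few lines: craft the example on one white-box surrogate, then test it on a second model whose gradients are never queried. Both model choices and the epsilon budget below are arbitrary assumptions for illustration.

```python
# Transfer-attack evaluation sketch: one-step FGSM on a white-box surrogate,
# then check whether the example also fools an unrelated "black-box" model.
import torch
import torch.nn.functional as F
import torchvision.models as models

surrogate = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
black_box = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224)              # stand-in benign image
x.requires_grad_(True)
y = surrogate(x).argmax(dim=1).detach()     # clean label from the surrogate

# One-step FGSM on the surrogate: ascend the loss on the clean label.
loss = F.cross_entropy(surrogate(x), y)
loss.backward()
x_adv = (x + (8 / 255) * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    # The attack "transfers" if the black-box prediction also changes.
    print("surrogate fooled:", (surrogate(x_adv).argmax(1) != y).item())
    print("black box fooled:",
          (black_box(x_adv).argmax(1) != black_box(x).argmax(1)).item())
```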
The multi-volume set of LNCS books with volume numbers 15059 up to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29 – October 4, 2024. The 2387 papers …
Author: Alessandro Crimi, Spyridon Bakas
The two-volume set LNCS 11992 and 11993 constitutes the thoroughly refereed proceedings of the 5th International MICCAI Brainlesion Workshop, BrainLes 2019, the International Multimodal Brain Tumor Segmentation (BraTS) challenge, the Computational Pr…