Showing 1 - 10 of 1,464 for search: '"Said, F."'
Author:
Wang, Guohong, Ma, Hua, Gao, Yansong, Abuadbba, Alsharif, Zhang, Zhi, Kang, Wei, Al-Sarawi, Said F., Zhang, Gongxuan, Abbott, Derek
Image camouflage has been utilized to create clean-label poisoned images for implanting a backdoor into a DL model. However, a crucial limitation remains: one attack/poisoned image can only fit a single input size of the DL model, which greatly inc… (see the sketch after this record)
External link:
http://arxiv.org/abs/2309.04036
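The size dependence noted in the abstract above stems from the resizing step: a camouflaged image is crafted so that downscaling to one specific model input size reveals a hidden payload, while any other size destroys it. Below is a minimal sketch of the idea, assuming a simple nearest-neighbour downscaler (real camouflage attacks target the bilinear/bicubic resizers of common libraries and solve an optimization problem); the function names and sizes are illustrative only, not the paper's method.

```python
# Sketch: hide a payload so it appears ONLY after nearest-neighbour
# downscaling to one target size (the single-input-size limitation).
import numpy as np

def downscale_nearest(img: np.ndarray, h: int, w: int) -> np.ndarray:
    """Nearest-neighbour downscaling: keep one source pixel per output pixel."""
    H, W = img.shape[:2]
    rows = ((np.arange(h) + 0.5) * H / h).astype(int)
    cols = ((np.arange(w) + 0.5) * W / w).astype(int)
    return img[np.ix_(rows, cols)]

def camouflage(clean: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Overwrite exactly the pixels that downscaling to payload's size samples."""
    H, W = clean.shape[:2]
    h, w = payload.shape[:2]
    rows = ((np.arange(h) + 0.5) * H / h).astype(int)
    cols = ((np.arange(w) + 0.5) * W / w).astype(int)
    out = clean.copy()
    out[np.ix_(rows, cols)] = payload
    return out

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(896, 896, 3), dtype=np.uint8)
payload = rng.integers(0, 256, size=(112, 112, 3), dtype=np.uint8)
poisoned = camouflage(clean, payload)            # only 1/64 of pixels change
assert (downscale_nearest(poisoned, 112, 112) == payload).all()  # target size
# Downscaling to any other size (e.g. 224x224) misses the planted pixels,
# which is exactly why one poisoned image fits a single model input size.
```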
Author:
Ma, Hua, Li, Yinshan, Gao, Yansong, Zhang, Zhi, Abuadbba, Alsharif, Fu, Anmin, Al-Sarawi, Said F., Nepal, Surya, Abbott, Derek
Object detection is the foundation of various critical computer-vision tasks such as segmentation, object tracking, and event detection. To train an object detector with satisfactory accuracy, a large amount of data is required. However, due to the i…
External link:
http://arxiv.org/abs/2209.02339
Author:
Gao, Yansong, Yao, Jianrong, Pang, Lihui, Yang, Wei, Fu, Anmin, Al-Sarawi, Said F., Abbott, Derek
To improve the modeling resilience of silicon strong physical unclonable functions (PUFs), in particular APUFs, which yield a very large number of challenge-response pairs (CRPs), a number of composited APUF variants such as XOR-APUF, interpose-P… (see the sketch after this record)
External link:
http://arxiv.org/abs/2207.09744
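In the standard additive delay model underlying the modeling attacks discussed above, an n-stage APUF computes a 1-bit response as the sign of a linear function of a parity-transformed challenge, and a k-XOR-APUF XORs the responses of k independent APUFs. A minimal simulation sketch follows; the Gaussian weights stand in for fabrication-induced delay differences, and all names are illustrative.

```python
# Sketch: additive-delay model of an n-stage APUF and a k-XOR-APUF.
import numpy as np

def parity_features(challenges: np.ndarray) -> np.ndarray:
    """Map 0/1 challenges of shape (m, n) to (m, n+1) parity features:
    phi_i = prod_{j>=i} (1 - 2*c_j), plus a constant bias feature."""
    signs = 1 - 2 * challenges                       # 0/1 -> +1/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def apuf_response(phi: np.ndarray, w: np.ndarray) -> np.ndarray:
    return (phi @ w > 0).astype(int)                 # 1-bit response

def xor_apuf_response(phi: np.ndarray, ws: list) -> np.ndarray:
    bits = np.stack([apuf_response(phi, w) for w in ws])
    return np.bitwise_xor.reduce(bits, axis=0)       # XOR of k APUF bits

rng = np.random.default_rng(0)
n, k, m = 64, 4, 10_000                              # stages, XOR size, CRPs
C = rng.integers(0, 2, size=(m, n))                  # random challenges
phi = parity_features(C)
ws = [rng.normal(size=n + 1) for _ in range(k)]      # k independent APUFs
responses = xor_apuf_response(phi, ws)               # CRPs for a modeling study
```

The linearity of a single APUF in phi is what makes it learnable from observed CRPs; XOR-ing k instances removes that linearity, which is why composited variants are proposed, at some reliability cost.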
Author:
Ma, Hua, Li, Qun, Zheng, Yifeng, Zhang, Zhi, Liu, Xiaoning, Gao, Yansong, Al-Sarawi, Said F., Abbott, Derek
Federated Learning (FL), a distributed machine learning paradigm, has been adapted to mitigate privacy concerns for customers. Despite its appeal, there are various inference attacks that can exploit shared-plaintext model updates to embed traces o… (see the sketch after this record)
External link:
http://arxiv.org/abs/2207.09080
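The shared-plaintext model updates the abstract refers to are the per-round client updates that a FedAvg-style server aggregates; because the server sees them in the clear, they form the attack surface for inference attacks. A minimal FedAvg sketch under toy assumptions (flat weight vectors, one local gradient step on a hypothetical least-squares task) follows.

```python
# Sketch: one FedAvg training loop; the `updates` list is exactly the
# plaintext material that inference attacks can exploit.
import numpy as np

def local_update(global_w, data, lr=0.1):
    """Stand-in for local training: one gradient step on ||Xw - y||^2."""
    X, y = data
    grad = 2 * X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad

def fedavg_round(global_w, client_data):
    updates = [local_update(global_w, d) for d in client_data]
    # The server observes every entry of `updates` in plaintext.
    return np.mean(updates, axis=0)

rng = np.random.default_rng(1)
d = 5
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(d)
for _ in range(50):                                  # communication rounds
    w = fedavg_round(w, clients)
```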
Published in:
IEEE Access, vol. 12, pp. 33843-33851 (2024)
Multi-exit networks (MENs) are motivated by the desire to minimize the delay and energy consumption of the inference phase. Moreover, MENs are designed to expedite predictions for easily identifiable inputs b… (see the sketch after this record)
External link:
https://doaj.org/article/375fdbd74a284b5195f9843324df5de6
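An early-exit (multi-exit) forward pass attaches a classifier head after each backbone block and stops at the first head whose confidence clears a threshold, so easy inputs skip the remaining computation. A minimal sketch with hypothetical toy blocks and heads:

```python
# Sketch: confidence-thresholded early-exit inference for a MEN.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def men_predict(x, blocks, heads, tau=0.9):
    """Stop at the first exit whose max softmax probability >= tau."""
    h = x
    for i, (block, head) in enumerate(zip(blocks, heads)):
        h = block(h)
        p = softmax(head(h))
        if p.max() >= tau or i == len(blocks) - 1:
            return int(p.argmax()), i                # prediction, exit index

rng = np.random.default_rng(3)
Ws = [rng.normal(size=(8, 8)) for _ in range(3)]     # toy backbone blocks
Vs = [rng.normal(size=(8, 5)) for _ in range(3)]     # toy exit heads, 5 classes
blocks = [lambda h, W=W: np.tanh(h @ W) for W in Ws]
heads = [lambda h, V=V: h @ V for V in Vs]
label, exit_idx = men_predict(rng.normal(size=8), blocks, heads, tau=0.6)
```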
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World
Author:
Ma, Hua, Li, Yinshan, Gao, Yansong, Abuadbba, Alsharif, Zhang, Zhi, Fu, Anmin, Kim, Hyoungshick, Al-Sarawi, Said F., Nepal, Surya, Abbott, Derek
Deep learning models have been shown to be vulnerable to recent backdoor attacks. A backdoored model behaves normally on inputs without the attacker's secretly chosen trigger and maliciously on inputs with the trigger. To date, backdoor attacks and… (see the sketch after this record)
External link:
http://arxiv.org/abs/2201.08619
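The trigger behavior described above is typically implanted by data poisoning: stamp a small attacker-chosen patch onto a fraction of the training images and relabel them to a target class, so the trained model associates the patch with that class. Note that the paper above studies natural physical triggers for object detectors; the sketch below shows only the generic digital-patch variant for intuition, with an illustrative patch, position, rate, and target label.

```python
# Sketch: classic patch-trigger (dirty-label) training-set poisoning.
import numpy as np

def stamp_trigger(img: np.ndarray, patch: np.ndarray) -> np.ndarray:
    h, w = patch.shape[:2]
    out = img.copy()
    out[-h:, -w:] = patch                            # bottom-right trigger
    return out

def poison(images, labels, patch, target_label, rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = stamp_trigger(images[i], patch)
        labels[i] = target_label                     # relabel to target class
    return images, labels

imgs = np.zeros((100, 32, 32, 3), dtype=np.uint8)    # toy dataset
labs = np.random.default_rng(4).integers(0, 10, size=100)
patch = np.full((4, 4, 3), 255, dtype=np.uint8)      # white-square trigger
p_imgs, p_labs = poison(imgs, labs, patch, target_label=0)
```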
Author:
Li, Yinshan, Ma, Hua, Zhang, Zhi, Gao, Yansong, Abuadbba, Alsharif, Fu, Anmin, Zheng, Yifeng, Al-Sarawi, Said F., Abbott, Derek
A backdoored deep learning (DL) model behaves normally on clean inputs but misbehaves on trigger inputs as the backdoor attacker desires, posing severe consequences for DL model deployments. State-of-the-art defenses are either limited to specific b…
External link:
http://arxiv.org/abs/2111.11157
Author:
Zhu, Yifan, Peng, Huaibing, Fu, Anmin, Yang, Wei, Ma, Hua, Al-Sarawi, Said F., Abbott, Derek, Gao, Yansong
Published in:
Expert Systems With Applications, vol. 255, part B, 1 December 2024
Author:
Qiu, Huming, Ma, Hua, Zhang, Zhi, Zheng, Yifeng, Fu, Anmin, Zhou, Pan, Gao, Yansong, Abbott, Derek, Al-Sarawi, Said F.
Though deep neural network models exhibit outstanding performance in various applications, their large model size and extensive floating-point operations make deployment on mobile computing platforms a major challenge, and, in particular, on Inter… (see the sketch after this record)
External link:
http://arxiv.org/abs/2105.03822
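The floating-point burden mentioned above is commonly reduced by quantizing weights to low-bit integers. A minimal sketch of symmetric per-tensor uniform post-training quantization follows (one scale per tensor, a random tensor as stand-in); the paper itself concerns binary networks, so this generic low-bit sketch only illustrates the compression principle.

```python
# Sketch: symmetric per-tensor uniform quantization of a weight matrix.
import numpy as np

def quantize(w: np.ndarray, bits: int = 8):
    """Return int weights and one float scale; int8 storage assumes bits <= 8."""
    qmax = 2 ** (bits - 1) - 1                       # 127 for int8
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.random.default_rng(2).normal(size=(256, 128)).astype(np.float32)
q, s = quantize(w, bits=8)                           # 4x smaller than fp32
w_hat = q.astype(np.float32) * s                     # dequantized approximation
max_err = np.abs(w - w_hat).max()                    # bounded by scale / 2
```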
Author:
Wang, Guohong, Ma, Hua, Gao, Yansong, Abuadbba, Alsharif, Zhang, Zhi, Kang, Wei, Al-Sarawi, Said F., Zhang, Gongxuan, Abbott, Derek
Published in:
Knowledge-Based Systems, vol. 288, 15 March 2024