Showing 1 - 10 of 114 for search: "Abuadbba, Alsharif"
Author:
Abuadbba, Alsharif, Rhodes, Nicholas, Moore, Kristen, Sabir, Bushra, Wang, Shuo, Gao, Yansong
Deep learning solutions in critical domains like autonomous vehicles, facial recognition, and sentiment analysis require caution due to the severe consequences of errors. Research shows these models are vulnerable to adversarial attacks, such as data …
External link:
http://arxiv.org/abs/2407.01260
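For context on the evasion side of such attacks, below is a minimal sketch of the classic fast gradient sign method (FGSM, Goodfellow et al.) against a toy logistic-regression classifier. The weights, input, and epsilon are synthetic stand-ins for illustration, not anything from the paper.

    import numpy as np

    # Toy linear classifier: p(class 1 | x) = sigmoid(w . x + b).
    rng = np.random.default_rng(0)
    w = rng.normal(size=8)
    b = 0.1
    x = rng.normal(size=8)   # a clean input (synthetic)
    y = 1.0                  # its assumed true label

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # For logistic regression with cross-entropy loss, the gradient of the
    # loss w.r.t. the input is (p - y) * w; FGSM steps in its sign direction.
    p = sigmoid(w @ x + b)
    epsilon = 0.5
    x_adv = x + epsilon * np.sign((p - y) * w)

    print("clean score:      ", sigmoid(w @ x + b))
    print("adversarial score:", sigmoid(w @ x_adv + b))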
Text classifiers are vulnerable to adversarial examples -- correctly classified examples that are deliberately transformed to be misclassified while satisfying acceptability constraints. The conventional approach to finding adversarial examples is to …
External link:
http://arxiv.org/abs/2405.11904
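As an illustration of what such a search looks like, here is a minimal greedy synonym-substitution sketch against a made-up bag-of-words sentiment classifier. The classifier, the synonym table, and the edit budget (standing in for the acceptability constraints) are all illustrative assumptions, not the paper's method.

    def toy_classifier(text):
        # Hypothetical bag-of-words sentiment model: positive iff score > 0.
        weights = {"great": 2.0, "good": 1.0, "fine": -0.2,
                   "bad": -1.5, "awful": -2.0}
        return sum(weights.get(w, 0.0) for w in text.lower().split())

    SYNONYMS = {"great": ["good", "fine"], "good": ["fine"]}  # toy table

    def greedy_attack(text, max_edits=2):
        """Greedily swap words for synonyms to flip a positive prediction,
        keeping at most max_edits substitutions (acceptability constraint)."""
        words = text.split()
        edits = 0
        for i, w in enumerate(words):
            if edits >= max_edits or toy_classifier(" ".join(words)) <= 0:
                break
            candidates = SYNONYMS.get(w.lower(), [])
            # Choose the substitution that lowers the score the most here.
            best = min(candidates, default=None, key=lambda s: toy_classifier(
                " ".join(words[:i] + [s] + words[i + 1:])))
            if best is not None:
                words[i] = best
                edits += 1
        return " ".join(words)

    x = "great great movie"
    x_adv = greedy_attack(x)
    print(x, "->", toy_classifier(x))          # positive
    print(x_adv, "->", toy_classifier(x_adv))  # flipped negative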
The increasing trend of using Large Language Models (LLMs) for code generation raises the question of their capability to generate trustworthy code. While many researchers are exploring the utility of code generation for uncovering software vulnerabilities …
External link:
http://arxiv.org/abs/2404.03823
While location trajectories represent a valuable data source for analyses and location-based services, they can reveal sensitive information, such as political and religious preferences. Differentially private publication mechanisms have been proposed …
External link:
http://arxiv.org/abs/2403.07218
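A common per-point baseline in this literature is planar Laplace noise (geo-indistinguishability, Andrés et al. 2013). The sketch below applies it independently to each trajectory point; it is a generic baseline under assumed parameters, not the mechanism this paper studies.

    import numpy as np
    from scipy.special import lambertw

    def planar_laplace_noise(eps, rng):
        """Sample 2-D noise with density proportional to exp(-eps * r)."""
        theta = rng.uniform(0.0, 2.0 * np.pi)
        p = rng.uniform(0.0, 1.0)
        # Inverse CDF of the radial component, via the Lambert W function.
        r = -(lambertw((p - 1.0) / np.e, k=-1).real + 1.0) / eps
        return r * np.array([np.cos(theta), np.sin(theta)])

    def perturb_trajectory(points, eps, seed=0):
        """Naive per-point mechanism: the privacy budget then scales with
        the trajectory length, one reason utility degrades on long traces."""
        rng = np.random.default_rng(seed)
        return np.array([p + planar_laplace_noise(eps, rng) for p in points])

    traj = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5]])  # toy trajectory
    print(perturb_trajectory(traj, eps=1.0))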
Deepfakes have rapidly emerged as a profound and serious threat to society, primarily due to their ease of creation and dissemination. This situation has triggered an accelerated development of deepfake detection technologies. However, many existing …
External link:
http://arxiv.org/abs/2401.04364
Author:
Ma, Hua, Wang, Shang, Gao, Yansong, Zhang, Zhi, Qiu, Huming, Xue, Minhui, Abuadbba, Alsharif, Fu, Anmin, Nepal, Surya, Abbott, Derek
All current backdoor attacks on deep learning (DL) models fall under the category of a vertical class backdoor (VCB) -- class-dependent. In VCB attacks, any sample from a class activates the implanted backdoor when the secret trigger is present. Existing …
External link:
http://arxiv.org/abs/2310.00542
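To make the class-dependent notion concrete, here is a minimal data-poisoning sketch in the VCB spirit: a trigger patch is stamped onto a fraction of one source class's samples, which are relabeled to an attacker-chosen target class. All shapes, rates, and the trigger itself are illustrative assumptions.

    import numpy as np

    def stamp_trigger(img, size=3):
        """Stamp a small white square in the corner (the 'secret trigger')."""
        img = img.copy()
        img[-size:, -size:] = 1.0
        return img

    def poison_dataset(images, labels, source_class, target_class,
                       rate=0.1, seed=0):
        """Class-dependent poisoning sketch: trigger-stamp a fraction of
        source-class samples and relabel them to the target class."""
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = np.flatnonzero(labels == source_class)
        chosen = rng.choice(idx, size=max(1, int(rate * len(idx))),
                            replace=False)
        for i in chosen:
            images[i] = stamp_trigger(images[i])
            labels[i] = target_class
        return images, labels

    # Toy data: 100 grayscale 28x28 images spread over 10 classes.
    X = np.random.default_rng(1).random((100, 28, 28))
    y = np.repeat(np.arange(10), 10)
    Xp, yp = poison_dataset(X, y, source_class=3, target_class=7)
    print((yp != y).sum(), "samples poisoned")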
Author:
Gao, Yansong, Qiu, Huming, Zhang, Zhi, Wang, Binghui, Ma, Hua, Abuadbba, Alsharif, Xue, Minhui, Fu, Anmin, Nepal, Surya
Deep Neural Network (DNN) models are often deployed in resource-sharing clouds as Machine Learning as a Service (MLaaS) to provide inference services. To steal model architectures that are valuable intellectual property, a class of attacks has been …
External link:
http://arxiv.org/abs/2309.11894
Author:
Wang, Guohong, Ma, Hua, Gao, Yansong, Abuadbba, Alsharif, Zhang, Zhi, Kang, Wei, Al-Sarawi, Said F., Zhang, Gongxuan, Abbott, Derek
Image camouflage has been utilized to create clean-label poisoned images for implanting a backdoor into a DL model. But a crucial limitation exists: one attack/poisoned image can only fit a single input size of the DL model, which greatly increases …
External link:
http://arxiv.org/abs/2309.04036
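The camouflage trick the snippet refers to is a scaling attack: craft an image that looks benign at full resolution but turns into an attacker-chosen payload once the pipeline downscales it. Below is a minimal sketch against a simple floor-based nearest-neighbor resizer (an assumption; real attacks target standard library resizers and add visual-similarity constraints). It also shows why one crafted image is tied to one input size.

    import numpy as np

    def nn_indices(src_len, dst_len):
        """Pixel indices a floor-based nearest-neighbor downscaler samples."""
        return (np.arange(dst_len) * (src_len / dst_len)).astype(int)

    def camouflage(source, target):
        """Overwrite only the pixels that resizing will sample, so the
        crafted image looks like `source` at full resolution but becomes
        `target` after downscaling. The result is tied to one specific
        output size -- the limitation the abstract refers to."""
        out = source.copy()
        rows = nn_indices(source.shape[0], target.shape[0])
        cols = nn_indices(source.shape[1], target.shape[1])
        out[np.ix_(rows, cols)] = target
        return out

    def nn_downscale(img, shape):
        rows = nn_indices(img.shape[0], shape[0])
        cols = nn_indices(img.shape[1], shape[1])
        return img[np.ix_(rows, cols)]

    src = np.zeros((224, 224))   # benign-looking carrier image
    tgt = np.ones((32, 32))      # payload revealed after resizing
    atk = camouflage(src, tgt)
    assert np.array_equal(nn_downscale(atk, (32, 32)), tgt)
    print("fraction of pixels modified:", (atk != src).mean())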
Author:
Cho, Beomsang, Le, Binh M., Kim, Jiwon, Woo, Simon, Tariq, Shahroz, Abuadbba, Alsharif, Moore, Kristen
Published in:
32nd ACM International Conference on Information & Knowledge Management (CIKM), UK, 2023
Deepfakes have become a growing concern in recent years, prompting researchers to develop benchmark datasets and detection algorithms to tackle the issue. However, existing datasets suffer from significant drawbacks that hamper their effectiveness. …
External link:
http://arxiv.org/abs/2309.01919
Security Application Programming Interfaces (APIs) are crucial for ensuring software security. However, their misuse introduces vulnerabilities, potentially leading to severe data breaches and substantial financial loss. Complex API design, inadequate …
External link:
http://arxiv.org/abs/2306.08869
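As one canonical example of the misuse category this abstract describes, the sketch below contrasts a classic cryptographic-API misuse with a safer pattern, using Python's cryptography package; the example is generic and not drawn from the paper.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    msg = b"attack at dawn!!"  # 16 bytes, block-aligned for the ECB demo

    # Misuse: AES-ECB leaks structure -- identical plaintext blocks yield
    # identical ciphertext blocks, and there is no integrity protection.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    print(enc.update(msg + msg) + enc.finalize())  # repeated block visible

    # Safer use: AES-GCM with a fresh random nonce gives confidentiality
    # and integrity; the nonce must never repeat under the same key.
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, msg, None)
    print(AESGCM(key).decrypt(nonce, ct, None))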