Showing 1 - 10 of 261 for search: '"Gao, Yansong"'
Author:
Abuadbba, Alsharif, Rhodes, Nicholas, Moore, Kristen, Sabir, Bushra, Wang, Shuo, Gao, Yansong
Deep learning solutions in critical domains like autonomous vehicles, facial recognition, and sentiment analysis require caution due to the severe consequences of errors. Research shows these models are vulnerable to adversarial attacks, such as data…
External link:
http://arxiv.org/abs/2407.01260
Author:
Zhai, Shengfang, Chen, Huanran, Dong, Yinpeng, Li, Jiajun, Shen, Qingni, Gao, Yansong, Su, Hang, Liu, Yang
Text-to-image diffusion models have achieved tremendous success in the field of controllable image generation, while also raising issues of privacy leakage and data copyright. Membership inference arises in these contexts as a potential…
External link:
http://arxiv.org/abs/2405.14800
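To make the membership-inference concept concrete, here is a minimal, generic loss-threshold sketch: samples the model fits unusually well are flagged as likely training members. This is a textbook illustration only, with an assumed threshold; it is not the attack from the cited paper, which targets diffusion models specifically.

```python
# Generic loss-threshold membership inference (illustrative sketch).
# Assumption: training members tend to have lower loss than non-members,
# so a simple threshold separates the two groups.

def infer_membership(losses, threshold=0.5):
    # Predict "member" (True) when the sample's loss falls below the threshold.
    return [loss < threshold for loss in losses]

member_losses = [0.10, 0.20, 0.15]     # model fits its training data well
nonmember_losses = [0.90, 1.20, 0.80]  # unseen data typically has higher loss
print(infer_membership(member_losses + nonmember_losses))
```

In practice the threshold would be calibrated on shadow models or held-out data rather than fixed by hand.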
Personal digital data is a critical asset, and governments worldwide have enacted laws and regulations to protect data privacy. Data users have been granted the right to have their data forgotten. In the course of machine learning (ML), the…
External link:
http://arxiv.org/abs/2403.08254
The proliferation of cloud computing has greatly spurred the popularity of outsourced database storage and management, in which the cloud holding outsourced databases can process database queries on demand. Among others, skyline queries play an important…
External link:
http://arxiv.org/abs/2310.07148
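For readers unfamiliar with the term, a skyline query returns the points not dominated by any other point. A minimal plaintext sketch (the cited work concerns the much harder encrypted/outsourced setting; this only illustrates the query semantics, with lower-is-better on every dimension assumed):

```python
# Skyline (Pareto-dominance) query over plaintext tuples.
# p dominates q when p is no worse in every dimension and
# strictly better in at least one (lower values are better here).

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    # Keep each point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical hotel data: (price, distance-to-beach)
hotels = [(50, 3.0), (80, 1.0), (85, 1.5), (90, 0.5)]
print(skyline(hotels))  # (85, 1.5) is dominated by (80, 1.0)
```

The quadratic scan above is only for clarity; real systems use index-based or divide-and-conquer skyline algorithms.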
Author:
Ma, Hua, Wang, Shang, Gao, Yansong, Zhang, Zhi, Qiu, Huming, Xue, Minhui, Abuadbba, Alsharif, Fu, Anmin, Nepal, Surya, Abbott, Derek
All current backdoor attacks on deep learning (DL) models fall under the category of a vertical class backdoor (VCB) -- class-dependent. In VCB attacks, any sample from a class activates the implanted backdoor when the secret trigger is present. Existing…
External link:
http://arxiv.org/abs/2310.00542
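As a concrete illustration of the trigger mechanism mentioned above, here is a minimal sketch of stamping a patch trigger onto an input image. The patch size, value, and placement are illustrative assumptions, not the trigger design from the cited work.

```python
import numpy as np

def stamp_trigger(image, patch=3, value=1.0):
    # Copy the image and overwrite a small square in the top-left corner
    # with a fixed-value patch, which serves as the backdoor trigger.
    out = image.copy()
    out[:patch, :patch] = value
    return out

clean = np.zeros((8, 8))       # toy grayscale image
poisoned = stamp_trigger(clean)
print(poisoned[:3, :3].sum())  # 9.0 -- the 3x3 trigger patch
```

At training time an attacker pairs such stamped samples with a target label; at inference time, any input carrying the patch activates the implanted behavior.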
Author:
Gao, Yansong, Qiu, Huming, Zhang, Zhi, Wang, Binghui, Ma, Hua, Abuadbba, Alsharif, Xue, Minhui, Fu, Anmin, Nepal, Surya
Deep Neural Network (DNN) models are often deployed in resource-sharing clouds as Machine Learning as a Service (MLaaS) to provide inference services. To steal model architectures that are valuable intellectual property, a class of attacks has been…
External link:
http://arxiv.org/abs/2309.11894
Author:
Wang, Guohong, Ma, Hua, Gao, Yansong, Abuadbba, Alsharif, Zhang, Zhi, Kang, Wei, Al-Sarawi, Said F., Zhang, Gongxuan, Abbott, Derek
Image camouflage has been utilized to create clean-label poisoned images for implanting a backdoor into a DL model. However, a crucial limitation is that one attack/poisoned image can only fit a single input size of the DL model, which greatly…
External link:
http://arxiv.org/abs/2309.04036
Vertical federated learning (VFL) has recently emerged as an appealing distributed paradigm empowering multi-party collaboration for training high-quality models over vertically partitioned datasets. Gradient boosting has been widely adopted in VFL…
External link:
http://arxiv.org/abs/2305.12652
Denoising diffusion probabilistic models (DDPMs) are a class of powerful generative models. The past few years have witnessed the great success of DDPMs in generating high-fidelity samples. A significant limitation of DDPMs is the slow sampling…
External link:
http://arxiv.org/abs/2304.11446
Radio frequency fingerprint identification (RFFI) is a lightweight device authentication technique particularly desirable for power-constrained devices, e.g., Internet of Things (IoT) devices. Similar to biometric fingerprinting, RFFI exploits the…
External link:
http://arxiv.org/abs/2302.13724