Showing 1 - 10 of 16,228 results for search: '"HONG YUAN"'
Federated learning (FL) is an emerging collaborative learning paradigm that aims to protect data privacy. Unfortunately, recent works show that FL algorithms are vulnerable to serious data reconstruction attacks. However, existing works lack a theoretical…
External link:
http://arxiv.org/abs/2408.12119
Author:
Wang, Chien-Yao, Liao, Hong-Yuan Mark
This is a comprehensive review of the YOLO series of systems. Unlike previous literature surveys, this review article re-examines the characteristics of the YOLO series from the latest technical point of view. At the same time, we also analyze…
External link:
http://arxiv.org/abs/2408.09332
Federated Learning (FL) is a novel client-server distributed learning framework that can protect data privacy. However, recent works show that FL is vulnerable to poisoning attacks. Many defenses with robust aggregators (AGRs) have been proposed to mitigate…
External link:
http://arxiv.org/abs/2407.15267
Author:
Feng, Shuya, Mohammady, Meisam, Hong, Hanbin, Yan, Shenao, Kundu, Ashish, Wang, Binghui, Hong, Yuan
Differentially private federated learning (DP-FL) is a promising technique for collaborative model training while ensuring provable privacy for clients. However, optimizing the tradeoff between privacy and accuracy remains a critical challenge. To our…
External link:
http://arxiv.org/abs/2407.14710
Federated graph learning (FedGL) is an emerging federated learning (FL) framework that extends FL to learn graph data from diverse sources. FL for non-graph data has been shown to be vulnerable to backdoor attacks, which inject a shared backdoor trigger into…
External link:
http://arxiv.org/abs/2407.08935
Author:
Hong, Yuan, Fu, Zhen-Guo, Chen, Zhou-Wei-Yu, Chi, Feng, Wang, Zhigang, Zhang, Wei, Zhang, Ping
We study the resonant tunneling in double quantum dots (DQD) sandwiched between surfaces of the topological insulator (TI) Bi$_2$Te$_3$, which possess strong spin-orbit coupling (SOC) and $^{d}C_{3v}$ double group symmetry. Distinct from the spin-conserving…
External link:
http://arxiv.org/abs/2406.11165
Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor…
External link:
http://arxiv.org/abs/2406.06822
Author:
Yang, Qin, Mohammad, Meisam, Wang, Han, Payani, Ali, Kundu, Ashish, Shu, Kai, Yan, Yan, Hong, Yuan
Differentially Private Stochastic Gradient Descent (DP-SGD) and its variants have been proposed to ensure rigorous privacy for fine-tuning large-scale pre-trained language models. However, they rely heavily on the Gaussian mechanism, which may overly…
External link:
http://arxiv.org/abs/2405.18776
Author:
Deng, Jieren, Hong, Hanbin, Palmer, Aaron, Zhou, Xin, Bi, Jinbo, Mahmood, Kaleel, Hong, Yuan, Aguiar, Derek
Randomized smoothing has become a leading method for achieving certified robustness in deep classifiers against $l_{p}$-norm adversarial perturbations. Current approaches for achieving certified robustness, such as data augmentation with Gaussian noise…
External link:
http://arxiv.org/abs/2405.16036
Author:
Fu, Jie, Hong, Yuan, Ling, Xinpeng, Wang, Leixia, Ran, Xun, Sun, Zhiyu, Wang, Wendy Hui, Chen, Zhili, Cao, Yang
In recent years, privacy and security concerns in machine learning have pushed trusted federated learning to the forefront of research. Differential privacy has emerged as the de facto standard for privacy protection in federated learning due to its…
External link:
http://arxiv.org/abs/2405.08299