Showing 1 - 10 of 88
for search: '"Nguyen, Phuong Ha"'
Unreliable XOR Arbiter PUFs were broken by a machine learning attack that targets the underlying Arbiter PUFs individually. However, reliability information from the PUF was required for that attack. We show that, for the first time, a perfectly reliable … [see the sketch below]
External link:
http://arxiv.org/abs/2312.01256
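The entry above concerns machine-learning attacks on XOR Arbiter PUFs. As background only, here is a minimal sketch of the standard additive-delay model of an Arbiter PUF (the response is the sign of a weighted sum of a challenge-derived parity feature vector) and its k-XOR composition; the challenge length, XOR size, weights, and noise level are illustrative assumptions, not values from the paper.

# Hedged sketch: standard linear additive-delay model of an Arbiter PUF
# and its k-XOR composition. All parameters (n, k, noise level) are
# illustrative assumptions, not values taken from the paper above.
import numpy as np

rng = np.random.default_rng(0)

def phi(challenge):
    """Map an n-bit challenge to the usual (n+1)-dim parity feature vector."""
    c = 1 - 2 * challenge                      # {0,1} -> {+1,-1}
    prods = np.cumprod(c[::-1])[::-1]          # phi_i = prod_{j>=i} c_j
    return np.append(prods, 1.0)

def arbiter_response(w, challenge, noise_sigma=0.0):
    """Response of one Arbiter PUF: sign of its delay difference (optionally noisy)."""
    delta = w @ phi(challenge) + noise_sigma * rng.normal()
    return 1 if delta > 0 else 0

def xor_puf_response(weights, challenge, noise_sigma=0.0):
    """k-XOR Arbiter PUF: XOR of k independent Arbiter PUF responses."""
    bits = [arbiter_response(w, challenge, noise_sigma) for w in weights]
    return int(np.bitwise_xor.reduce(bits))

n, k = 64, 4                                   # challenge length, XOR size (assumed)
weights = rng.normal(size=(k, n + 1))          # delay parameters of the k chains
challenge = rng.integers(0, 2, size=n)
print(xor_puf_response(weights, challenge, noise_sigma=0.1))

Reliability-based attacks exploit response bits whose delay difference lies near zero and therefore flips under noise, which is why removing the need for reliability information, as claimed in the snippet above, is significant.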
Each round of Differentially Private Stochastic Gradient Descent (DPSGD) transmits a sum of clipped gradients, obfuscated with Gaussian noise, to a central server, which uses it to update a global model that often represents a deep neural network. … [see the sketch below]
External link:
http://arxiv.org/abs/2307.11939
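The DPSGD description above (a sum of per-example clipped gradients, obfuscated with Gaussian noise and sent to a central server) can be illustrated with a minimal sketch. The clipping norm C, noise multiplier, learning rate, and toy gradients below are assumptions for illustration, not settings from the paper.

# Hedged sketch of one DPSGD round as described above: clip each example's
# gradient to norm C, sum, add Gaussian noise, and let the "server" update
# the model. C, sigma, and the learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def clip(grad, C):
    """Scale grad down so its L2 norm is at most C (per-example clipping)."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, C / (norm + 1e-12))

def dpsgd_round(per_example_grads, C=1.0, noise_multiplier=1.1):
    """Noisy clipped-gradient sum that a worker would transmit to the server."""
    clipped_sum = sum(clip(g, C) for g in per_example_grads)
    noise = noise_multiplier * C * rng.normal(size=clipped_sum.shape)
    return clipped_sum + noise

def server_update(model, noisy_sum, batch_size, lr=0.1):
    """Central server averages the noisy sum and takes a gradient step."""
    return model - lr * noisy_sum / batch_size

model = np.zeros(5)
grads = [rng.normal(size=5) for _ in range(32)]     # toy per-example gradients
model = server_update(model, dpsgd_round(grads), batch_size=len(grads))
print(model)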
Author:
van Dijk, Marten, Nguyen, Phuong Ha
In federated learning, collaborative learning takes place among a set of clients who each want to remain in control of how their local training data is used; in particular, how can each client's local training data remain private? Differential privacy is …
External link:
http://arxiv.org/abs/2303.04676
Classical differentially private SGD (DP-SGD) implements individual clipping with random subsampling, which forces a mini-batch SGD approach. We provide a general differentially private algorithmic framework that goes beyond DP-SGD and allows any possible first-order … [see the sketch below]
External link:
http://arxiv.org/abs/2212.05796
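The entry above contrasts classical DP-SGD's individual (per-example) clipping with random subsampling against a more general framework for other first-order methods. As a hedged illustration only, and not necessarily the paper's actual framework, the sketch below contrasts individual clipping with clipping a single aggregated mini-batch gradient, one plausible way to decouple clipping from per-example processing.

# Hedged contrast between two clipping styles. Individual clipping: clip each
# per-example gradient, then sum (classical DP-SGD). Batch clipping: compute
# the batch gradient first, then clip it once -- shown only as one plausible
# generalization, not as the framework of the paper above.
import numpy as np

rng = np.random.default_rng(1)

def clip(g, C=1.0):
    return g * min(1.0, C / (np.linalg.norm(g) + 1e-12))

def individual_clipping(per_example_grads, C=1.0):
    return sum(clip(g, C) for g in per_example_grads)

def batch_clipping(per_example_grads, C=1.0):
    return clip(np.mean(per_example_grads, axis=0), C)

grads = [rng.normal(size=4) for _ in range(8)]
print(individual_clipping(grads))   # sensitivity bounded per example
print(batch_clipping(grads))        # sensitivity bounded per batch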
Recent years have witnessed a trend toward secure processor design in both academia and industry. Secure processors with hardware-enforced isolation can be a solid foundation for cloud computation in the future. However, due to recent side-channel attacks …
External link:
http://arxiv.org/abs/2201.01834
We introduce a multiple-target optimization framework for DP-SGD referred to as pro-active DP. In contrast to traditional DP accountants, which are used to track the expenditure of privacy budgets, the pro-active DP scheme allows one to a-priori select … [see the sketch below]
External link:
http://arxiv.org/abs/2102.09030
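The pro-active DP entry above is about fixing differential-privacy parameters a priori instead of tracking a budget after the fact. As a loosely related toy illustration, not the paper's scheme, the classical Gaussian-mechanism bound sigma >= sqrt(2 ln(1.25/delta)) * Delta / epsilon (valid for epsilon <= 1) can be inverted to choose a noise scale up front for a single release; real DP-SGD additionally requires subsampling and composition accounting, which this sketch ignores.

# Hedged toy illustration of picking noise a priori for a target (eps, delta)
# using the standard Gaussian-mechanism bound (valid for eps <= 1). This is
# NOT the pro-active DP scheme from the entry above, which targets DP-SGD.
import math

def gaussian_sigma(eps, delta, sensitivity=1.0):
    """Smallest sigma satisfying the classical Gaussian-mechanism bound."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / eps

# Decide the noise scale before running anything, from the targets alone.
print(gaussian_sigma(eps=0.5, delta=1e-5))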
Author:
van Dijk, Marten, Nguyen, Nhuong V., Nguyen, Toan N., Nguyen, Lam M., Tran-Dinh, Quoc, Nguyen, Phuong Ha
Hogwild! implements asynchronous Stochastic Gradient Descent (SGD), where multiple threads in parallel access a common repository containing training data, perform SGD iterations, and update shared state that represents a jointly learned (global) model. … [see the sketch below]
External link:
http://arxiv.org/abs/2010.14763
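The Hogwild! entry above describes multiple threads performing SGD iterations against shared state without coordination. Below is a minimal sketch of that lock-free pattern on a toy least-squares problem; the problem size, step size, and thread count are assumptions, and CPython's GIL serializes the numeric work, so this only illustrates the update structure, not parallel speed-up.

# Hedged sketch of the Hogwild! pattern described above: several threads read
# a shared parameter vector, compute an SGD step on a random example, and write
# the update back without any locking. Toy least-squares data; all sizes and
# step sizes are illustrative assumptions.
import threading
import numpy as np

data_rng = np.random.default_rng(0)
n, d = 1000, 10
X = data_rng.normal(size=(n, d))           # common repository of training data
y = X @ data_rng.normal(size=d)
w = np.zeros(d)                            # shared state: the global model

def worker(seed, num_iters=2000, lr=0.01):
    global w
    rng = np.random.default_rng(seed)      # per-thread sampling
    for _ in range(num_iters):
        i = rng.integers(n)                # pick one training example
        grad = (X[i] @ w - y[i]) * X[i]    # SGD gradient for that example
        w -= lr * grad                     # lock-free, in-place update of shared state

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(np.linalg.norm(X @ w - y) / n)       # residual after asynchronous training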
Author:
van Dijk, Marten, Nguyen, Nhuong V., Nguyen, Toan N., Nguyen, Lam M., Tran-Dinh, Quoc, Nguyen, Phuong Ha
The feasibility of federated learning is highly constrained by the server-client infrastructure in terms of network communication. Most newly launched smartphones and IoT devices are equipped with GPUs or sufficient computing hardware to run powerful …
External link:
http://arxiv.org/abs/2007.09208
Many defenses have recently been proposed at venues like NIPS, ICML, ICLR, and CVPR. These defenses are mainly focused on mitigating white-box attacks and do not properly examine black-box attacks. In this paper, we expand upon the analysis of these …
External link:
http://arxiv.org/abs/2006.10876
Author:
Pham, Nhan H., Nguyen, Lam M., Phan, Dzung T., Nguyen, Phuong Ha, van Dijk, Marten, Tran-Dinh, Quoc
Published in:
Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR 108:374-385, 2020
We propose a novel hybrid stochastic policy gradient estimator by combining an unbiased policy gradient estimator, the REINFORCE estimator, with a biased one, an adapted SARAH estimator, for policy optimization. The hybrid policy gradient estimator … [see the sketch below]
External link:
http://arxiv.org/abs/2003.00430
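The last entry combines an unbiased REINFORCE estimator with a biased, SARAH-style recursive estimator. Below is a hedged sketch of a hybrid recursion of the general form v_t = beta * u_t + (1 - beta) * (v_{t-1} + g(x_t; xi_t) - g(x_{t-1}; xi_t)), shown on a plain stochastic quadratic rather than on policy optimization; the trajectories, REINFORCE log-probability terms, and any importance weighting from the paper are omitted, and beta, the objective, and the step size are assumptions.

# Hedged sketch of a hybrid stochastic gradient estimator mixing an unbiased
# estimate u_t with a biased SARAH-style recursive difference, on a toy
# stochastic quadratic. Not the paper's policy-gradient algorithm; beta and
# the step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x_star = np.array([3.0, -2.0])           # minimizer of the toy objective

def draw_sample():
    """One stochastic sample xi ~ N(x_star, I)."""
    return x_star + rng.normal(size=2)

def grad_at(x, xi):
    """Gradient at x of the per-sample loss f(x; xi) = ||x - xi||^2 / 2."""
    return x - xi

beta, lr = 0.2, 0.1
x_prev = np.zeros(2)
v = grad_at(x_prev, draw_sample())       # plain stochastic gradient to start
x = x_prev - lr * v

for _ in range(200):
    u = grad_at(x, draw_sample())        # unbiased part (REINFORCE-like role)
    xi = draw_sample()                   # shared sample for the SARAH-style difference
    v = beta * u + (1.0 - beta) * (v + grad_at(x, xi) - grad_at(x_prev, xi))
    x_prev, x = x, x - lr * v            # gradient step with the hybrid estimator

print(x)                                 # should end up close to x_star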