Showing 1 - 10 of 315 results for search: '"Pan Zhisong"'
Federated learning (FL) is a widely employed distributed paradigm for collaboratively training machine learning models from multiple clients without sharing local data. In practice, FL encounters challenges in dealing with partial client participation …
External link:
http://arxiv.org/abs/2310.05495
Due to its simplicity and efficiency, the first-order gradient method has been extensively employed in training neural networks. Although the optimization problem of a neural network is non-convex, recent research has proved that the first-order method …
External link:
http://arxiv.org/abs/2208.03941
A covert attack method often used by APT organizations is the DNS tunnel, which passes information by constructing C2 networks. These organizations often frequently change domain names and server IP addresses to evade monitoring, which …
External link:
http://arxiv.org/abs/2207.06641
Author:
Ma, Xin, Bao, Renyi, Jiang, Jinpeng, Liu, Yang, Jiang, Arthur, Yan, Jun, Liu, Xin, Pan, Zhisong
In this work, we propose FedSSO, a server-side second-order optimization method for federated learning (FL). In contrast to previous works in this direction, we employ a server-side approximation for the Quasi-Newton method without requiring any training …
External link:
http://arxiv.org/abs/2206.09576
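FedSSO's exact server-side construction is not shown in this snippet. As general Quasi-Newton background only (the function names, fixed step size, and quadratic test objective are illustrative, not the paper's algorithm), a server could maintain a standard BFGS inverse-Hessian approximation from aggregated gradients:

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the inverse-Hessian approximation H,
    given parameter change s and gradient change y."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
        + rho * np.outer(s, s)

def server_quasi_newton(grad_fn, w0, lr=0.5, steps=20):
    """Sketch of a server applying w <- w - lr * H @ g, where g stands in
    for an aggregated gradient and H is updated only on the server."""
    w = np.asarray(w0, dtype=float).copy()
    H = np.eye(len(w))
    g = grad_fn(w)
    for _ in range(steps):
        w_new = w - lr * H @ g
        g_new = grad_fn(w_new)
        s, y = w_new - w, g_new - g
        if y @ s > 1e-12:          # curvature guard keeps H positive definite
            H = bfgs_update(H, s, y)
        w, g = w_new, g_new
    return w
```

The curvature check `y @ s > 0` is the usual condition for the BFGS update to preserve positive definiteness of `H`.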
Existing network attack and defense methods can be regarded as a game, but most such games involve only the network domain, not multiple-domain cyberspace. To address this challenge, this paper proposes a multiple domain cyberspace attack and defense …
External link:
http://arxiv.org/abs/2205.10990
In general, multiple domain cyberspace security assessments can be implemented by reasoning about users' permissions. However, while existing methods include some information from the physical and social domains, they do not provide a comprehensive representation …
External link:
http://arxiv.org/abs/2205.07502
Momentum methods, including heavy-ball (HB) and Nesterov's accelerated gradient (NAG), are widely used in training neural networks for their fast convergence. However, there is a lack of theoretical guarantees for their convergence and acceleration …
External link:
http://arxiv.org/abs/2204.08306
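The two momentum methods named in the abstract differ only in where the gradient is evaluated; a minimal sketch of both updates (the quadratic objective used below is illustrative, not from the paper):

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, mu=0.9, steps=200):
    """Heavy-ball (Polyak) momentum: gradient taken at the current point."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for _ in range(steps):
        v = mu * v - lr * grad(x)
        x = x + v
    return x

def nesterov(grad, x0, lr=0.1, mu=0.9, steps=200):
    """Nesterov's accelerated gradient: gradient taken at the
    look-ahead point x + mu * v."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for _ in range(steps):
        v = mu * v - lr * grad(x + mu * v)
        x = x + v
    return x
```

The only difference is the argument of `grad`: NAG's look-ahead evaluation is what yields its accelerated rate on convex problems.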
Published in:
In Knowledge-Based Systems, 9 October 2024, Vol. 301
Deep neural networks are vulnerable to adversarial examples, which can fool deep models by adding subtle perturbations. Although existing attacks have achieved promising results, there is still a long way to go in generating transferable adversarial …
External link:
http://arxiv.org/abs/2201.00097
We introduce a three-stage pipeline: resized-diverse-inputs (RDIM), diversity-ensemble (DEM), and region fitting, which work together to generate transferable adversarial examples. We first explore the internal relationship between existing attacks, and …
External link:
http://arxiv.org/abs/2112.06011