Showing 1 - 10 of 18 for search: '"Mhamdi, El Mahdi El"'
Author:
Scheid, Antoine, Tiapkin, Daniil, Boursier, Etienne, Capitaine, Aymeric, Mhamdi, El Mahdi El, Moulines, Eric, Jordan, Michael I., Durmus, Alain
This work considers a repeated principal-agent bandit game, where the principal can only interact with her environment through the agent. The principal and the agent have misaligned objectives and the choice of action is only left to the agent. However …
External link:
http://arxiv.org/abs/2403.03811
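The setting in the snippet above, where only the agent picks the action and the principal can merely offer transfers, can be sketched in a toy one-round instance (the arm values and the incentive rule here are hypothetical illustrations, not the paper's model):

```python
import numpy as np

# Hypothetical toy instance: 3 arms, with the principal's and the agent's
# mean rewards disagreeing on which arm is best.
principal_reward = np.array([1.0, 0.2, 0.1])
agent_reward     = np.array([0.1, 0.2, 1.0])

def agent_choice(incentives):
    # The agent alone chooses the arm, maximizing its own reward plus
    # whatever transfer the principal attaches to each arm.
    return int(np.argmax(agent_reward + incentives))

# With no incentives, the agent plays its own favorite arm (arm 2).
print(agent_choice(np.zeros(3)))

# The principal pays just enough on arm 0 to flip the agent's choice.
gap = agent_reward[2] - agent_reward[0]
incentives = np.zeros(3)
incentives[0] = gap + 0.01
print(agent_choice(incentives))
```

The point of the sketch is that the principal never acts directly: it can only reshape the agent's objective through transfers.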
Generative adversarial networks (GANs) are pairs of artificial neural networks that are trained one against the other. The outputs from a generator are mixed with the real-world inputs to the discriminator and both networks are trained until an equilibrium …
External link:
http://arxiv.org/abs/2006.04720
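The alternating training described in the snippet above can be illustrated with a deliberately tiny sketch: a one-dimensional linear generator against a logistic discriminator, with gradients written out by hand. All parameters and data here are toy choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup: real data ~ N(4, 1); generator g(z) = a*z + b;
# discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, n = 0.03, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(3000):
    real = rng.normal(4.0, 1.0, n)
    z = rng.normal(0.0, 1.0, n)

    # Discriminator step: generator outputs are mixed with real samples,
    # and D minimizes binary cross-entropy (label 1 = real, 0 = fake).
    fake = a * z + b
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    p = sigmoid(w * x + c)
    w -= lr * np.mean((p - y) * x)   # dBCE/dw
    c -= lr * np.mean(p - y)         # dBCE/dc

    # Generator step: minimize -log D(g(z)) (non-saturating loss).
    fake = a * z + b
    p = sigmoid(w * fake + c)
    dx = -(1.0 - p) * w              # d(-log p)/d fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

print(round(b, 2))  # the generator's offset drifts toward the real mean 4.0
```

At equilibrium the discriminator can no longer tell the two batches apart, which is exactly the stopping condition the snippet alludes to.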
Machine Learning (ML) solutions are nowadays distributed and are prone to various types of component failures, which can be encompassed in so-called Byzantine behavior. This paper introduces LiuBei, a Byzantine-resilient ML algorithm that does not trust …
External link:
http://arxiv.org/abs/1911.07537
We address the problem of correcting group discriminations within a score function, while minimizing the individual error. Each group is described by a probability density function on the set of profiles. We first solve the problem analytically in the …
External link:
http://arxiv.org/abs/1806.02510
We show that when a third party, the adversary, steps into the two-party setting (agent and operator) of safely interruptible reinforcement learning, a trade-off has to be made between the probability of following the optimal policy in the limit, and …
External link:
http://arxiv.org/abs/1805.11447
Asynchronous distributed machine learning solutions have proven very effective so far, but always assuming perfectly functioning workers. In practice, some of the workers can however exhibit Byzantine behavior, caused by hardware failures, software bugs …
External link:
http://arxiv.org/abs/1802.07928
While machine learning is going through an era of celebrated success, concerns have been raised about the vulnerability of its backbone: stochastic gradient descent (SGD). Recent approaches have been proposed to ensure the robustness of distributed SGD …
External link:
http://arxiv.org/abs/1802.07927
A standard belief on emerging collective behavior is that it emerges from simple individual rules. Most of the mathematical research on such collective behavior starts from imperative individual rules, like always go to the center. But how could an …
External link:
http://arxiv.org/abs/1802.07834
With the development of neural-network-based machine learning and its usage in mission-critical applications, voices are rising against the "black box" aspect of neural networks, as it becomes crucial to understand their limits and capabilities …
External link:
http://arxiv.org/abs/1707.08167
Author:
Mhamdi, El Mahdi El, Guerraoui, Rachid
We view a neural network as a distributed system of which neurons can fail independently, and we evaluate its robustness in the absence of any (recovery) learning phase. We give tight bounds on the number of neurons that can fail without harming the …
External link:
http://arxiv.org/abs/1706.08884
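The "neurons as crash-prone components" view in the snippet above can be made concrete with a toy experiment: zero out random hidden neurons of a fixed network (no retraining, matching the no-recovery assumption) and measure how far the output moves. The network and its random weights below are illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny fixed two-layer network with arbitrary illustrative weights.
W1 = rng.normal(0, 0.3, (64, 8))   # 8 inputs -> 64 hidden neurons
W2 = rng.normal(0, 0.3, (1, 64))   # 64 hidden -> 1 output

def forward(x, failed=()):
    h = np.maximum(0.0, W1 @ x)    # ReLU hidden layer
    h[list(failed)] = 0.0          # crashed neurons emit 0; no relearning
    return float(W2 @ h)

x = rng.normal(0, 1, 8)
baseline = forward(x)
for k in (1, 4, 16):
    failed = rng.choice(64, size=k, replace=False)
    print(k, abs(forward(x, failed) - baseline))
```

Measuring this deviation as a function of the number of failed neurons is the empirical counterpart of the bounds the abstract describes.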