Showing 1 - 10 of 82 for search: '"Tommasi, Marc"'
Authorship obfuscation aims to disguise the identity of an author within a text by altering the writing style, vocabulary, syntax, and other linguistic features associated with the text author. This alteration needs to balance privacy and utility. …
External link:
http://arxiv.org/abs/2407.21630
Rényi Pufferfish Privacy: General Additive Noise Mechanisms and Privacy Amplification by Iteration
Pufferfish privacy is a flexible generalization of differential privacy that allows modeling arbitrary secrets and the adversary's prior knowledge about the data. Unfortunately, designing general and tractable Pufferfish mechanisms that do not compromise …
External link:
http://arxiv.org/abs/2312.13985
This paper presents a new generalization error analysis for Decentralized Stochastic Gradient Descent (D-SGD) based on algorithmic stability. The obtained results overhaul a series of recent works that suggested an increased instability due to decentralization …
External link:
http://arxiv.org/abs/2306.02939
We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result …
External link:
http://arxiv.org/abs/2210.16242
Author:
Terrail, Jean Ogier du, Ayed, Samy-Safwan, Cyffers, Edwige, Grimberg, Felix, He, Chaoyang, Loeb, Regis, Mangold, Paul, Marchand, Tanguy, Marfoq, Othmane, Mushtaq, Erum, Muzellec, Boris, Philippenko, Constantin, Silva, Santiago, Teleńczuk, Maria, Albarqouni, Shadi, Avestimehr, Salman, Bellet, Aurélien, Dieuleveut, Aymeric, Jaggi, Martin, Karimireddy, Sai Praneeth, Lorenzi, Marco, Neglia, Giovanni, Tommasi, Marc, Andreux, Mathieu
Federated Learning (FL) is a novel approach enabling several clients holding sensitive data to collaboratively train machine learning models, without centralizing data. The cross-silo FL setting corresponds to the case of few (2–50) reliable clients …
External link:
http://arxiv.org/abs/2210.04620
We consider an online estimation problem involving a set of agents. Each agent has access to a (personal) process that generates samples from a real-valued distribution and seeks to estimate its mean. We study the case where some of the distributions …
External link:
http://arxiv.org/abs/2208.11530
In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the worst-case utility of DP-ERM reduces polynomially as the dimension increases. This is a major obstacle to privately learning large machine learning models …
External link:
http://arxiv.org/abs/2207.01560
One of the key challenges in decentralized and federated learning is to design algorithms that efficiently deal with highly heterogeneous data distributions across agents. In this paper, we revisit the analysis of the popular Decentralized Stochastic Gradient Descent (D-SGD) algorithm …
External link:
http://arxiv.org/abs/2204.04452
Author:
Shamsabadi, Ali Shahin, Srivastava, Brij Mohan Lal, Bellet, Aurélien, Vauquier, Nathalie, Vincent, Emmanuel, Maouche, Mohamed, Tommasi, Marc, Papernot, Nicolas
Sharing real-world speech utterances is key to the training and deployment of voice-based services. However, it also raises privacy risks as speech contains a wealth of personal data. Speaker anonymization aims to remove speaker information from a speech signal …
External link:
http://arxiv.org/abs/2202.11823
Author:
Mdhaffar, Salima, Bonastre, Jean-François, Tommasi, Marc, Tomashenko, Natalia, Estève, Yannick
The widespread availability of powerful personal devices capable of collecting their users' voice has opened the opportunity to build speaker-adapted automatic speech recognition (ASR) systems or to participate in collaborative learning of ASR. In both cases, personalized …
External link:
http://arxiv.org/abs/2111.04194