Showing 1 - 10 of 20 for search: '"Patrini, Giorgio"'
We present SEALion: an extensible framework for privacy-preserving machine learning with homomorphic encryption. It allows one to learn deep neural networks that can be seamlessly utilized for prediction on encrypted data. The framework consists of … (a toy encrypted-prediction sketch follows the link below).
External link:
http://arxiv.org/abs/1904.12840
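As a toy illustration of the idea behind prediction on encrypted data (not SEALion's actual API, which the snippet does not show), the following self-contained Python sketch uses a Paillier-style additively homomorphic scheme: the client encrypts its features, the server computes a linear score directly on ciphertexts, and only the client can decrypt the result. The helper names (keygen, encrypt, decrypt) and key sizes are mine and toy-scale only.

import random
from math import gcd

def keygen(p=1789, q=1907):
    # Toy primes; real deployments use moduli of 2048 bits or more.
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                           # valid because g = n + 1
    return (n,), (n, lam, mu)                      # public key, private key

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen()
n2 = pub[0] * pub[0]

x = [3, 1, 4]                            # client's private features
w = [2, 5, 7]                            # server's plaintext weights
cts = [encrypt(pub, xi) for xi in x]     # client sends only ciphertexts

# Server: multiplying ciphertexts adds plaintexts and exponentiation scales
# them, so the product of cts[i]**w[i] encrypts the dot product w . x.
score_ct = 1
for ci, wi in zip(cts, w):
    score_ct = score_ct * pow(ci, wi, n2) % n2

assert decrypt(priv, score_ct) == sum(wi * xi for wi, xi in zip(w, x))  # 39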
Differentially private learning on real-world data poses challenges for standard machine learning practice: privacy guarantees are difficult to interpret, hyperparameter tuning on private data reduces the privacy budget, and ad-hoc privacy attacks … (a background sketch of the Laplace mechanism follows the link below).
External link:
http://arxiv.org/abs/1812.02890
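The snippet cuts off before the paper's actual proposals, so purely as background on the guarantee being discussed: the classic Laplace mechanism makes a statistic epsilon-differentially private by adding noise scaled to the statistic's sensitivity. A minimal sketch; function and variable names are mine.

import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    # Smaller epsilon means a stronger privacy guarantee and more noise.
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
ages = np.array([34, 51, 29, 42, 60])
# A counting query changes by at most 1 when one record is added or removed,
# so its sensitivity is 1.
private_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5, rng=rng)
print(private_count)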
Author:
Patrini, Giorgio, Berg, Rianne van den, Forré, Patrick, Carioni, Marcello, Bhargav, Samarth, Welling, Max, Genewein, Tim, Nielsen, Frank
Optimal transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show that minimizing the p-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained minimization … (the distance is written out after the link below).
External link:
http://arxiv.org/abs/1810.01118
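For reference, the p-Wasserstein distance named in the snippet is the optimal-transport cost over all couplings of the data distribution P_X and the generator distribution P_G; the equivalent unconstrained objective the sentence was about to state is not recoverable from the snippet. In LaTeX:

W_p(P_X, P_G) = \Bigl( \inf_{\gamma \in \Gamma(P_X, P_G)} \mathbb{E}_{(x, y) \sim \gamma}\, d(x, y)^p \Bigr)^{1/p},

where \Gamma(P_X, P_G) is the set of joint distributions with marginals P_X and P_G.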
Author:
Nock, Richard, Hardy, Stephen, Henecka, Wilko, Ivey-Law, Hamish, Patrini, Giorgio, Smith, Guillaume, Thorne, Brian
Consider two data providers, each maintaining records of different feature sets about common entities. They aim to learn a linear model over the whole set of features. This problem of federated learning over vertically partitioned data includes a crucial … (a sketch of the partitioned-score decomposition follows the link below).
External link:
http://arxiv.org/abs/1803.04035
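To make "vertically partitioned" concrete: each provider holds a different slice of the feature vector for the same entities, so a linear score decomposes into per-provider partial scores that simply add up. A plain Python sketch of that decomposition with made-up names; the paper's protocol additionally handles entity resolution and privacy, which this omits.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4))        # full feature matrix (never held by one party)
X_a, X_b = X[:, :2], X[:, 2:]      # provider A's columns, provider B's columns
w_a = np.array([0.5, -1.0])        # A's block of the linear model
w_b = np.array([2.0, 0.1])         # B's block of the linear model

partial_a = X_a @ w_a              # computed locally by A
partial_b = X_b @ w_b              # computed locally by B
scores = partial_a + partial_b     # equals the score of the full model

assert np.allclose(scores, X @ np.concatenate([w_a, w_b]))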
Author:
Hardy, Stephen, Henecka, Wilko, Ivey-Law, Hamish, Nock, Richard, Patrini, Giorgio, Smith, Guillaume, Thorne, Brian
Consider two data providers, each maintaining private records of different feature sets about common entities. They aim to learn a linear model jointly in a federated setting, namely, data is local and a shared model is trained from locally computed … (a generic secure-aggregation sketch follows the link below).
External link:
http://arxiv.org/abs/1711.10677
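The snippet ends at "locally computed", so as a generic illustration only (not this paper's protocol): one standard way to combine locally computed gradients without revealing them individually is pairwise additive masking, where shared masks cancel in the sum.

import numpy as np

rng = np.random.default_rng(2)
grad_a = np.array([0.3, -0.7])     # party A's local gradient
grad_b = np.array([-0.1, 0.4])     # party B's local gradient

mask = rng.normal(size=2)          # secret mask agreed between A and B
sent_a = grad_a + mask             # A uploads only this
sent_b = grad_b - mask             # B uploads only this

aggregate = sent_a + sent_b        # coordinator sees masked values only
assert np.allclose(aggregate, grad_a + grad_b)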
Optimal transport is a powerful framework for computing distances between probability distributions. We unify the two main approaches to optimal transport, namely Monge-Kantorovitch and Sinkhorn-Cuturi, into what we define as Tsallis regularized optimal transport … (the regularized objective is written out after the link below).
External link:
http://arxiv.org/abs/1609.04495
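For context: Sinkhorn-Cuturi regularization penalizes the transport plan's Shannon entropy, and the Tsallis q-entropy generalizes it, recovering Shannon as q tends to 1 and, per the unification the snippet describes, connecting back to unregularized Monge-Kantorovitch as the regularizer vanishes. With cost matrix C and plan gamma, one standard way to write this (the paper's exact normalization is not shown in the snippet):

W_\lambda(P, Q) = \min_{\gamma \in \Gamma(P, Q)} \langle \gamma, C \rangle - \lambda\, H_q(\gamma),
\qquad
H_q(\gamma) = \frac{1}{q - 1} \Bigl( 1 - \sum_{ij} \gamma_{ij}^{\,q} \Bigr).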
We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture … (a loss-correction sketch follows the link below).
External link:
http://arxiv.org/abs/1609.03683
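The snippet cuts off before the two procedures are named; on the assumption that they resemble standard noise-transition corrections, here is a sketch of a "forward"-style correction: the model's class probabilities are pushed through a known transition matrix T (T[i, j] = probability that true class i is observed as class j) before the cross-entropy against the noisy labels, so the network is still trained to predict clean classes.

import numpy as np

def forward_corrected_ce(probs, noisy_labels, T):
    # probs: (n, c) softmax outputs; T: (c, c) row-stochastic noise matrix.
    noisy_probs = probs @ T    # predicted distribution over observed labels
    n = probs.shape[0]
    return -np.mean(np.log(noisy_probs[np.arange(n), noisy_labels] + 1e-12))

T = np.array([[0.8, 0.2],      # class 0 is flipped 20% of the time
              [0.3, 0.7]])     # class 1 is flipped 30% of the time
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
noisy_labels = np.array([0, 1])
print(forward_corrected_ce(probs, noisy_labels, T))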
It is usual to consider data protection and learnability as conflicting objectives. This is not always the case: we show how to jointly control inference, seen as the attack, and learnability by a noise-free process that mixes training examples … (a generic mixing sketch follows the link below).
External link:
http://arxiv.org/abs/1606.04160
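The snippet says only that training examples are mixed by a noise-free process; as a generic illustration of example mixing (not the paper's specific process), convex combinations of same-class pairs blur individual records while preserving class structure.

import numpy as np

rng = np.random.default_rng(3)

def mix_pairs(X, alpha=0.5):
    # Pair rows at random and blend each pair; all rows here share one class,
    # so labels are unchanged. Assumes an even number of rows.
    Xp = X[rng.permutation(len(X))]
    return alpha * Xp[0::2] + (1 - alpha) * Xp[1::2]

X_class0 = rng.normal(size=(6, 2))   # six examples of a single class
mixed = mix_pairs(X_class0)
print(mixed.shape)                   # (3, 2): each row blends two originals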
Consider the following data fusion scenario: two datasets/peers contain the same real-world entities described using partially shared features, e.g. banking and insurance company records of the same customer base. Our goal is to learn a classifier …
External link:
http://arxiv.org/abs/1603.04002
We prove that the empirical risk of most well-known loss functions factors into a linear term aggregating all labels with a term that is label free, and can further be expressed by sums of the loss. This holds true even for non-smooth, non-convex losses … (the factorization is written out after the link below).
External link:
http://arxiv.org/abs/1602.02450
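The claimed factorization can be made concrete for losses satisfying \ell(v) - \ell(-v) = -v, such as the logistic loss (\log(1 + e^{-v}) - \log(1 + e^{v}) = -v); whether this matches the paper's exact statement is not visible in the snippet. Splitting \ell into even and odd parts, with labels y_i \in \{-1, +1\}:

\frac{1}{m} \sum_{i=1}^{m} \ell(y_i\, \theta^\top x_i)
=
\underbrace{\frac{1}{2m} \sum_{i=1}^{m} \bigl( \ell(\theta^\top x_i) + \ell(-\theta^\top x_i) \bigr)}_{\text{label free}}
\;-\; \frac{1}{2}\, \theta^\top \mu,
\qquad
\mu = \frac{1}{m} \sum_{i=1}^{m} y_i x_i,

so the labels enter the risk only through the linear term \theta^\top \mu.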