Showing 1 - 10 of 23 for search: '"Genevay, Aude"'
Author:
Korotin, Alexander, Li, Lingxiao, Genevay, Aude, Solomon, Justin, Filippov, Alexander, Burnaev, Evgeny
Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport -- specifically, computat…
External link:
http://arxiv.org/abs/2106.01954
Author:
Mokrov, Petr, Korotin, Alexander, Li, Lingxiao, Genevay, Aude, Solomon, Justin, Burnaev, Evgeny
Wasserstein gradient flows provide a powerful means of understanding and solving many diffusion equations. Specifically, Fokker-Planck equations, which model the diffusion of probability measures, can be understood as gradient descent over entropy fu…
External link:
http://arxiv.org/abs/2106.00736
Author:
Genevay, Aude
Entropy-regularized Optimal Transport (EOT) makes it possible to define Sinkhorn Divergences (SD), a new class of distances between probability measures based on EOT. These interpolate between two other distanc…
External link:
http://www.theses.fr/2019PSLED002/document
Published in:
PMLR 161:290-300, 2021
Optimal transport (OT) is a popular tool in machine learning to compare probability measures geometrically, but it comes with substantial computational burden. Linear programming algorithms for computing OT distances scale cubically in the size of th…
External link:
http://arxiv.org/abs/2102.12731
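The cubic scaling mentioned above is the usual motivation for entropic regularization: Sinkhorn iterations reduce each step to matrix-vector products with the Gibbs kernel. A minimal NumPy sketch of the classic algorithm (illustrative only, not code from the paper; `sinkhorn` and the toy measures are made up for this example):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropic OT via Sinkhorn iterations (illustrative sketch).

    a, b : source/target weights (each summing to 1)
    C    : cost matrix between support points
    Returns the transport plan P and the regularized cost <P, C>.
    """
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)           # scale to match column marginals
        u = a / (K @ v)             # scale to match row marginals
    P = u[:, None] * K * v[None, :]
    return P, float((P * C).sum())

# toy example: two uniform measures on the same 3 points
a = np.full(3, 1 / 3)
b = np.full(3, 1 / 3)
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2   # quadratic cost
P, cost = sinkhorn(a, b, C)
```

Each iteration costs only two matrix-vector products, which is what makes the entropic approach attractive at scale compared with exact linear-programming solvers.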
Wasserstein barycenters provide a geometrically meaningful way to aggregate probability distributions, built on the theory of optimal transport. They are difficult to compute in practice, however, leading previous work to restrict their supports to f…
External link:
http://arxiv.org/abs/2008.12534
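For intuition on the object being computed, the one-dimensional case has a closed form: the Wasserstein-2 barycenter's quantile function is the weighted average of the inputs' quantile functions. A small NumPy sketch (illustrative, not from the paper; `barycenter_1d` is a hypothetical helper name):

```python
import numpy as np

def barycenter_1d(samples_list, weights, grid_size=100):
    """1D Wasserstein-2 barycenter of empirical measures via quantile averaging."""
    qs = np.linspace(0.0, 1.0, grid_size)
    # rows: one quantile function per input measure, evaluated on the grid
    quantiles = np.stack([np.quantile(s, qs) for s in samples_list])
    return weights @ quantiles   # samples of the barycenter's quantile function

xs = np.random.default_rng(0).normal(0.0, 1.0, 500)
ys = np.random.default_rng(1).normal(4.0, 1.0, 500)
bary = barycenter_1d([xs, ys], np.array([0.5, 0.5]))
# empirically, the barycenter of N(0,1) and N(4,1) concentrates around mean 2
```

In higher dimensions no such closed form exists, which is precisely why computing barycenters over free (unrestricted) supports is hard.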
Clustering is a fundamental unsupervised learning approach. Many clustering algorithms -- such as $k$-means -- rely on the Euclidean distance as a similarity measure, which is often not the most relevant metric for high-dimensional data such as image…
External link:
http://arxiv.org/abs/1910.09036
Optimal transport (OT) and maximum mean discrepancies (MMD) are now routinely used in machine learning to compare probability measures. We focus in this paper on \emph{Sinkhorn divergences} (SDs), a regularized variant of OT distances which can inter…
External link:
http://arxiv.org/abs/1810.02733
The proliferation of large data sets and Bayesian inference techniques motivates demand for better data sparsification. Coresets provide a principled way of summarizing a large dataset via a smaller one that is guaranteed to match the performance of…
External link:
http://arxiv.org/abs/1805.07412
This short article revisits some of the ideas introduced in arXiv:1701.07875 and arXiv:1705.07642 in a simple setup. This sheds some light on the connections between Variational Autoencoders (VAE), Generative Adversarial Networks (GAN) and Minimum Ka…
External link:
http://arxiv.org/abs/1706.01807
The ability to compare two degenerate probability distributions (i.e. two probability distributions supported on two distinct low-dimensional manifolds living in a much higher-dimensional space) is a crucial problem arising in the estimation of gener…
External link:
http://arxiv.org/abs/1706.00292