Showing 1 - 10 of 45 for the search: '"Majka, Mateusz B."'
Published in:
Transactions on Machine Learning Research (2024)
Gradient flows play a substantial role in addressing many machine learning problems. We examine the convergence in continuous-time of a \textit{Fisher-Rao} (Mean-Field Birth-Death) gradient flow in the context of solving convex-concave min-max games…
External link:
http://arxiv.org/abs/2405.15834
We study two variants of the mirror descent-ascent algorithm for solving min-max problems on the space of measures: simultaneous and sequential. We work under assumptions of convexity-concavity and relative smoothness of the payoff function with respect…
External link:
http://arxiv.org/abs/2402.08106
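As an illustrative finite-dimensional sketch of the simultaneous variant (not the measure-space algorithm of the paper): entropic mirror descent-ascent, i.e. multiplicative-weights updates on probability simplices, for the bilinear game $\min_x \max_y x^\top A y$. The step size, iteration count, and initial strategies below are arbitrary choices.

```python
import numpy as np

def simultaneous_mda(A, x0, y0, eta, steps):
    """Simultaneous entropic mirror descent-ascent for min_x max_y x^T A y
    on probability simplices. Both players update from the current pair;
    the averaged iterates approximate an equilibrium."""
    x, y = x0.copy(), y0.copy()
    x_sum, y_sum = np.zeros_like(x), np.zeros_like(y)
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x          # payoff gradients at (x, y)
        x = x * np.exp(-eta * gx)        # min player: mirror descent step
        y = y * np.exp(+eta * gy)        # max player: mirror ascent step
        x, y = x / x.sum(), y / y.sum()  # Bregman projection = normalisation
        x_sum += x
        y_sum += y
    return x_sum / steps, y_sum / steps

# Rock-paper-scissors: the unique equilibrium is uniform play, value 0.
A = np.array([[0.0, 1.0, -1.0], [-1.0, 0.0, 1.0], [1.0, -1.0, 0.0]])
x_bar, y_bar = simultaneous_mda(A, np.array([0.5, 0.3, 0.2]),
                                np.array([1/3, 1/3, 1/3]),
                                eta=0.02, steps=20_000)
```

While the last iterates of simultaneous updates may cycle around the equilibrium, the ergodic averages `x_bar`, `y_bar` converge to it, which is why the sketch returns averages.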
We show the $L^2$-Wasserstein contraction for the transition kernel of a discretised diffusion process, under a contractivity at infinity condition on the drift and a sufficiently high diffusivity requirement. This extends recent results that, under…
External link:
http://arxiv.org/abs/2310.15897
We investigate the convergence properties of a continuous-time optimization method, the \textit{Mean-Field Best Response} flow, for solving convex-concave min-max games with entropy regularization. We introduce suitable Lyapunov functions to establish…
External link:
http://arxiv.org/abs/2306.03033
We study optimal Markovian couplings of Markov processes, where the optimality is understood in terms of minimization of concave transport costs between the time-marginal distributions of the coupled processes. We provide explicit constructions of…
External link:
http://arxiv.org/abs/2210.11251
The Polyak-Lojasiewicz inequality (PLI) in $\mathbb{R}^d$ is a natural condition for proving convergence of gradient descent algorithms. In the present paper, we study an analogue of PLI on the space of probability measures $\mathcal{P}(\mathbb{R}^d)$…
External link:
http://arxiv.org/abs/2206.02774
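In $\mathbb{R}^d$, the PLI reads $\|\nabla f(x)\|^2 \ge 2\mu\,(f(x) - f^*)$ and yields linear convergence of gradient descent. A minimal sketch on a quadratic (the function, step size, and iteration count are illustrative choices, not from the paper):

```python
import numpy as np

def gradient_descent(grad, x0, lr, steps):
    """Plain gradient descent with a fixed step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# f(x) = 0.5 * x^T diag(1, 10) x satisfies the PLI with mu = 1
# (the smallest eigenvalue), so gradient descent with lr <= 1/L,
# where L = 10 is the largest eigenvalue, converges linearly to f* = 0.
d = np.array([1.0, 10.0])
f = lambda x: 0.5 * np.sum(d * x ** 2)
grad = lambda x: d * x
x = gradient_descent(grad, [3.0, -2.0], lr=0.1, steps=200)
```

Note that the PLI does not require convexity; it only controls how flat $f$ can be away from its minimum value, which is what makes it a natural condition to transplant to $\mathcal{P}(\mathbb{R}^d)$.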
Published in:
Stochastic Processes and their Applications, 179 (January 2025)
We study contractions of Markov chains on general metric spaces with respect to some carefully designed distance-like functions, which are comparable to the total variation and the standard $L^p$-Wasserstein distances for $p \ge 1$. We present explicit…
External link:
http://arxiv.org/abs/2109.00694
Constructions of numerous approximate sampling algorithms are based on the well-known fact that certain Gibbs measures are stationary distributions of ergodic stochastic differential equations (SDEs) driven by the Brownian motion. However, for some…
External link:
http://arxiv.org/abs/2007.02212
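The Brownian-driven construction mentioned above can be sketched with the unadjusted Langevin algorithm: an Euler-Maruyama discretisation of $\mathrm{d}X_t = -\nabla U(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t$, whose stationary law is the Gibbs measure $\propto e^{-U}$. The potential, step size, and run length below are illustrative assumptions; the discretisation also introduces an $O(h)$ bias in the invariant measure.

```python
import numpy as np

def ula(grad_U, x0, h, n_steps, rng):
    """Unadjusted Langevin algorithm in one dimension:
    X_{k+1} = X_k - h * grad U(X_k) + sqrt(2h) * N(0, 1)."""
    x = x0
    path = np.empty(n_steps)
    for k in range(n_steps):
        x = x - h * grad_U(x) + np.sqrt(2 * h) * rng.standard_normal()
        path[k] = x
    return path

# U(x) = x^2 / 2, so the target Gibbs measure is the standard Gaussian.
rng = np.random.default_rng(0)
samples = ula(lambda x: x, x0=0.0, h=0.01, n_steps=200_000, rng=rng)
samples = samples[10_000:]   # discard burn-in
```

The empirical mean and variance of the chain approximate those of the standard Gaussian target, up to discretisation bias and Monte Carlo error.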
Stochastic Gradient Algorithms (SGAs) are ubiquitous in computational statistics, machine learning and optimisation. Recent years have brought an influx of interest in SGAs, and the non-asymptotic analysis of their bias is by now well-developed. However…
External link:
http://arxiv.org/abs/2006.06102
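A minimal sketch of the object of study: constant-step-size SGD on a simple stochastic quadratic. With a fixed step the iterates do not converge to the minimiser but fluctuate around it, which is one source of the non-asymptotic bias discussed above. The objective, step size, and run length are illustrative assumptions.

```python
import numpy as np

def sgd(grad_est, theta0, lr, n_steps, rng):
    """Constant-step-size SGD with an unbiased stochastic gradient
    estimator; returns the whole trajectory of iterates."""
    theta = theta0
    traj = np.empty(n_steps)
    for k in range(n_steps):
        theta = theta - lr * grad_est(theta, rng)
        traj[k] = theta
    return traj

# Minimise E[0.5 * (theta - X)^2] with X ~ N(1, 1); the minimiser is 1.
# An unbiased stochastic gradient is theta - x for a single sample x.
rng = np.random.default_rng(0)
traj = sgd(lambda t, r: t - (1.0 + r.standard_normal()),
           theta0=5.0, lr=0.1, n_steps=50_000, rng=rng)
theta_bar = traj[1_000:].mean()   # Polyak-style average after burn-in
```

Averaging the iterates suppresses the stationary fluctuations; in this linear example the averaged iterate is close to the true minimiser, while a single late iterate still carries noise of order `sqrt(lr)`.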