Showing 1 - 10 of 154 for search: '"Douc, Randal"'
Author:
Moufad, Badr, Janati, Yazid, Bedin, Lisa, Durmus, Alain, Douc, Randal, Moulines, Eric, Olsson, Jimmy
Diffusion models have recently shown considerable potential in solving Bayesian inverse problems when used as priors. However, sampling from the resulting denoising posterior distributions remains a challenge as it involves intractable terms. To tackle …
External link:
http://arxiv.org/abs/2410.09945
We consider a state-space model (SSM) parametrized by some parameter $\theta$ and aim at performing joint parameter and state inference. A popular idea to carry out this task is to replace $\theta$ by a Markov chain $(\theta_t)_{t\geq 0}$ and then to …
External link:
http://arxiv.org/abs/2409.08928
Author:
Douc, Randal, Le Corff, Sylvain
This paper introduces a general framework for iterative optimization algorithms and establishes under general assumptions that their convergence is asymptotically geometric. We also prove that under appropriate assumptions, the rate of convergence can …
External link:
http://arxiv.org/abs/2302.12544
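The asymptotically geometric convergence described in the abstract above can be illustrated with a toy example; this is plain gradient descent on a quadratic, chosen here for illustration only, not the paper's general framework:

```python
# Hypothetical illustration: gradient descent on f(x) = x^2 / 2 with step
# size eta contracts the iterate by the constant factor (1 - eta) at every
# step, i.e. it converges geometrically.
def gradient_descent(x0, eta, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] - eta * xs[-1])  # gradient of x^2 / 2 is x
    return xs

xs = gradient_descent(x0=1.0, eta=0.1, n_steps=20)
# Successive error ratios |x_{k+1}| / |x_k| are all equal to 0.9:
rates = [xs[k + 1] / xs[k] for k in range(len(xs) - 1)]
```

A constant (or converging) error ratio is exactly what "geometric rate" means here.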
The Importance Markov chain is a novel algorithm bridging the gap between rejection sampling and importance sampling, moving from one to the other through a tuning parameter. Based on a modified sample of an instrumental Markov chain targeting an instrumental …
External link:
http://arxiv.org/abs/2207.08271
This article shows how coupled Markov chains that meet exactly after a random number of iterations can be used to generate unbiased estimators of the solutions of the Poisson equation. We establish connections to recently-proposed unbiased estimators …
External link:
http://arxiv.org/abs/2206.05691
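The coupling idea in the abstract can be sketched in the spirit of the recently-proposed unbiased estimators it mentions, on a hypothetical two-state chain (this estimates a stationary expectation, not the Poisson-equation solution itself, and is not the paper's construction):

```python
import random

random.seed(1)
A = [0.3, 0.8]                      # A[x] = P(next state = 1 | state x)
step = lambda x, u: 1 if u < A[x] else 0
h = lambda x: x                     # pi(h) = 0.3 / (0.3 + 0.2) = 0.6

def unbiased_estimate():
    # Two chains, one lagged by a step, driven by COMMON uniforms so they
    # meet exactly in finite time and stay together afterwards.
    x = random.randint(0, 1)        # X_0 from the initial distribution
    y = random.randint(0, 1)        # Y_0 from the same distribution
    est = h(x)
    x = step(x, random.random())    # advance X once, independently
    while x != y:                   # until X_t equals Y_{t-1} (meeting)
        est += h(x) - h(y)          # bias-correction terms
        u = random.random()         # shared randomness couples the moves
        x, y = step(x, u), step(y, u)
    return est

mean = sum(unbiased_estimate() for _ in range(20000)) / 20000
```

Averaging independent replicates recovers the stationary expectation without any burn-in bias, which is the property exact meeting buys.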
The present paper focuses on the problem of sampling from a given target distribution $\pi$ defined on some general state space. To this end, we introduce a novel class of non-reversible Markov chains, each chain being defined on an extended state space …
External link:
http://arxiv.org/abs/2201.05002
Published in:
In: Stochastic Processes and their Applications, May 2024, Vol. 171
Author:
Daudel, Kamélia, Douc, Randal
This paper focuses on $\alpha$-divergence minimisation methods for Variational Inference. More precisely, we are interested in algorithms optimising the mixture weights of any given mixture model, without any information on the underlying distribution …
External link:
http://arxiv.org/abs/2106.05114
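The $\alpha$-divergence minimised in the paper above can itself be illustrated numerically on discrete distributions (this shows only the divergence family, not the paper's mixture-weight algorithm): $D_\alpha(p\,\|\,q) = \frac{1}{\alpha(\alpha-1)}\left(\sum_i p_i^\alpha q_i^{1-\alpha} - 1\right)$, which recovers the KL divergence as $\alpha \to 1$.

```python
import math

def alpha_div(p, q, alpha):
    # Alpha-divergence for discrete distributions p and q.
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q))
    return (s - 1) / (alpha * (alpha - 1))

def kl(p, q):
    # Kullback-Leibler divergence KL(p || q), the alpha -> 1 limit.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p, q = [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]
d_near_one = alpha_div(p, q, 0.999)   # close to kl(p, q)
d_self = alpha_div(p, p, 0.5)         # zero: divergence of p from itself
```

Varying $\alpha$ trades off mass-covering against mode-seeking behaviour, which is why the family is attractive for Variational Inference.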
The class of observation-driven models (ODMs) includes many models of non-linear time series which, in a fashion similar to, yet different from, hidden Markov models (HMMs), involve hidden variables. Interestingly, in contrast to most HMMs, ODMs enjoy …
External link:
http://arxiv.org/abs/2106.05201
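A standard textbook instance of an ODM, used here purely for illustration, is the Poisson INGARCH(1,1) model: the hidden intensity $\lambda_t$ is a deterministic function of past observations, so the likelihood is available in closed form, unlike for generic HMMs:

```python
import math, random

random.seed(2)

def poisson(lam):
    # Knuth's inversion-by-multiplication Poisson sampler (stdlib only).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

# Poisson INGARCH(1,1): lambda_t = omega + a * lambda_{t-1} + b * Y_{t-1}.
omega, a, b = 1.0, 0.3, 0.4
lam, ys, ll = 1.0, [], 0.0
for _ in range(200):
    y = poisson(lam)
    # Exact Poisson log-pmf: the intensity depends only on past data,
    # so the log-likelihood accumulates in closed form while simulating.
    ll += y * math.log(lam) - lam - math.lgamma(y + 1)
    ys.append(y)
    lam = omega + a * lam + b * y
```

The closed-form likelihood recursion is the ODM property the abstract alludes to; for an HMM, the analogous computation would require integrating out the hidden states.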