Large Deviations for Empirical Measures of Self-Interacting Markov Chains
Author: Budhiraja, Amarjit; Waterbury, Adam; Zoubouloglou, Pavlos
Year: 2023
Document type: Working Paper
Description: Let $\Delta^o$ be a finite set and, for each probability measure $m$ on $\Delta^o$, let $G(m)$ be a transition probability kernel on $\Delta^o$. Fix $x_0 \in \Delta^o$ and consider the chain $\{X_n, \; n \in \mathbb{N}_0\}$ of $\Delta^o$-valued random variables such that $X_0=x_0$ and, given $X_0, \ldots, X_n$, the conditional distribution of $X_{n+1}$ is $G(L^{n+1})(X_n, \cdot)$, where $L^{n+1} = \frac{1}{n+1} \sum_{i=0}^{n} \delta_{X_i}$ is the empirical measure at instant $n$. Under conditions on $G$ we establish a large deviation principle for the empirical measure sequence $\{L^n, \; n \in \mathbb{N}\}$. As one application of this result we obtain large deviation asymptotics for the Aldous-Flannery-Palacios (1988) approximation scheme for quasistationary distributions of irreducible finite state Markov chains. The conditions on $G$ cover various other models of reinforced stochastic evolution as well, including certain vertex-reinforced and edge-reinforced random walks and a variant of the PageRank algorithm. The particular case where $G(m)$ does not depend on $m$ corresponds to the classical results of Donsker and Varadhan (1975) on large deviations of empirical measures of Markov processes. However, unlike in this classical setting, for the general self-interacting models considered here the rate function takes a very different form: it is typically non-convex and is given through a dynamical variational formula with an infinite-horizon discounted objective function. Comment: 54 pages
Database: arXiv
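The chain dynamics in the abstract can be sketched in a few lines: at each step the empirical measure $L^{n+1}$ of the trajectory so far is computed, and the next state is drawn from the kernel $G(L^{n+1})$ evaluated at the current state. Below is a minimal simulation sketch; the specific kernel `G` (interpolating a fixed irreducible kernel `P` with the empirical measure) is an illustrative toy choice, not a model from the paper.

```python
import numpy as np

def simulate(G, x0, n_steps, n_states, seed=0):
    """Simulate X_{n+1} ~ G(L^{n+1})(X_n, .), where L^{n+1} is the
    empirical measure of X_0, ..., X_n. Returns the final empirical measure."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_states)
    x = x0
    counts[x] += 1.0                      # L^1 = delta_{X_0}
    for _ in range(n_steps):
        L = counts / counts.sum()         # empirical measure L^{n+1}
        kernel = G(L)                     # measure-dependent kernel G(L^{n+1})
        x = rng.choice(n_states, p=kernel[x])
        counts[x] += 1.0
    return counts / counts.sum()

# Toy "reinforced" kernel (assumption: purely illustrative): mix a fixed
# irreducible kernel P with the current empirical measure, then renormalize.
P = np.array([[0.1, 0.9],
              [0.5, 0.5]])

def G(m):
    K = 0.8 * P + 0.2 * np.tile(m, (2, 1))
    return K / K.sum(axis=1, keepdims=True)

L_final = simulate(G, x0=0, n_steps=5000, n_states=2)
```

When `G(m)` ignores `m`, this reduces to an ordinary Markov chain and `L_final` approximates its stationary distribution, matching the Donsker-Varadhan special case noted in the abstract.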