Intrinsic convergence properties of entropic sampling algorithms

Authors: Belardinelli, Rolando Elio; Pereyra, Victor Daniel; Dickman, Ronald; Lourenco, Bruno Jeferson
Publication year: 2014
Document type: Working Paper
DOI: 10.1088/1742-5468/2014/07/P07007
Description: We study the convergence of the density of states and thermodynamic properties in three flat-histogram simulation methods: the Wang-Landau (WL) algorithm, the 1/t algorithm, and tomographic sampling (TS). In the first case the refinement parameter f is rescaled (f -> f/2) each time the flat-histogram condition is satisfied; in the second, f ~ 1/t after a suitable initial phase; in the third, f is constant (t denotes Monte Carlo time). To examine the intrinsic convergence properties of these methods, free of any complications associated with a specific model, we study a featureless entropy landscape, such that for each allowed energy E = 1,...,L there is exactly one state, i.e., g(E) = 1 for all E. Convergence of sampling corresponds to g(E,t) -> const. as t -> infinity, so the standard deviation sigma_g of g over energy values measures the overall sampling error. Neither the WL algorithm nor TS converges: in both cases sigma_g saturates at long times. In the 1/t algorithm, by contrast, sigma_g decays ~ 1/sqrt(t). Modified TS and 1/t procedures, in which f ~ 1/t^alpha, converge for alpha between 0 and 1. Convergence of flat-histogram methods has two essential facets: elimination of initial errors in g(E), and correction of the sampling noise accumulated during the process. For a simple example, we demonstrate analytically, using a Langevin equation, that both kinds of errors can be eliminated asymptotically if f ~ 1/t^alpha with 0 < alpha <= 1. Convergence is optimal for alpha = 1. For alpha <= 0 the sampling noise never decays, while for alpha > 1 the initial error is never completely eliminated.
Comment: 17 pages, 6 figures
Database: arXiv
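The featureless-landscape test described in the abstract can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: the system size L = 64, the number of sweeps, the per-sweep refinement schedule f = 1/t^alpha, and the helper name entropic_sampling are all assumptions made for demonstration. It estimates ln g(E) on a landscape with g(E) = 1 for all E, using the flat-histogram acceptance rule, and returns sigma_g as the sampling-error measure; setting alpha = 1 corresponds to the 1/t schedule, while alpha = 0 corresponds to a constant refinement parameter, for which sigma_g is expected to saturate rather than decay.

```python
import math
import random

def entropic_sampling(L=64, sweeps=10000, alpha=1.0, seed=0):
    """Flat-histogram sampling on a featureless landscape g(E) = 1.

    Energies are E = 0, ..., L-1 with exactly one state each. The
    refinement parameter follows f = 1/t**alpha, t counted in sweeps
    (an illustrative schedule; alpha = 1 mimics the 1/t algorithm,
    alpha = 0 a constant-f method). Returns sigma_g, the standard
    deviation of the normalized estimate of g over energies.
    """
    rng = random.Random(seed)
    ln_g = [0.0] * L              # running estimate of ln g(E)
    E = rng.randrange(L)
    for t in range(1, sweeps + 1):
        f = 1.0 / t ** alpha      # refinement parameter for this sweep
        for _ in range(L):        # one Monte Carlo sweep = L steps
            E_new = rng.randrange(L)
            # flat-histogram acceptance: prob min(1, g(E)/g(E_new)),
            # i.e. moves toward under-visited energies are favored
            if math.log(rng.random()) < ln_g[E] - ln_g[E_new]:
                E = E_new
            ln_g[E] += f          # update the visited energy
    # normalize: ln g is defined up to a constant, so subtract the mean
    mean_ln = sum(ln_g) / L
    g = [math.exp(x - mean_ln) for x in ln_g]
    g_mean = sum(g) / L
    return math.sqrt(sum((x - g_mean) ** 2 for x in g) / L)
```

With this sketch one can reproduce the qualitative contrast from the abstract: running it with alpha = 1.0 should yield a much smaller sigma_g than with alpha = 0.0, where the sampling noise does not decay.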