Dynamic Content Update for Wireless Edge Caching via Deep Reinforcement Learning
Author: | Kui Cai, Long Shi, Fuli Yang, Pingyang Wu, Ming Ding, Jun Li |
---|---|
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Computer Science - Information Theory (cs.IT); Hardware_MEMORYSTRUCTURES; Computer science; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 020206 networking & telecommunications; Computer Science Applications; Modeling and Simulation; Electrical and Electronic Engineering; Reinforcement learning; Markov decision process; Cache; Cache algorithms; Auxiliary memory; Server; Computer network |
Source: | IEEE Communications Letters. 23:1773-1777 |
ISSN: | 2373-7891, 1089-7798 |
DOI: | 10.1109/lcomm.2019.2931688 |
Description: | This letter studies a basic wireless caching network where a source server is connected to a cache-enabled base station (BS) that serves multiple requesting users. A critical problem is how to improve the cache hit rate under dynamic content popularity. To solve this problem, the primary contribution of this work is to develop a novel dynamic content update strategy with the aid of deep reinforcement learning. Considering that the BS is unaware of content popularities, the proposed strategy dynamically updates the BS cache according to the time-varying requests and the contents currently cached at the BS. Towards this end, we model the problem of cache update as a Markov decision process and put forth an efficient algorithm that builds upon a long short-term memory (LSTM) network and an external memory to enhance the decision-making ability of the BS. Simulation results show that the proposed algorithm achieves not only a higher average reward than the deep Q-network (DQN), but also a higher cache hit rate than existing replacement policies such as least recently used (LRU), first-in first-out (FIFO), and DQN-based algorithms. Accepted by IEEE CL. (A minimal illustrative sketch of the cache-update MDP appears after this record.) |
Database: | OpenAIRE |
External link: |
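
The description above frames BS cache update as a Markov decision process whose state reflects the incoming request and the currently cached contents, with cache hits as the reward. The snippet below is a minimal Python sketch of such an environment under simplifying assumptions: the class name `CacheUpdateEnv`, the Zipf-like popularity model, the state/action/reward shapes, and all parameter values are illustrative choices, not the authors' exact formulation or code, and the LSTM/external-memory agent from the letter is not reproduced here.

```python
# Minimal sketch of the cache-update problem framed as an MDP.
# All names and modeling choices below are illustrative assumptions.
import random

import numpy as np


class CacheUpdateEnv:
    """Toy single-BS caching environment.

    State  : (current request, contents currently cached at the BS)
    Action : index of the cached item to replace with the request, or
             `cache_size` to leave the cache unchanged
    Reward : 1 if the request is already cached (cache hit), else 0
    """

    def __init__(self, num_contents=50, cache_size=5, zipf_exponent=1.0, seed=0):
        self.num_contents = num_contents
        self.cache_size = cache_size
        self.rng = np.random.default_rng(seed)
        # Assumed Zipf-like popularity profile, unknown to the agent.
        ranks = np.arange(1, num_contents + 1, dtype=float)
        self.popularity = ranks ** (-zipf_exponent)
        self.popularity /= self.popularity.sum()
        self.cache = list(self.rng.choice(num_contents, cache_size, replace=False))
        self.request = self._draw_request()

    def _draw_request(self):
        return int(self.rng.choice(self.num_contents, p=self.popularity))

    def state(self):
        return (self.request, tuple(self.cache))

    def step(self, action):
        reward = 1.0 if self.request in self.cache else 0.0
        # On a miss, the BS may fetch the request from the source server and
        # evict one cached item (action < cache_size), or keep the cache as is.
        if self.request not in self.cache and action < self.cache_size:
            self.cache[action] = self.request
        self.request = self._draw_request()
        return self.state(), reward


if __name__ == "__main__":
    env = CacheUpdateEnv()
    hits = 0.0
    for _ in range(10_000):
        # Random update policy just to exercise the environment;
        # the letter replaces this with an LSTM-based DRL agent.
        _, r = env.step(random.randrange(env.cache_size + 1))
        hits += r
    print(f"cache hit rate under a random policy: {hits / 10_000:.3f}")
```

Running the script reports the hit rate of a random update policy; baselines such as LRU or FIFO, or a learned DRL policy as in the letter, would be evaluated by substituting the action-selection rule in the loop.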