A parallel implementation of Q-learning based on communication with cache

Author: Alicia Marcela Printista, Marcelo Luis Errecalde, Cecilia Inés Montoya
Language: English
Year of publication: 2002
Subject:
Source: Journal of Computer Science and Technology, Vol 1, Iss 06, 11 p. (2002)
Document type: article
ISSN: 1666-6046, 1666-6038
Description: Q-learning is a Reinforcement Learning method for solving sequential decision problems, where the utility of actions depends on a sequence of decisions and there is uncertainty about the dynamics of the environment the agent is situated in. This general framework has allowed Q-learning and other Reinforcement Learning methods to be applied to a broad spectrum of complex real-world problems such as robotics, industrial manufacturing, and games. Despite its interesting properties, Q-learning is a very slow method that requires a long training period to learn an acceptable policy. To solve, or at least reduce, this problem, we propose a parallel implementation model of Q-learning that uses a tabular representation and a cache-based communication scheme. This model is applied to a particular problem, and the results obtained with different processor configurations are reported. A brief discussion of the properties and current limitations of our approach is finally presented.
Database: Directory of Open Access Journals
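
For orientation, the following is a minimal Python sketch of the standard tabular Q-learning update referenced in the abstract. It is the plain sequential rule, not the paper's parallel, cache-based communication scheme (whose details are not given in this record); the state/action sizes and hyperparameters are illustrative assumptions.

import numpy as np

# Minimal sketch of the tabular Q-learning update (standard sequential rule).
# Problem dimensions and hyperparameters below are hypothetical.
N_STATES, N_ACTIONS = 16, 4      # assumed problem dimensions
ALPHA, GAMMA = 0.1, 0.95         # learning rate and discount factor

Q = np.zeros((N_STATES, N_ACTIONS))   # tabular representation of Q(s, a)

def q_update(s, a, r, s_next):
    """One step: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (target - Q[s, a])

# Example: reward 1.0 for taking action 2 in state 5 and landing in state 6.
q_update(5, 2, 1.0, 6)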