Game Theory and Multi-agent Reinforcement Learning

Author: Yann-Michaël De Hauwere, Peter Vrancx, Ann Nowé
Contributors: Wiering, M., van Otterlo, M., Computational Modelling
Language: English
Year of publication: 2012
Subject: Reinforcement Learning
Source: Adaptation, Learning, and Optimization, ISBN: 9783642276446
Vrije Universiteit Brussel
Description: Reinforcement Learning was originally developed for Markov Decision Processes (MDPs). It allows a single agent to learn a policy that maximizes a possibly delayed reward signal in a stochastic stationary environment, and it guarantees convergence to the optimal policy, provided that the agent can experiment sufficiently and the environment in which it operates is Markovian. However, when multiple agents apply reinforcement learning in a shared environment, the combined system may no longer be described by the MDP model. In such systems, the optimal policy of an agent depends not only on the environment, but also on the policies of the other agents. These situations arise naturally in a variety of domains, such as robotics, telecommunications, economics, distributed control, auctions, and traffic light control. In these domains, multi-agent learning is used either because of the complexity of the domain or because control is inherently decentralized. In such systems it is important that agents are capable of discovering good solutions to the problem at hand, either by coordinating with other learners or by competing with them. This chapter focuses on the application of reinforcement learning techniques in multi-agent systems. We describe a basic learning framework based on the economic research into game theory, and illustrate the additional complexity that arises in such systems. We also describe a representative selection of algorithms for the different areas of multi-agent reinforcement learning research.
Database: OpenAIRE
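
As a minimal illustration of the non-stationarity noted in the description (this sketch is not drawn from the chapter itself), the following Python code lets two independent Q-learners repeatedly play a common-interest matrix game. Each agent updates a stateless Q-value for its own action only, so from its perspective the other learner becomes part of a non-stationary environment. The payoff matrix, learning rate, and exploration rate are illustrative assumptions, not values from the chapter.

```python
import random

# Payoff matrix for a 2-player, 2-action coordination game (assumed example).
# Rows index agent 1's action, columns index agent 2's action; both agents
# receive the same joint reward.
PAYOFF = [
    [10, 0],   # agent 1 plays action 0
    [0, 10],   # agent 1 plays action 1
]

ACTIONS = [0, 1]
ALPHA = 0.1      # learning rate (assumed)
EPSILON = 0.1    # exploration rate (assumed)
EPISODES = 5000

def select_action(q_values, epsilon):
    """Epsilon-greedy action selection over a stateless Q-table."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[a])

# Each agent keeps its own Q-value per action; neither observes the other's
# action or Q-values (independent learners).
q1 = [0.0, 0.0]
q2 = [0.0, 0.0]

for _ in range(EPISODES):
    a1 = select_action(q1, EPSILON)
    a2 = select_action(q2, EPSILON)
    reward = PAYOFF[a1][a2]  # joint reward in the common-interest game
    # Stateless Q-learning update: each agent treats the other learner as
    # part of the (non-stationary) environment.
    q1[a1] += ALPHA * (reward - q1[a1])
    q2[a2] += ALPHA * (reward - q2[a2])

print("Agent 1 Q-values:", q1)
print("Agent 2 Q-values:", q2)
```

Running the sketch typically ends with both agents favouring one of the two coordinated joint actions, but which one depends on the random exploration; this kind of equilibrium-selection problem is an instance of the additional complexity mentioned in the description above.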