Direct Value Learning: a Preference-based Approach to Reinforcement Learning

Authors: Meunier, David; Deguchi, Yutaka; Akrour, Riad; Suzuki, Einoshin; Schoenauer, Marc; Sebag, Michèle
Contributors: Laboratoire de Recherche en Informatique (LRI); Université Paris-Sud - Paris 11 (UP11); CentraleSupélec; Centre National de la Recherche Scientifique (CNRS); Machine Learning and Optimisation (TAO); Inria Saclay - Ile de France; Institut National de Recherche en Informatique et en Automatique (Inria); Dept. Informatics, ISEE, Kyushu University [Fukuoka]; Johannes Fürnkranz and Eyke Hüllermeier; Schoenauer, Marc
Language: English
Year of publication: 2012
Subject:
Source: ECAI-12 Workshop on Preference Learning: Problems and Applications in AI, Aug 2012, Montpellier, France, pp. 42-47
www2.lirmm.fr/ecai2012/images/stories/ecai_doc/pdf/workshop/W30_PL12-Proceedings.pdf
Description: International audience; Learning by imitation, one of the most promising techniques for reinforcement learning in complex domains, critically depends on the human designer's ability to provide sufficiently many demonstrations of satisfactory quality. The approach presented in this paper, referred to as DIVA (Direct Value Learning for Reinforcement Learning), aims at addressing both limitations (the number and the quality of the demonstrations required) by exploiting simple experiments. The approach stems from a straightforward remark: while it is rather easy to set a robot in a target situation, the quality of its situation will naturally deteriorate under the actions of naive controllers. The demonstrations of such naive controllers can thus be used to directly learn a value function, through a preference learning approach. Under some conditions on the transition model, this value function suffices to define an optimal controller. The DIVA approach is experimentally demonstrated by teaching a robot to follow another robot. Importantly, the approach requires neither a robotic simulator nor any pattern-recognition primitive (e.g., seeing the other robot) to be provided.
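The core mechanism the abstract describes, ranking states along a deteriorating trajectory (earlier states are preferred to later ones) and fitting a value function to those rankings, can be illustrated with a short sketch. The code below is a minimal illustration under assumed choices (a linear value function, a hinge-style ranking loss, a hypothetical feature map `phi`), not the authors' implementation:

```python
import numpy as np

def pairwise_value_learning(trajectories, phi, dim, lr=0.01, epochs=100, margin=1.0):
    """Learn a linear value function V(s) = w . phi(s) from ordering constraints.

    trajectories: list of state sequences, each starting near the target
        situation and deteriorating under a naive controller, so that state
        s_i is preferred to state s_j whenever i < j.
    phi: feature map from a state to a numpy array of length `dim`
        (an assumed ingredient, not specified here).
    """
    w = np.zeros(dim)
    for _ in range(epochs):
        for traj in trajectories:
            feats = [phi(s) for s in traj]
            for i in range(len(feats) - 1):
                for j in range(i + 1, len(feats)):
                    # Preference constraint: V(s_i) > V(s_j) + margin.
                    diff = feats[i] - feats[j]
                    if w @ diff < margin:   # constraint violated
                        w += lr * diff      # hinge-loss gradient step
    return w

def greedy_controller(w, phi, candidate_next_states):
    """Pick the action index whose predicted next state has the highest value."""
    values = [w @ phi(s) for s in candidate_next_states]
    return int(np.argmax(values))
```

A greedy controller then simply moves toward the reachable next state with the highest learned value; this is where the paper's conditions on the transition model matter, since greedy ascent on the value function is only optimal when good states remain reachable in one step.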
Database: OpenAIRE