Reinforcement Learning with a Gaussian mixture model
Author: | Enric Celaya, Alejandro Agostini |
---|---|
Contributors: | Institut de Robòtica i Informàtica Industrial, Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents |
Language: | English |
Year of publication: | 2010 |
Subject: |
Reinforcement learning; Function approximation; Mixture model; Gaussian process; Machine learning; Learning (artificial intelligence); Generalisation (artificial intelligence); Mathematical optimization; Iterative method; Iterative and incremental development; Approximation algorithm; Parametric statistics; Mathematics; Aprenentatge automàtic; Informàtica::Intel·ligència artificial::Aprenentatge automàtic [Àrees temàtiques de la UPC]; Cybernetics::Artificial intelligence::Learning (artificial intelligence) [Classificació INSPEC] |
Source: | Digital.CSIC (Repositorio Institucional del CSIC); Recercat (Dipòsit de la Recerca de Catalunya); IJCNN; UPCommons (Portal del coneixement obert de la UPC); Universitat Politècnica de Catalunya (UPC) |
Description: | Paper presented at IJCNN 2010, held in Barcelona, 18-23 July. Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than that of storing and reusing it; this, however, comes at the expense of increased computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred over parametric ones due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they make it possible to quantify the uncertainty of the estimate at each point. In this paper, we propose a new approach to RL in continuous domains based on probability density estimation. Our method combines the best features of the previous methods: it is non-parametric and provides an estimate of the variance of the approximated function at any point of the domain. In addition, it is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general. This work was partially supported by the Spanish Ministry of Science and Innovation under project MIPRCV, Consolider Ingenio 2010 (CSD2007-00018). |
Database: | OpenAIRE |
External link: |
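The abstract describes estimating both the value and its uncertainty from a density model over the data. The following is a minimal sketch of that general idea, not the authors' algorithm: a Gaussian mixture is fitted over the joint (state, value) space, and the value estimate and its variance are read off as the conditional mean and conditional variance of the value given the state. The batch scikit-learn fit, the toy 1-D data, and all function names here are illustrative assumptions; the paper's method updates the density model incrementally.

```python
# Minimal sketch (not the paper's algorithm): Gaussian mixture regression over the
# joint (state, value) space. The conditional mean E[q | s] plays the role of the
# value estimate and the conditional variance quantifies its uncertainty.
# The batch GaussianMixture fit stands in for an incremental density update;
# the toy 1-D data and all names below are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture


def conditional_value(gmm, s, d_s):
    """Conditional mean and variance of the last coordinates given the first d_s."""
    s = np.atleast_1d(s)
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    log_resp, cond_means, cond_vars = [], [], []
    for k in range(len(w)):
        mu_s, mu_q = means[k, :d_s], means[k, d_s:]
        S_ss = covs[k, :d_s, :d_s]          # state block
        S_qs = covs[k, d_s:, :d_s]          # value-state cross block
        S_qq = covs[k, d_s:, d_s:]          # value block
        diff = s - mu_s
        sol = np.linalg.solve(S_ss, diff)
        # log of w_k * N(s; mu_s, S_ss), dropping the constant shared by all components
        log_resp.append(np.log(w[k]) - 0.5 * (np.linalg.slogdet(S_ss)[1] + diff @ sol))
        cond_means.append(mu_q + S_qs @ sol)                       # Gaussian conditioning
        cond_vars.append(S_qq - S_qs @ np.linalg.solve(S_ss, S_qs.T))
    log_resp = np.array(log_resp)
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()                                             # responsibilities of s
    m = sum(r * mk for r, mk in zip(resp, cond_means))
    # Law of total variance across mixture components.
    v = sum(r * (vk + np.outer(mk - m, mk - m))
            for r, mk, vk in zip(resp, cond_means, cond_vars))
    return m, v


# Toy example: 1-D "state" s with noisy "value" q = sin(s).
rng = np.random.default_rng(0)
s_samples = rng.uniform(-3.0, 3.0, size=(500, 1))
q_samples = np.sin(s_samples) + 0.1 * rng.standard_normal((500, 1))
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(np.hstack([s_samples, q_samples]))

mean, var = conditional_value(gmm, s=np.array([1.0]), d_s=1)
print("value estimate:", mean, "variance estimate:", var)
```

The conditional moments follow from standard Gaussian conditioning within each component and the law of total variance across components, which is what lets a single density model return both a prediction and a pointwise uncertainty.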