Hyperparameter Auto-tuning in Self-Supervised Robotic Learning

Authors: Huang, Jiancong; Rojas, Juan; Zimmer, Matthieu; Wu, Hongmin; Guan, Yisheng; Weng, Paul
Year of publication: 2020
Source: IEEE Robotics and Automation Letters, Volume 6, Issue 2, pp. 3537-3544, April 2021
Document type: Working Paper
DOI: 10.1109/LRA.2021.3064509
Description: Policy optimization in reinforcement learning requires the selection of numerous hyperparameters across different environments. Setting them incorrectly can degrade optimization performance, leading notably to insufficient or redundant learning. Insufficient learning (due to convergence to local optima) results in under-performing policies, whilst redundant learning wastes time and resources. These effects are further exacerbated when a single policy is used to solve multi-task learning problems. Observing that the Evidence Lower Bound (ELBO) used in Variational Auto-Encoders correlates with the diversity of image samples, we propose an auto-tuning technique based on the ELBO for self-supervised reinforcement learning. Our approach can auto-tune three hyperparameters: the replay buffer size, the number of policy gradient updates during each epoch, and the number of exploration steps during each epoch. We use a state-of-the-art self-supervised robot learning framework (Reinforcement Learning with Imagined Goals (RIG) with Soft Actor-Critic) as the baseline for experimental verification. Experiments show that our method can auto-tune online and yields the best performance at a fraction of the time and computational resources. Code, video, and appendix for simulated and real-robot experiments can be found at the project page \url{www.JuanRojas.net/autotune}.
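The description above states that the ELBO correlates with sample diversity and is used to tune three hyperparameters online. A minimal sketch of such a rule, assuming a simple moving-average ELBO plateau test with illustrative thresholds (the paper's exact update rule, window sizes, and limits are not given here and are assumptions):

```python
def autotune(elbo_history, buffer_size, n_updates, n_explore,
             window=5, eps=0.01):
    """Hypothetical ELBO-based tuning rule (illustrative, not the
    paper's exact algorithm).

    A rising ELBO is read as the VAE still encountering diverse,
    novel samples, so training and exploration effort are kept as-is;
    a plateau is read as redundant learning, so effort is reduced.
    """
    if len(elbo_history) < 2 * window:
        # Not enough data to estimate a trend yet.
        return buffer_size, n_updates, n_explore
    recent = sum(elbo_history[-window:]) / window
    previous = sum(elbo_history[-2 * window:-window]) / window
    improvement = (recent - previous) / (abs(previous) + 1e-8)
    if improvement < eps:
        # ELBO plateaued: halve effort, with assumed lower bounds.
        n_updates = max(1, n_updates // 2)
        n_explore = max(1, n_explore // 2)
        buffer_size = max(1000, buffer_size // 2)
    return buffer_size, n_updates, n_explore
```

In this sketch the ELBO values would come from evaluating the RIG framework's VAE on recent replay samples; only the relative trend is used, so the absolute ELBO scale does not matter.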
Comment: 8 pages, 6 figures, Published in IEEE Robotics and Automation Letters; Presented at The 2021 International Conference on Robotics and Automation (ICRA 2021); Presented at Deep RL Workshop, NeurIPS 2020
Database: arXiv