Distributed Bayesian optimization of deep reinforcement learning algorithms
Authors: Ramakrishnan Kannan, M. Todd Young, Arvind Ramanathan, Jacob Hinkle
Year of publication: 2020
Subject: Hyperparameter; Computer Networks and Communications; Computer science; Deep learning; Bayesian optimization; Supervised learning; Sample (statistics); Bayesian inference; Theoretical Computer Science; Artificial Intelligence; Hardware and Architecture; Hyperparameter optimization; Reinforcement learning; Algorithm; Software
Source: Journal of Parallel and Distributed Computing 139:43–52
ISSN: 0743-7315
DOI: 10.1016/j.jpdc.2019.07.008
Description: Significant strides have been made in supervised learning settings thanks to the successful application of deep learning. More recently, work has brought the techniques of deep learning to bear on sequential decision processes in the area of deep reinforcement learning (DRL). Currently, little is known about hyperparameter optimization for DRL algorithms. Because DRL algorithms are computationally intensive to train and are known to be sample inefficient, optimizing their hyperparameters poses significant challenges to established techniques. We provide an open-source, distributed Bayesian model-based optimization algorithm, HyperSpace, and show that it consistently outperforms standard hyperparameter optimization techniques across three DRL algorithms. (An illustrative sketch of this kind of model-based search follows the record below.)
Database: OpenAIRE
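The description above refers to Bayesian model-based (surrogate-driven) hyperparameter optimization for DRL. Below is a minimal sketch of that general idea using scikit-optimize's gp_minimize; it does not use the HyperSpace API, and the search dimensions and the placeholder objective (a stand-in for training and evaluating a DRL agent) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of Bayesian model-based hyperparameter optimization.
# Uses scikit-optimize (skopt), not the HyperSpace library described in the paper.
from skopt import gp_minimize
from skopt.space import Real, Integer

# Example hyperparameter search space for a DRL algorithm (illustrative only).
space = [
    Real(1e-5, 1e-2, prior="log-uniform", name="learning_rate"),
    Real(0.90, 0.999, name="discount_factor"),
    Integer(32, 512, name="batch_size"),
]

def objective(params):
    """Hypothetical stand-in for training a DRL agent and evaluating it.

    In a real run, this would train an agent with the given hyperparameters
    and return the negated mean evaluation return (gp_minimize minimizes).
    Here a synthetic score keeps the sketch self-contained and runnable.
    """
    learning_rate, discount_factor, batch_size = params
    mean_return = (-(learning_rate - 1e-3) ** 2 * 1e6
                   + 10.0 * discount_factor
                   - 0.001 * batch_size)
    return -mean_return

# Gaussian-process-based sequential model-based optimization over the space.
result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best hyperparameters:", result.x)
print("best objective value:", result.fun)
```

In the distributed setting described in the paper, many such model-based searches would run in parallel rather than as the single sequential loop shown here; the sketch only illustrates the underlying Bayesian optimization step.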