Author:
Terence, Ng Wen Zheng; Jianda, Chen
Publication Year:
2024
Subject:
Document Type:
Working Paper
DOI:
10.1007/978-3-031-72341-4
Description:
This paper presents Dual Action Policy (DAP), a novel approach to addressing the dynamics mismatch inherent in the sim-to-real gap of reinforcement learning. DAP uses a single policy to predict two sets of actions: one for maximizing task rewards in simulation and another specifically for domain adaptation via reward adjustments. This decoupling makes it easier to maximize the overall reward in the source domain during training. Additionally, DAP incorporates uncertainty-based exploration during training to enhance agent robustness. Experimental results demonstrate DAP's effectiveness in bridging the sim-to-real gap, outperforming baselines on challenging tasks in simulation; further improvement is achieved by incorporating uncertainty estimation.
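The abstract describes a single policy that outputs two sets of actions from one observation. As a rough illustration only, not the authors' implementation, the following minimal PyTorch sketch shows what such a dual-head actor could look like. The class name DualActionPolicy, the layer sizes, and the tanh squashing are assumptions made for the sake of the example.

```python
import torch
import torch.nn as nn


class DualActionPolicy(nn.Module):
    """Illustrative dual-head actor (hypothetical, not the paper's code):
    a shared trunk feeds two action heads, one aimed at task reward,
    one producing the action used for domain adaptation."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Head 1: action intended to maximize task reward in simulation.
        self.task_head = nn.Linear(hidden, act_dim)
        # Head 2: action associated with domain adaptation, which the
        # paper describes as being trained via reward adjustments.
        self.adapt_head = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        task_action = torch.tanh(self.task_head(h))
        adapt_action = torch.tanh(self.adapt_head(h))
        return task_action, adapt_action


if __name__ == "__main__":
    # Dimensions are arbitrary placeholders for a continuous-control task.
    policy = DualActionPolicy(obs_dim=17, act_dim=6)
    obs = torch.randn(1, 17)
    a_task, a_adapt = policy(obs)
    print(a_task.shape, a_adapt.shape)  # torch.Size([1, 6]) for each head
```

Sharing a trunk while splitting the heads is one plausible way to realize the decoupling the abstract mentions: each head can be optimized against its own reward term without the two objectives competing for a single output.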
Database:
arXiv
External Link: