Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence Modeling

Authors: Huang, Sili; Hu, Jifeng; Yang, Zhejian; Yang, Liwei; Luo, Tao; Chen, Hechang; Sun, Lichao; Yang, Bo
Publication Year: 2024
Subject:
Document Type: Working Paper
Description: Recent works have shown the remarkable superiority of transformer models in reinforcement learning (RL), where the decision-making problem is formulated as sequential generation. Transformer-based agents can achieve self-improvement in online environments when provided with task contexts, such as multiple trajectories, a paradigm called in-context RL. However, due to the quadratic computational complexity of attention in transformers, current in-context RL methods suffer from huge computational costs as the task horizon increases. In contrast, the Mamba model is renowned for its efficient processing of long-term dependencies, which provides an opportunity for in-context RL to solve tasks that require long-term memory. To this end, we first implement Decision Mamba (DM) by replacing the transformer backbone of the Decision Transformer (DT). Then, we propose Decision Mamba-Hybrid (DM-H), which combines the merits of transformers and Mamba: high-quality prediction and long-term memory, respectively. Specifically, DM-H first generates high-value sub-goals from long-term memory through the Mamba model, then uses these sub-goals to prompt the transformer for high-quality predictions. Experimental results demonstrate that DM-H achieves state-of-the-art results on both long- and short-term tasks, including the D4RL, Grid World, and Tmaze benchmarks. Regarding efficiency, online testing of DM-H on the long-term task is 28$\times$ faster than transformer-based baselines.
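The abstract's hybrid design can be illustrated with a minimal sketch: a recurrent long-horizon branch compresses the full trajectory into a few sub-goal tokens, which then prompt a short-context transformer, so quadratic attention is only paid over a short window. Since Mamba is not part of core PyTorch, an `nn.GRU` stands in for the selective state-space backbone here; all module names, dimensions, and the sub-goal interface are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DecisionMambaHybridSketch(nn.Module):
    """Hypothetical sketch of the DM-H idea: long-term memory -> sub-goals -> transformer."""

    def __init__(self, d_model=128, n_subgoals=4, act_dim=6):
        super().__init__()
        # Long-term branch: a GRU stands in for the Mamba selective SSM,
        # which processes the full history in linear time.
        self.long_term = nn.GRU(d_model, d_model, batch_first=True)
        # Project the long-horizon summary into a few sub-goal tokens.
        self.to_subgoals = nn.Linear(d_model, n_subgoals * d_model)
        self.n_subgoals, self.d_model = n_subgoals, d_model
        # Short-term branch: transformer over [sub-goals; recent tokens],
        # so attention cost is quadratic only in the short window.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.short_term = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, act_dim)

    def forward(self, long_ctx, recent_ctx):
        # long_ctx:   (B, T_long, d_model)   full trajectory history
        # recent_ctx: (B, T_recent, d_model) recent window only
        _, h = self.long_term(long_ctx)              # h: (1, B, d_model)
        subgoals = self.to_subgoals(h[-1])           # (B, n_subgoals * d_model)
        subgoals = subgoals.view(-1, self.n_subgoals, self.d_model)
        tokens = torch.cat([subgoals, recent_ctx], dim=1)
        out = self.short_term(tokens)
        # Predict actions only at the recent-context positions.
        return self.action_head(out[:, self.n_subgoals:])

model = DecisionMambaHybridSketch()
actions = model(torch.randn(2, 512, 128), torch.randn(2, 16, 128))
print(actions.shape)  # torch.Size([2, 16, 6])
```

The design choice this sketch highlights is that the expensive attention operates over roughly `n_subgoals + T_recent` tokens rather than the full horizon `T_long`, which is what makes the reported speedup over transformer-only baselines plausible as the task horizon grows.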
Comment: arXiv admin note: text overlap with arXiv:2405.20692; text overlap with arXiv:2305.16554, arXiv:2210.14215 by other authors
Database: arXiv