Showing 1 - 10 of 40
for search: '"Zang, Hongyu"'
Published in:
ICML 2024
Visual Model-Based Reinforcement Learning (MBRL) promises to encapsulate an agent's knowledge about the underlying dynamics of the environment, enabling the learning of a world model as a useful planner. However, top MBRL agents such as Dreamer often struggle …
External link:
http://arxiv.org/abs/2405.06263
Author:
Zang, Hongyu, Li, Xin, Zhang, Leiji, Liu, Yang, Sun, Baigui, Islam, Riashat, Combes, Remi Tachet des, Laroche, Romain
While bisimulation-based approaches hold promise for learning robust state representations for Reinforcement Learning (RL) tasks, their efficacy in offline RL tasks has not been up to par. In some instances, their performance has even significantly …
External link:
http://arxiv.org/abs/2310.17139
Author:
Liu, Chen, Zang, Hongyu, Li, Xin, Heng, Yong, Wang, Yifei, Fang, Zhen, Wang, Yisen, Wang, Mingzhong
Image-based Reinforcement Learning is a practical yet challenging task. A major hurdle lies in extracting control-centric representations while disregarding irrelevant information. While approaches that follow the bisimulation principle exhibit …
External link:
http://arxiv.org/abs/2310.16655
Author:
Islam, Riashat, Zang, Hongyu, Tomar, Manan, Didolkar, Aniket, Islam, Md Mofijul, Arnob, Samin Yeasar, Iqbal, Tariq, Li, Xin, Goyal, Anirudh, Heess, Nicolas, Lamb, Alex
Several self-supervised representation learning methods have been proposed for reinforcement learning (RL) with rich observations. For real-world applications of RL, recovering underlying latent states is crucial, particularly when sensory inputs …
External link:
http://arxiv.org/abs/2212.13835
Author:
Zang, Hongyu, Li, Xin, Yu, Jie, Liu, Chen, Islam, Riashat, Combes, Remi Tachet Des, Laroche, Romain
Offline reinforcement learning (RL) struggles in environments with rich and noisy inputs, where the agent only has access to a fixed dataset without environment interactions. Past works have proposed common workarounds based on the pre-training of …
External link:
http://arxiv.org/abs/2211.00863
Author:
Islam, Riashat, Zang, Hongyu, Goyal, Anirudh, Lamb, Alex, Kawaguchi, Kenji, Li, Xin, Laroche, Romain, Bengio, Yoshua, Combes, Remi Tachet Des
Goal-conditioned reinforcement learning (RL) is a promising direction for training agents capable of solving multiple tasks and reaching a diverse set of objectives. How to specify and ground these goals in such a way that we …
External link:
http://arxiv.org/abs/2211.00247
Author:
Islam, Riashat, Tomar, Manan, Lamb, Alex, Efroni, Yonathan, Zang, Hongyu, Didolkar, Aniket, Misra, Dipendra, Li, Xin, van Seijen, Harm, Combes, Remi Tachet des, Langford, John
Learning to control an agent from data collected offline in a rich pixel-based visual observation space is vital for real-world applications of reinforcement learning (RL). A major challenge in this setting is the presence of input information that …
External link:
http://arxiv.org/abs/2211.00164
This work explores how to learn robust and generalizable state representations from image-based observations with deep reinforcement learning methods. Addressing the computational complexity, stringent assumptions, and representation collapse challenges …
External link:
http://arxiv.org/abs/2112.15303
Author:
Yang, Baoyan, Zhao, Xiaoyue, Wang, Ting, Zhong, Zhuzhu, Zhang, Yan, Su, Shaoqing, Wang, Junyi, Zhu, Mengmeng, Zang, Hongyu
Published in:
In Acta Psychologica, Vol. 241, November 2023
In this paper, we explore a new approach to automated chess commentary generation, which aims to generate chess commentary texts in different categories (e.g., description, comparison, planning). We introduce a neural chess engine into text …
External link:
http://arxiv.org/abs/1909.10413