Showing 1-10 of 42 results for the search: "GU Xinyang"
Author:
XUE Weiwei, SUN Diandong, HU Junrui, GU Xinyang, DU Zhaoxin, CUI Li, GUO Yan, HAO Xulong, GAO Fei
Published in:
Cailiao gongcheng, Vol 52, Iss 11, Pp 83-90 (2024)
The in-situ EBSD analysis method was used to systematically study the effect of retained austenite characteristics on the phase transformation behavior of ferritic stainless steel after the quenching and partitioning (Q&P) process. The results show …
External link:
https://doaj.org/article/6e7683d88ac34e79923c50578f580870
Author:
Gu, Xinyang, Wang, Yen-Jen, Zhu, Xiang, Shi, Chengming, Guo, Yanjiang, Liu, Yichen, Chen, Jianyu
Humanoid robots, with their human-like skeletal structure, are especially suited for tasks in human-centric environments. However, this structure is accompanied by additional challenges in locomotion controller design, especially in complex real-world …
External link:
http://arxiv.org/abs/2408.14472
Published in:
ICRA 2024 Workshop on Agile Robotics
Humanoid-Gym is an easy-to-use reinforcement learning (RL) framework based on Nvidia Isaac Gym, designed to train locomotion skills for humanoid robots, emphasizing zero-shot transfer from simulation to the real-world environment. Humanoid-Gym also …
External link:
http://arxiv.org/abs/2404.05695
Academic article
This result cannot be displayed to unauthenticated users; sign-in is required to view it.
Deep Q Network (DQN) is a very successful algorithm, yet the inherent problem of reinforcement learning, i.e. the exploration-exploitation balance, remains. In this work, we introduce entropy regularization into DQN and propose SQN. We find that the backup equation … (a sketch of the standard soft backup follows this record's link).
External link:
http://arxiv.org/abs/1912.10891
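For context on the backup mentioned in the abstract above: in maximum-entropy RL, the hard max of the standard DQN target is replaced by a temperature-weighted log-sum-exp. A minimal sketch of that soft Bellman backup, assuming SQN follows the usual soft Q-learning formulation (the temperature \alpha and the exact form below are the standard ones, not quoted from the paper):

    Q(s,a) \leftarrow r(s,a) + \gamma\, \mathbb{E}_{s'}\!\left[ \alpha \log \sum_{a'} \exp\big( Q(s',a')/\alpha \big) \right]

As \alpha \to 0 the log-sum-exp collapses to \max_{a'} Q(s',a'), recovering the ordinary DQN target; a larger \alpha keeps the target soft and rewards stochastic behavior, which is how entropy regularization addresses the exploration-exploitation balance the abstract refers to.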
Entropy regularization is an important idea in reinforcement learning, with great success in recent algorithms like Soft Q Network (SQN) and Soft Actor-Critic (SAC1). In this work, we extend this idea into the on-policy realm. We propose the soft policy … (a sketch of the maximum-entropy objective follows this record's link).
External link:
http://arxiv.org/abs/1912.01557
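For reference, the objective that entropy-regularized ("soft") on-policy methods maximize augments the expected return with a policy-entropy bonus. A standard statement of the maximum-entropy objective (notation assumed, not quoted from the paper):

    J(\pi) = \mathbb{E}_{\pi}\!\left[ \sum_{t} \gamma^{t} \big( r(s_t, a_t) + \alpha\, \mathcal{H}\big( \pi(\cdot \mid s_t) \big) \big) \right]

Here \alpha trades off reward against entropy; differentiating J with respect to the policy parameters yields a policy-gradient update with an extra entropy term, the on-policy counterpart of the soft value backups used by SQN and SAC.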
Nowadays, model-free reinforcement learning algorithms have achieved remarkable performance on many decision-making and control tasks, but high sample complexity and low sample efficiency still hinder the wide use of model-free reinforcement learning …
External link:
http://arxiv.org/abs/1908.11494
Published in:
Computers and Electronics in Agriculture, Vol. 204 (January 2023)
Published in:
Journal of Dispersion Science & Technology, 2024, Vol. 45, Issue 9, pp. 1793-1803
Academic article
This result cannot be displayed to unauthenticated users; sign-in is required to view it.