Showing 1 - 10 of 58 for search: '"Xiong, Zikang"'
Author:
Wu, Yi, Xiong, Zikang, Hu, Yiran, Iyengar, Shreyash S., Jiang, Nan, Bera, Aniket, Tan, Lin, Jagannathan, Suresh
Despite significant advancements in large language models (LLMs) that enhance robot agents' understanding and execution of natural language (NL) commands, ensuring the agents adhere to user-specified constraints remains challenging, particularly for…
External link:
http://arxiv.org/abs/2409.19471
Author:
Feng, Shiwei, Chen, Xuan, Cheng, Zhiyuan, Xiong, Zikang, Gao, Yifei, Cheng, Siyuan, Kate, Sayali, Zhang, Xiangyu
Robot navigation is increasingly crucial across applications like delivery services and warehouse management. The integration of Reinforcement Learning (RL) with classical planning has given rise to meta-planners that combine the adaptability of RL…
External link:
http://arxiv.org/abs/2409.10832
Author:
Xiong, Zikang, Jagannathan, Suresh
Data-driven neural path planners are attracting increasing interest in the robotics community. However, their neural network components typically come as black boxes, obscuring their underlying decision-making processes. Their black-box nature exposes…
External link:
http://arxiv.org/abs/2403.18256
Synthesizing planning and control policies in robotics is a fundamental task, further complicated by factors such as complex logic specifications and high-dimensional robot dynamics. This paper presents a novel reinforcement learning approach to solve…
External link:
http://arxiv.org/abs/2303.01346
Neural network policies trained using Deep Reinforcement Learning (DRL) are well-known to be susceptible to adversarial attacks. In this paper, we consider attacks manifesting as perturbations in the observation space managed by the external environment…
External link:
http://arxiv.org/abs/2206.07188
Model-free Deep Reinforcement Learning (DRL) controllers have demonstrated promising results on various challenging non-linear control tasks. While a model-free DRL algorithm can solve unknown dynamics and high-dimensional problems, it lacks safety…
External link:
http://arxiv.org/abs/2203.01190
Author:
Xiong, Zikang, Jagannathan, Suresh
There has been significant recent interest in devising verification techniques for learning-enabled controllers (LECs) that manage safety-critical systems. Given the opacity and lack of interpretability of the neural policies that govern the behavior…
External link:
http://arxiv.org/abs/2104.10219
Compared with the fixed-run designs, the sequential adaptive designs (SAD) are thought to be more efficient and effective. Efficient global optimization (EGO) is one of the most popular SAD methods for expensive black-box optimization problems…
External link:
http://arxiv.org/abs/2010.10698
Learning-enabled controllers used in cyber-physical systems (CPS) are known to be susceptible to adversarial attacks. Such attacks manifest as perturbations to the states generated by the controller's environment in response to its actions. We consider…
External link:
http://arxiv.org/abs/2006.06861
Despite the tremendous advances that have been made in the last decade on developing useful machine-learning applications, their wider adoption has been hindered by the lack of strong assurance guarantees that can be made about their behavior. In this…
External link:
http://arxiv.org/abs/1907.07273