Showing 1 - 10 of 20 for search: '"Zhou, Aurick"'
Author:
Seff, Ari, Cera, Brian, Chen, Dian, Ng, Mason, Zhou, Aurick, Nayakanti, Nigamaa, Refaat, Khaled S., Al-Rfou, Rami, Sapp, Benjamin
Reliable forecasting of the future behavior of road agents is a critical component to safe planning in autonomous vehicles. Here, we represent continuous trajectories as sequences of discrete motion tokens and cast multi-agent motion prediction as a …
External link:
http://arxiv.org/abs/2309.16534
Author:
Nayakanti, Nigamaa, Al-Rfou, Rami, Zhou, Aurick, Goel, Kratarth, Refaat, Khaled S., Sapp, Benjamin
Motion forecasting for autonomous driving is a challenging task because complex driving scenarios result in a heterogeneous mix of static and dynamic inputs. It is an open problem how best to represent and fuse information about road geometry, lane c…
External link:
http://arxiv.org/abs/2207.05844
Author:
Zhou, Aurick, Levine, Sergey
When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates. While improving the robustness of neural networks is one promising approach to mitigate this issue, an appe…
External link:
http://arxiv.org/abs/2109.12746
Author:
Li, Kevin, Gupta, Abhishek, Reddy, Ashwin, Pong, Vitchyr, Zhou, Aurick, Yu, Justin, Levine, Sergey
Exploration in reinforcement learning is a challenging problem: in the worst case, the agent must search for high-reward states that could be hidden anywhere in the state space. Can we define a more tractable class of RL problems, where the agent is …
External link:
http://arxiv.org/abs/2107.07184
Author:
Zhou, Aurick, Levine, Sergey
While deep neural networks provide good performance for a range of challenging tasks, calibration and uncertainty estimation remain major challenges, especially under distribution shift. In this paper, we propose the amortized conditional normalized …
External link:
http://arxiv.org/abs/2011.02696
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously-collected, static dataset…
External link:
http://arxiv.org/abs/2006.04779
Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several ma…
External link:
http://arxiv.org/abs/1903.08254
Deep reinforcement learning (deep RL) holds the promise of automating the acquisition of complex controllers that can map sensory inputs directly to low-level actions. In the domain of robotic locomotion, deep RL could enable learning locomotion skil…
External link:
http://arxiv.org/abs/1812.11103
Author:
Haarnoja, Tuomas, Zhou, Aurick, Hartikainen, Kristian, Tucker, George, Ha, Sehoon, Tan, Jie, Kumar, Vikash, Zhu, Henry, Gupta, Abhishek, Abbeel, Pieter, Levine, Sergey
Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity an…
External link:
http://arxiv.org/abs/1812.05905
Author:
Haarnoja, Tuomas, Pong, Vitchyr, Zhou, Aurick, Dalal, Murtaza, Abbeel, Pieter, Levine, Sergey
Model-free deep reinforcement learning has been shown to exhibit good performance in domains ranging from video games to simulated robotic manipulation and locomotion. However, model-free methods are known to perform poorly when the interaction time …
External link:
http://arxiv.org/abs/1803.06773