Showing 1 - 10 of 40 for search: '"Rhinehart, Nicholas"'
How can a robot safely navigate around people exhibiting complex motion patterns? Reinforcement Learning (RL) or Deep RL (DRL) in simulation holds some promise, although much prior work relies on simulators that fail to precisely capture the nuances…
External link:
http://arxiv.org/abs/2410.10646
Author:
Yang, Jiezhi, Desai, Khushi, Packer, Charles, Bhatia, Harshil, Rhinehart, Nicholas, McAllister, Rowan, Gonzalez, Joseph
We propose CARFF, a method for predicting future 3D scenes given past observations. Our method maps 2D ego-centric images to a distribution over plausible 3D latent scene configurations and predicts the evolution of hypothesized scenes through time.
External link:
http://arxiv.org/abs/2401.18075
Author:
Rhinehart, Nicholas, Wang, Jenny, Berseth, Glen, Co-Reyes, John D., Hafner, Danijar, Finn, Chelsea, Levine, Sergey
Humans and animals explore their environment and acquire useful skills even in the absence of clear goals, exhibiting intrinsic motivation. The study of intrinsic motivation in artificial agents is concerned with the following question: what is a goo…
External link:
http://arxiv.org/abs/2112.03899
Author:
Dashora, Nitish, Shin, Daniel, Shah, Dhruv, Leopold, Henry, Fan, David, Agha-Mohammadi, Ali, Rhinehart, Nicholas, Levine, Sergey
Geometric methods for solving open-world off-road navigation tasks, by learning occupancy and metric maps, provide good generalization but can be brittle in outdoor environments that violate their assumptions (e.g., tall grass). Learning-based method…
External link:
http://arxiv.org/abs/2111.10948
Author:
Fickinger, Arnaud, Jaques, Natasha, Parajuli, Samyak, Chang, Michael, Rhinehart, Nicholas, Berseth, Glen, Russell, Stuart, Levine, Sergey
Unsupervised reinforcement learning (RL) studies how to leverage environment statistics to learn useful behaviors without the cost of reward engineering. However, a central challenge in unsupervised RL is to extract behaviors that meaningfully affect…
External link:
http://arxiv.org/abs/2107.07394
Author:
Rhinehart, Nicholas, He, Jeff, Packer, Charles, Wright, Matthew A., McAllister, Rowan, Gonzalez, Joseph E., Levine, Sergey
Humans have a remarkable ability to make decisions by accurately reasoning about future events, including the future behaviors and states of mind of other agents. Consider driving a car through a busy intersection: it is necessary to reason about the…
External link:
http://arxiv.org/abs/2104.10558
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments. At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory…
External link:
http://arxiv.org/abs/2104.05859
We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform. Learning provides an appealing alternative to conventional methods for robotic navigation: instead of rea…
External link:
http://arxiv.org/abs/2012.09812
Reinforcement learning provides a general framework for flexible decision making and control, but requires extensive data collection for each new task that an agent needs to learn. In other machine learning fields, such as natural language processing…
External link:
http://arxiv.org/abs/2011.10024
Author:
Bharadhwaj, Homanga, Kumar, Aviral, Rhinehart, Nicholas, Levine, Sergey, Shkurti, Florian, Garg, Animesh
Safe exploration presents a major challenge in reinforcement learning (RL): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions, while still enabling tria…
External link:
http://arxiv.org/abs/2010.14497